
PACT – Part B

U.S. Department of Health and Human Services

Administration for Children and Families

Office of Planning, Research, and Evaluation

Aerospace Building, 7th Floor West

901 D Street, SW

Washington, DC 20447

Project Officers: Nancye Campbell and Seth Chamberlain


Parents and Children Together (PACT) Evaluation (0970-0403):

OMB Supporting Statement for the Baseline Data Collection and Study MIS

Part B: Collection of Information Involving Statistical Methods

June 2012; Updated October 2012





1. Respondent Universe and Sampling Methods

The PACT Evaluation will focus on grantee programs purposively selected for the study. Up to 15 grantees are expected to be selected across the impact sites and the implementation/qualitative-only sites.

The grantees will be selected for their ability to address important research questions:

  • What are the experiences, issues, and challenges in designing, implementing, and operating comprehensive responsible fatherhood services for lower-income fathers?

  • What are the net impacts of the programs on relationship quality and stability, parenting attitudes and behaviors, measures of adult and child well-being, and economic outcomes?

  • What are the experiences of fathers who volunteer for the programs?

An impact study site will also need to meet the following three key criteria: (1) a random assignment evaluation can be successfully implemented at the grantee’s program, meaning it must be possible to collect the necessary baseline information, to insert random assignment into the program’s intake procedures, and to form a control group that does not receive the same or similar services to those offered to the program group; (2) the program must be able to enroll enough participants to meet sample size requirements; and (3) it must be plausible that the program can lead to impacts that are detectable with the planned sample size.

The implementation/qualitative-only grantees need not meet the criteria for an impact study grantee, but they must present some particular feature of program design or target population that warrants detailed study. Examples of program design features that may warrant detailed study include specific services provided, such as inclusion of a subsidized employment component or facilitation of non-custodial parent visitation, and program structure, such as involvement of different types of community partners to deliver key services. Examples of target populations that may warrant detailed study include the incarceration status of men and the extent of multiple-partner fertility (that is, adults who have had children with multiple partners).

We will take the characteristics of sites into account when interpreting and discussing results. Differences in site characteristics may suggest plausible explanations for differences in impacts across sites. Thus, in our discussion of results, we will carefully hypothesize how these characteristics may have influenced outcomes.

Baseline Survey. The baseline survey will be used in both impact study sites and implementation/qualitative-only sites. In the evaluation sites, eligible applicants will be asked to provide consent before participating in the baseline survey; therefore, the sample frame for the baseline survey includes all eligible applicants to the selected grantee programs who consent to participate in the study. The sample intake period in both impact and implementation/qualitative-only sites is expected to be about two years.

Only fathers who have biological children under the age of 18 will be eligible for the study of responsible fatherhood (RF) programs. Applicants who do not meet this criterion might still be eligible to receive program services but will not be included in the study.

We estimate that about 421 fathers will apply for services in each program site during the two-year intake period. Of these 421 fathers, we estimate that about 400 will be eligible to participate in the program and the study, consent to participate in the study, and complete the baseline survey.

In impact sites, the 400 fathers who consent to participate in the study and complete the baseline survey will be randomly assigned: about 200 fathers to the program group and 200 to the control group. Follow-up survey data collection will be attempted for all 400 fathers in the research sample. We expect that follow-up data will be successfully collected for about 320 fathers (that is, a response rate of about 80 percent).
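To make the assignment step concrete, the following is a minimal sketch of a 50-50 random assignment of consenting fathers. The function name, identifiers, and simple shuffle-and-split design are illustrative assumptions, not the evaluation’s actual intake software.

```python
# Minimal sketch of a 50-50 random assignment step (illustrative only;
# not the PACT intake system). Shuffling the consented sample and
# splitting it in half yields equal-sized program and control groups.
import random

def assign_groups(study_ids, seed=None):
    """Randomly split consenting study IDs into program and control groups."""
    rng = random.Random(seed)
    ids = list(study_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"program": ids[:half], "control": ids[half:]}

# A site sample of 400 consenting fathers yields 200 per group; with an
# expected 80 percent response rate, about 320 would complete follow-up.
groups = assign_groups(range(1, 401), seed=12345)
print(len(groups["program"]), len(groups["control"]))  # 200 200
```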

Study MIS. The MIS will be used by all grantee programs selected to participate in the PACT impact and implementation/qualitative-only sites. We estimate that a total of 90 staff members (six in each of 15 programs) will use the MIS. Staff will enter information into the MIS on all fathers who consent to participate in the study, and will continue to enter information into the system on the estimated 4,500 program enrollees across the impact sites (study participants assigned to the program group) and the implementation/qualitative-only sites.

2. Procedures for Collecting Information

a. Statistical Methodology, Estimation, and Degree of Accuracy

We expect to select no more than 15 grantee programs for the evaluation. The minimum number of programs anticipated to be included in the impact evaluation is four, though the actual number will be determined after discussions with grantees and will be based on the estimated sample size that the set of grantees can generate. A sample of 400, which we expect to be the site-level sample size, is large enough to detect impacts on several key outcomes. As Table B.1 shows, with a single-site sample of 400 (200 in the program group and 200 in the control group) with baseline data and 320 with follow-up data, we can detect impacts on continuous outcomes that have an effect size of 0.20 or larger. This is sufficient to detect impacts on fathers’ attitudes toward fatherhood: Cowan et al. (2009) found that a fatherhood program had an effect size of 0.31 on a measure indicating the extent to which fathers viewed fatherhood as one of the main roles in their lives. A sample of 400 is also large enough to detect an impact on employment of 6 percentage points, an impact smaller than the one found in a pilot employment program for parents behind on their child support in four New York communities (Lippold and Sorensen 2011).

Table B.1. Minimum Detectable Impacts for Key Outcomes

Sample Size (Fathers)    Continuous Outcome    Fathers’ Likelihood of          Fathers’ Annual Earnings
(Baseline/Follow-up)     (effect size)         Employment (percentage points;  (control group
                                               control mean = 0.11a)           std. dev. = $14,717b)

400/320                  0.20                  0.06                            $2,893
600/480                  0.16                  0.05                            $2,362
800/640                  0.14                  0.04                            $2,046
1,800/1,440              0.09                  0.03                            $1,364


Note: We assume an effective response rate of 80 percent, and a 50-50 split of sample members into program and control groups. All calculations assume a 95-percent confidence level, 80-percent power, and a one-tailed test. We assume an R-squared in the impact regression of 0.50.

a. Lippold and Sorensen (2011).

b. Building Strong Families Study.
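The minimum detectable impacts in Table B.1 follow directly from the assumptions stated in the note. As a check on the table, the following is a minimal sketch applying the standard minimum detectable effect formula under those assumptions (one-tailed 95-percent test, 80-percent power, R-squared of 0.50, 50-50 split); it is illustrative, not the evaluation’s actual power-analysis code.

```python
# Minimal sketch reproducing Table B.1 from the note's assumptions
# (illustrative; not the evaluation's actual power-analysis code).
from statistics import NormalDist

Z_ALPHA = NormalDist().inv_cdf(0.95)  # critical value, one-tailed 95-percent test
Z_POWER = NormalDist().inv_cdf(0.80)  # value for 80-percent power
R_SQUARED = 0.50                      # assumed R-squared of the impact regression
SPLIT = 0.50                          # 50-50 program/control split

def mde_effect_size(n_followup):
    """Minimum detectable impact in effect-size units for a follow-up sample."""
    variance_factor = (1 - R_SQUARED) / (SPLIT * (1 - SPLIT) * n_followup)
    return (Z_ALPHA + Z_POWER) * variance_factor ** 0.5

for n_baseline, n_followup in [(400, 320), (600, 480), (800, 640), (1800, 1440)]:
    es = mde_effect_size(n_followup)
    employment = es * (0.11 * (1 - 0.11)) ** 0.5  # binary outcome, control mean 0.11
    earnings = es * 14717                         # control group std. dev. of earnings
    print(f"{n_baseline:>5}/{n_followup:<5} ES={es:.2f} "
          f"employment={employment:.2f} earnings=${earnings:,.0f}")
```

Running the sketch reproduces the table’s rows to rounding; for example, the 400/320 row yields an effect size of about 0.20, an employment impact of about 0.06, and an earnings impact of about $2,893.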



However, a sample size of 400 per site may not be sufficient for subgroup analysis at the site level, as fewer than 400 fathers per site will belong to any particular subgroup. To conduct subgroup analysis, we will need to be able to pool samples from two or more sites (depending on the size of the subgroup). Pooling sites will also allow us to measure impacts on outcomes that are more variable, such as earnings, and will allow us to measure smaller impacts.

To allow for subgroup analysis, we anticipate including a set of grantees that offer strong programs and that, combined, will generate at least 1,800 sample members over two years for the impact evaluation. Past evaluations have demonstrated effect sizes of 0.1 or greater on relationship outcomes (Wood et al. 2010) and earnings gains of $1,308 (Schochet et al. 2006). A sample of 1,800 will position the evaluation to detect impacts of about this size. Furthermore, a sample of 1,800 will permit analyses of subgroups that make up approximately 25 percent of the sample, or about 400 members.

Based on previous experience, we are confident that an 80 percent response rate can be achieved for the 12-month follow-up data collection. The response rate for fathers on the 15-month follow-up survey for the Building Strong Families Study was 72 percent. We expect to achieve a higher response rate for fathers than in Building Strong Families for four reasons: (1) we are conducting the follow-up interview 12 months, rather than 15 months, after random assignment; (2) the baseline survey will be conducted by telephone by a trained interviewer, who can collect more detailed and accurate contact information than the grantee staff members who administered the Building Strong Families baseline survey; (3) the PACT baseline survey will collect both email and social media addresses, which were not collected in the Building Strong Families Study; and (4) a reminder about the study and a request for updated contact information will be texted or emailed to respondents about six months after random assignment (this reminder is included as Appendix E).

b. Unusual Problems Requiring Specialized Sampling Procedures

There are no unusual problems requiring specialized sampling procedures.

c. Periodic Cycles to Reduce Burden

There will be only one cycle of baseline data and one cycle of follow-up data collection.

3. Methods to Maximize Response Rates and Data Reliability

Baseline Survey. To maximize response rates and data reliability for the baseline survey in both impact and implementation/qualitative-only sites, we will take the following steps:

  • Use a pretested questionnaire common to all sites. While the baseline questionnaire is unique to the current evaluation, a number of its questions have been used successfully in prior studies. The questionnaire has been extensively reviewed by project staff and staff at ACF and reflects information obtained through cognitive interviews or pretests with nine individuals whose backgrounds are similar to those of anticipated PACT Evaluation participants. The same baseline survey will be used across all telephone interviewers and PACT program sites, ensuring consistency in data collection.

  • Use a straightforward, undemanding survey. The PACT baseline survey is designed to be easy to complete. The questions use clear and straightforward language. The average time required for the respondent to complete the survey is estimated at 30 minutes.

  • Administer the survey using computer-assisted telephone interviewing (CATI). Administering the baseline survey via CATI will maximize the reliability of the data entered by telephone interviewers through skip-pattern logic and checks for consistency and validity (a minimal sketch of such routing follows this list).

  • Use trained interviewers. Respondents will be interviewed by trained members of Mathematica’s survey operations staff, many of whom have experience working on previous studies conducted for ACF. Most of these staff are familiar with similar questionnaire content. All survey staff assigned to the study will participate in both general training (if they are not already trained) and an extensive project-specific training. Interviewers will not work on the study until they have been certified as prepared. The project-specific training will include role playing with scenarios and other techniques to ensure that interviewers are ready to respond effectively to sample members’ questions. It will also focus on developing skills for securing respondents’ cooperation and averting and converting refusals.

  • Be able to administer the survey in multiple languages. During telephone contact, interviewers will identify Spanish-speaking respondents and connect them with a bilingual interviewer. When necessary, translators for languages other than Spanish will be used. Mathematica employs staff who speak a wide range of languages and have experience conducting interviews in many of them.

  • Provide payments for survey participants. We suggest offering a modest $10 payment to baseline survey respondents to increase program applicants’ agreement to participate in the study and to reduce attrition for follow-up data collection. (This is discussed in greater detail in Question A9.)
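As referenced in the CATI bullet above, the following is a minimal sketch of skip-pattern routing with validity checks. The question identifiers, wording, and ranges are illustrative assumptions, not the actual PACT instrument.

```python
# Minimal sketch of CATI-style skip-pattern and validity logic
# (illustrative question IDs and rules; not the PACT instrument).
QUESTIONS = {
    "num_children": {"text": "How many biological children under 18 do you have?",
                     "valid": lambda v: 0 <= v <= 20,  # validity check on entry
                     "next": lambda v: "youngest_age" if v > 0 else "employment"},
    "youngest_age": {"text": "How old is your youngest child?",
                     "valid": lambda v: 0 <= v < 18,
                     "next": lambda v: "employment"},
    "employment":   {"text": "Are you currently employed? (1=yes, 0=no)",
                     "valid": lambda v: v in (0, 1),
                     "next": lambda v: None},
}

def route(answers, start="num_children"):
    """Walk the skip pattern, returning the ordered question IDs asked."""
    asked, qid = [], start
    while qid is not None:
        q = QUESTIONS[qid]
        value = answers[qid]
        if not q["valid"](value):
            raise ValueError(f"{qid}: out-of-range answer {value!r}")
        asked.append(qid)
        qid = q["next"](value)
    return asked

# A respondent answering zero skips the child-age question entirely.
print(route({"num_children": 0, "employment": 1}))  # ['num_children', 'employment']
```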

We anticipate high response rates to the baseline survey: we expect that 80 percent of program applicants will agree to participate in the evaluation (consent) and that 100 percent of those who consent will complete the baseline survey as part of the intake process. Likewise, based on prior experience asking similar questions of similar populations, we do not anticipate significant item nonresponse on the baseline survey.

With regard to the follow-up survey, in the previous Building Strong Families Study we obtained a 72 percent response rate among fathers on the 15-month follow-up. Before calling sample members to complete the BSF 15-month survey, the contractor mailed letters to respondents’ homes reminding them about the study. The letter also included a toll-free number so respondents could call in to complete the survey. If letters were returned marked “return to sender,” expert locating staff used database searches to find additional, updated contact information for sample members (address, telephone number), and additional contact was made by phone or letter based on the information obtained. As necessary, trained field interviewers were also used to locate respondents in person in the area of their last known residence. Once a respondent was found, the field interviewer could use a cell phone to connect the father with the survey staff to complete the survey at that time. The PACT evaluation will use these same survey techniques, in addition to using email and social media to contact sample members. We will describe our approach to conducting the follow-up survey in more detail in a subsequent ICR.

Study MIS. To maximize response rates and data reliability for the study MIS, we will take these steps:

  • Develop a user-friendly, flexible MIS. The MIS was designed specifically for use by grantee staff. As such, it will be user-friendly and flexible enough to meet each site’s needs. By providing sites with this system, we standardize the information collected from each site and improve the reliability of the data supporting our implementation and impact analyses.

  • Include data quality checks in the MIS. The MIS will also ensure data reliability by instituting automatic data quality checks. For example, if grantee staff enter odd or unlikely values in a particular field, the system will prompt users to check the value. For some fields, the response values will be restricted; for others, grantee staff will be able to override the check (see the sketch after this list).

  • Provide extensive training to grantee staff. To increase data quality, we will provide extensive training to system users prior to initial use. Initial training will be on site; follow-up training will be conducted using web and telephone conferences. Following training, PACT team members will conduct follow-up site visits to ensure compliance with procedures and be available by phone and email to assist users.

  • Monitor data quality. We will also monitor the data entered by grantees and provide feedback to grantees on their data quality. Initially, we will monitor data quality on a weekly basis, tapering that gradually to monthly monitoring as agencies demonstrate their ability to use the system correctly.
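As referenced in the data quality bullet above, the following is a minimal sketch of the hard/soft check design. The field names, plausibility ranges, and override mechanism are illustrative assumptions, not the actual MIS.

```python
# Minimal sketch of MIS field validation with hard (blocking) and soft
# (overridable) checks. Fields and ranges are illustrative, not the real MIS.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    field: str
    ok: Callable[[float], bool]  # returns True when the value looks plausible
    hard: bool                   # hard checks block entry; soft checks can be overridden

CHECKS = [
    Check("sessions_attended", lambda v: 0 <= v <= 50, hard=True),  # restricted range
    Check("age", lambda v: 18 <= v <= 80, hard=False),              # warn, allow override
]

def validate(record, overrides=frozenset()):
    """Return a list of (field, blocking) problems for one MIS record."""
    problems = []
    for c in CHECKS:
        if c.field in record and not c.ok(record[c.field]):
            if c.hard or c.field not in overrides:
                problems.append((c.field, c.hard))
    return problems

print(validate({"sessions_attended": 12, "age": 95}))    # [('age', False)]
print(validate({"age": 95}, overrides={"age"}))          # [] -- soft check overridden
```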

4. Tests of Procedures or Methods

Baseline Survey. In-person cognitive interviews and telephone pretests of the baseline survey have been conducted for four purposes: (1) to ensure that questions are understood and are consistent with the concepts they aim to measure; (2) to identify typical instrumentation problems such as question wording and incomplete or inappropriate response categories; (3) to measure the response burden; and (4) to check that there are no unforeseen difficulties in administering the instrument via telephone.

Cognitive interviews were conducted with four respondents (two African-American men and two Hispanic men) and telephone pretests were conducted with three respondents. The respondents selected for the cognitive interviews and telephone pretests were as similar to likely actual sample members as possible (we recruited pretest participants by contacting similar programs). Cognitive interviews revealed several questions that did not resonate with respondents or did not capture the intended information. Telephone pretest interviews were audio-taped or monitored to identify potential issues. As a result of the cognitive interviews and telephone pretests, we made changes to the survey instrument to improve the wording of the questions and their sequencing.

Study MIS. The automated version of the MIS will be rigorously tested and evaluated by the development team to ensure proper functionality. Additionally, we will consult with practitioners on the usability of the system and engage these practitioners in the testing phase.

5. Individuals Consulted on Statistical Methods

Preliminary input on statistical methods was received from staff in the ACF Office of Planning, Research, and Evaluation as well as staff at Mathematica Policy Research, including the following individuals:

Ms. Nancye Campbell

7th Floor West

901 D Street, SW

Washington, DC 20447


Mr. Seth Chamberlain

7th Floor West

901 D Street, SW

Washington, DC 20447


Dr. Sheena McConnell

Mathematica Policy Research

1100 First Street, NE, #1200

Washington, DC 20024


Dr. Robert Wood

Mathematica Policy Research

P.O. Box 2393

Princeton, NJ 08543


Dr. Jane Fortson

Mathematica Policy Research

505 14th Street

Suite 800

Oakland, CA 94612



In the future, further input on analytic approaches may be sought from additional staff at these organizations and from outside consultants.



REFERENCES

Cowan, Philip A., Carolyn Pape Cowan, Marsha Kline Pruett, Kyle Pruett, and Jessie Wong. “Promoting Fathers’ Engagement with Children: Preventive Interventions for Low-Income Families.” Journal of Marriage and Family, vol. 71, August 2009, pp. 663–679.

Lippold, Kye, and Elaine Sorensen. “Strengthening Families Through Stronger Fathers: Final Impact Report for the Pilot Employment Programs.” Washington, DC: The Urban Institute, October 2011.

Schochet, Peter Z., John Burghardt, and Sheena McConnell. “National Job Corps Study and Longer-Term Follow-Up Study: Impact and Benefit-Cost Findings Using Survey and Summary Earnings Records Data.” Princeton, NJ: Mathematica Policy Research, August 2006.

Wood, Robert G., Sheena McConnell, Quinn Moore, Andrew Clarkwest, and JoAnn Hsueh. “Strengthening Unmarried Parents’ Relationships: The Early Impacts of Building Strong Families.” Princeton, NJ: Mathematica Policy Research, May 2010.


