
Child Support Noncustodial Parent Employment Demonstration (CSPED)

OMB: 0970-0439




U.S. Department of Health and Human Services

Administration for Children and Families

Office of Child Support Enforcement (OCSE)

Aerospace Building, 4th Floor

901 D Street, SW

Washington, DC 20447

Project Officer: Elaine Sorensen

Child Support Noncustodial Parent Employment Demonstration (CSPED)



Extension

OMB No. 0970-0439

OMB Supporting Statement for Implementation, Cost, and Impact Studies

Part A: Justification

April 12, 2016














1. Necessity for the Data Collection

The Office of Child Support Enforcement (OCSE) within the Administration for Children and Families at the U.S. Department of Health and Human Services seeks an extension through September 30, 2018, for an existing data collection called the Child Support Noncustodial Parent Employment Demonstration (OMB No. 0970-0439; expiration date September 30, 2016). The original data collection approved by OMB included 10 ICs, but we are seeking an extension for only 8 of these ICs, since two of the original ICs will not be used during the requested extension period. The 8 ICs included in this request have not been changed since their OMB approval; we are simply asking for an extension to continue using them. The burden hours in this request reflect the period of the extension, which we have estimated to be two years and three months, from July 1, 2016 (approximately 60 days after submitting the request) to September 30, 2018, when data collection for this demonstration will end.

Under the Child Support Noncustodial Parent Employment Demonstration (CSPED), OCSE has issued grants to eight state child support agencies to provide employment, parenting, and child support services to parents who are having difficulty meeting their child support obligations. OCSE has undertaken this demonstration to test whether state child support agencies can improve their effectiveness by providing these services. The overall objective of the CSPED evaluation is to document and evaluate the effectiveness of the approaches taken by the eight CSPED grantees. The study is using an experimental design in eight sites to compare outcomes for study participants who are being randomly assigned to treatment and control groups. The evaluation is being undertaken by the U.S. Department of Health and Human Services, ACF, OCSE, and its grantee the Wisconsin Department of Children and Families. The evaluation is being implemented by the University of Wisconsin-Madison and its partner, Mathematica Policy Research.

a. Study Background

The past several decades have witnessed sweeping changes in family structure. In 1980, 77 percent of children lived with two married parents; by 2010, this figure had fallen to only 66 percent (U.S. Census Bureau 2013). Child support is a key resource for children living apart from one of their parents, and recent demographic and policy changes have made an effective child support system increasingly important. More than four in ten children are born to unmarried parents, and many married couples with minor children go through divorce, making child support a potentially important source of support for most children at some point in their lives. Changes in the social safety net, which no longer includes an entitlement to cash assistance for low-income single parents, also make reliable child support increasingly important. At the same time, many parents who owe child support, including a disproportionate share of those whose children are living in poverty, have limited earnings and ability to pay support. Thus, a successful child support system must both enforce and enable noncustodial parents’ contributions, which requires effective policies to encourage noncustodial parent employment. The CSPED evaluation is designed to identify effective policy alternatives to address these needs.

The problem of parents not paying child support, and the degree to which it disproportionately affects lower-income families, has been documented in earlier research (Heinrich et al. 2011; Manning and Smock 2000; U.S. Census Bureau 2013). This problem affects states, custodial parents and their families, and the parents who owe support. The debt is primarily owed to the states because the mothers who would otherwise receive the child support payments are themselves on welfare, which is paid by the state collecting the child support. Statistics suggest that unemployed (and sometimes incarcerated or part-time employed) noncustodial parents contribute the largest share of this debt and that the likelihood of that debt being repaid is slim. Several studies of programs aimed at addressing these issues have been inconclusive. In the state of Maryland, for example, an initiative aimed at employing noncustodial parents was combined with a debt forgiveness effort, so the contribution of the employment component could not be clearly distinguished from that of the debt forgiveness (Heinrich, Burkhardt, and Shager 2011).

Few rigorous studies of child support programs have been conducted to date. Currently, most states are conducting non-experimental evaluations of TANF-related policy changes (Meyer and Cancian 2002). The CSPED evaluation is using a rigorous, randomized controlled trial design to examine whether child support programs that provide employment and other services to unemployed parents behind in their child support can improve child support payments. The CSPED evaluation will yield information about effective strategies for improving child support payments by providing noncustodial parents employment and other services through child support programs. In addition, this evaluation will generate extensive information on how these programs operated, what they cost, the effects the programs had, and whether the benefits of the programs exceeded their costs. The information gathered will be critical to informing decisions related to future investments in child-support-led, employment-focused programs for parents who have difficulty meeting their child support obligations.

b. Overview of the Evaluation

The CSPED evaluation is a rigorous, innovative, and efficient study that will advance the field by building the knowledge base about effective strategies for supporting noncustodial parents in their roles as providers for their children. The evaluation’s two main components—the implementation and cost study and the impact study—will yield considerable information not only about whether a program is effective, but also about how it operates, why it may or may not be effective, and the challenges and opportunities it faces.

The purpose of the implementation and cost study is to provide a detailed description of how the programs are implemented, the contexts in which they are operated, their participants, their promising practices, and their costs and benefits. The key data collection activities of the implementation and cost study include: (1) conducting semi-structured interviews with program staff and selected community partner organizations, (2) conducting focus groups with program participants, (3) administering a web-based survey to program staff and community partners, and (4) collecting data on study participant service use, dosage, and duration of enrollment throughout the demonstration using a web-based Management Information System (MIS) to track program participation.

The goal of the impact study is to provide estimates of the effectiveness of the programs offered by the eight CSPED grantees. The evaluation will be based on a randomized controlled trial (experimental) research design in which program applicants who are eligible for CSPED services will be randomly assigned to either a treatment group that is offered CSPED program services or a control group that is not. The study MIS that will document service use for the implementation and cost study will also be used to randomly assign program applicants to the treatment and control groups. The impact study will rely on data collected from surveys of participants as well as administrative records from state and county data systems. Survey data will be collected twice from program applicants: at baseline and 12 months after random assignment.
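
To make the random assignment step concrete, the following is a minimal, purely illustrative sketch in Python; the actual study MIS logic, assignment ratio, and identifier scheme are not specified in this document. The duplicate-ID check mirrors the MIS feature, described in Section 6, of verifying that an applicant is not already a member of the evaluation sample.

```python
import random

def randomly_assign(applicant_id: str, already_assigned: set, rng: random.Random) -> str:
    """Assign a consented applicant to the treatment or control group."""
    if applicant_id in already_assigned:
        # Blocking re-entry preserves the integrity of random assignment.
        raise ValueError(f"Applicant {applicant_id} has already been randomized.")
    already_assigned.add(applicant_id)
    # Equal-probability assignment is assumed here for illustration only.
    return "treatment" if rng.random() < 0.5 else "control"

# Example usage:
assigned_ids: set = set()
rng = random.Random(2016)  # fixed seed so the example is reproducible
print(randomly_assign("A-0001", assigned_ids, rng))
```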

c. Data Collection Activities Requiring Clearance (Current Request)

As of January 31, 2016, we have enrolled and conducted baseline interviews with 8,060 noncustodial parents, and over 2,300 noncustodial parents have completed a 12-month follow-up survey. We are on schedule and expect to end the enrollment period in September 2016. However, we will require an extension beyond this date to continue to track service use in the MIS, complete 12-month follow-up surveys of those enrolled through September 2015, continue to collect administrative data, and complete site visit interviews with grantees. We anticipate that all activities will be completed by September 30, 2018.

This ICR extension requests clearance for the eight data collection protocols that will continue beyond the estimated renewal approval date of June 30, 2016; two will be used in the implementation and cost study and six in the impact study, as described below.

Implementation and Cost Study

Activities completed prior to estimated renewal date of June 30, 2016

The following components of the implementation and cost study have been completed or will be completed prior to the estimated renewal date:

  1. Program staff survey. The staff survey is web-based and administered to program staff and staff at partner agencies working with CSPED participants. The survey is administered twice and captures broader staff program experiences beyond the information garnered from the semi-structured interviews.

  2. Focus group guide. The focus group guide is being used to conduct focus groups with program participants at each site to gather information about their program experiences.

Activities requiring clearance beyond estimated renewal date of June 30, 2016

Clearance is requested to extend the following activities beyond the estimated renewal date:

  1. Staff interview topic guide (IC#1). The topic guide is being used to conduct semi-structured interviews with program staff and selected community partner organizations across the eight grantee sites, during site visits conducted in the first and third years of program implementation.

  2. Study MIS to track program participation (IC#2). The study MIS is web-based and is used to track participation in the program. Information about services received by all program participants is entered into the system by program staff.

Impact Study

Activities requiring clearance beyond estimated renewal date of June 30, 2016

Clearance is requested to extend the following activities beyond the estimated renewal date:

  1. Introductory script read by program staff (IC#3). The script is used by grantee site staff to introduce applicants to the CSPED program and the study and to address questions they may have about the study.

  2. Introductory script heard by program applicants (IC#4). Program applicants hear the program staff use the introductory script to introduce them to the CSPED program and the study and to address questions they may have about the study.

  3. Baseline survey (IC#5). The baseline survey is administered to program applicants using Computer-Assisted Telephone Interviewing (CATI). Grantee staff provide a telephone for program applicants to use to call an interviewer employed by the contractor. The interviewer begins by describing the study to the applicant and asking for the applicant’s consent to participate in the study. Once sample intake is complete, a copy of the consent statement is provided to the sample member (Supplement A). If the applicant agrees to participate in the study, the interviewer administers the baseline survey. The CSPED baseline survey draws heavily from the OMB-approved Parents and Children Together (PACT) baseline data collection instrument. A question-by-question justification for the items included in the CSPED baseline survey is presented in Supplement B.

  4. Study MIS to conduct random assignment (IC#6). The study MIS includes functions to conduct the random assignment of applicants at all evaluation sites.

  5. Protocol for collecting administrative records (IC#7). This protocol is used to extract information from state and county databases on participants’ child support obligations and payments; Temporary Assistance for Needy Families (TANF), Supplemental Nutrition Assistance Program (SNAP), and Medicaid benefits; involvement with the criminal justice system; and earnings and benefit data collected through the Unemployment Insurance (UI) system.

  6. 12-Month follow-up survey (IC#8). The follow-up survey is administered to study participants 12 months after random assignment, using Computer-Assisted Telephone Interviewing (CATI), with in-person locating as needed. After an advance letter is mailed to all study participants, telephone interviewers employed by the contractor call study participants using the contact information provided during the baseline survey. If the participant agrees to take part in the follow-up data collection, the interviewer administers the 12-month follow-up survey. The survey collects data on the involvement of noncustodial parents with their children and the mothers/fathers of their children; the quality of these relationships; experiences with the child support system; employment, earnings, and job readiness; criminal justice involvement; mental health and well-being; and the receipt of services through CSPED programs and other programs that provide similar services. The data collected by this survey will be used to estimate the impacts of CSPED programs. The 12-month follow-up survey draws heavily from the OMB-approved Parents and Children Together (PACT) follow-up data collection instrument. A question-by-question justification for the items included in the CSPED follow-up survey is presented in Supplement C.

d. Legal or Administrative Requirements that Necessitate the Collection

There are no legal or administrative requirements that necessitate the collection. OCSE is undertaking the collection at the discretion of the agency.

2. Purpose and Use of the Information Collection

The data collected through the instruments included in this ICR will be used to learn about the approaches that will be implemented by the eight CSPED grantees to provide employment supports and other services to noncustodial parents who face barriers to employment and experience difficulties in paying child support. The information to be obtained through the CSPED evaluation is critical to understanding the CSPED programs—the services they provide, the experiences of their participants, their effectiveness at improving outcomes for noncustodial parents and their children, and their ability to improve the performance of the Title IV-D Child Support Program. The data collected in the CSPED evaluation can also be used to inform decisions related to policy and programmatic improvements to the Title IV-D Child Support Program. If the information collection extension requested by this ICR is not conducted, policymakers and providers of employment and child support programs will lack high-quality information on the effects of the programs, as well as descriptive information that can be used later to refine the operation of the programs and better meet child support performance goals. Details on the purpose and use of the information collection for each of these studies are provided below.

a. Implementation and Cost Study

The goal of the implementation and cost study is to provide a detailed description of the programs—how they are implemented, their participants, the contexts in which they are operated, their promising practices, and their costs and benefits. These detailed descriptions will assist in interpreting program impacts, identifying program features and conditions necessary for effective program replication or improvement, and carefully documenting the costs of delivering these services.

  • Staff interview topic guide. This guide is used to collect information from program staff on the plans and goals for the program, the staffing structure, recruitment and engagement strategies, services offered, costs, enrollment and receipt of services, and characteristics of the community.

  • Study MIS to track program participation. The data collected through the study MIS provide critical information on program participation. Program staff are asked to report on all services provided to program participants on an ongoing basis. Research indicates that many social services programs find it difficult to engage and retain participants—many individuals either never begin participating after enrollment or leave the program before completing it. Hence, it is important to collect information both on what services the grantee site offers and on what services participants actually receive.

b. Impact Study

The purpose of the impact study is to provide rigorous estimates of the effectiveness of the eight CSPED programs using an experimental research design. Program applicants who are eligible for CSPED services will be randomly assigned to either a program group that is offered program services or a control group that is not.

  • Introductory script. The grantee staff use this script to describe the study to the applicant and explain why they will be asking him or her to speak with an interviewer over the telephone.

  • Baseline Survey. Data collected through the baseline survey are crucial for the impact study and will provide critical information on study participants both served and not served by grantee programs. In particular, these data will be used for the following purposes:

  1. Describing the characteristics of participants. The baseline survey gathers descriptive information on study participants at baseline to make it possible to identify the characteristics of noncustodial parents who apply to grantee programs. In addition to basic demographic information, these data will provide information about the types of challenges faced by noncustodial parents who enroll in grantee programs (for example, education level, employment status, and housing stability). These data will also be used to construct survey nonresponse weights that adjust for potential bias arising from follow-up survey nonresponse, and to control for baseline characteristics in estimating program impacts.

  2. Identifying subgroups of interest. Baseline data will be used to identify subgroups for which impacts may differ—for example, impacts may be larger for noncustodial parents who see their children more often than for those who seldom see them.

  3. Collecting information that can explain variation in outcomes. Impact estimates obtained from the differences between the mean outcomes of treatment group members and the mean outcomes of control group members are unbiased. However, estimating impacts with a regression model that includes covariates explaining some of the variation in outcomes at follow-up, such as the outcomes assessed at baseline, can improve the precision of the estimates (see the sketch after this list). Hence, the baseline survey includes baseline measures of key outcomes that will be measured again on the follow-up survey.

  4. Checking that the treatment and control groups are on average similar at baseline. Information on the characteristics of study participants can be used to check the similarity of the treatment and control groups. Although random assignment produces groups that are similar on average, baseline data will be used to verify program-control equivalence for the full research sample and for the sample of respondents to the follow-up survey.

  5. Identifying and tracking study participants. Identifying information includes the study participant’s complete name, sex, date of birth, mailing address, and Social Security number. This information is needed to match with other administrative data (for example, wage/earnings data, child support data) to assess the impact of the programs on these key outcomes. In addition, personal information along with information on sample members’ telephone numbers, email addresses, social network information, and contact information for up to three relatives or friends is needed to facilitate locating study participants for follow-up survey data collection. Accurate and detailed locating information is essential for achieving high survey response rates.
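
For reference, the regression adjustment described in item 3 and the weights described in item 1 can be written in a standard form; the notation here is ours and is not drawn from the evaluation’s analysis plan:

\[ Y_i = \alpha + \tau T_i + X_i'\beta + \varepsilon_i \]

where \(Y_i\) is a follow-up outcome for participant \(i\), \(T_i\) equals 1 for treatment group members and 0 for controls, and \(X_i\) is a vector of baseline covariates. Under random assignment, the simple difference in means, \(\hat{\tau} = \bar{Y}_T - \bar{Y}_C\), is unbiased; including covariates that predict \(Y_i\) leaves \(\hat{\tau}\) unbiased while reducing its standard error. Conventional nonresponse weights take the form \(w_i = 1/\hat{p}_i\), where \(\hat{p}_i\) is participant \(i\)'s estimated probability of responding to the follow-up survey given baseline characteristics.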

  • Study MIS to conduct random assignment. The CSPED impact evaluation is a randomized controlled trial (experimental) evaluation. The study MIS, overseen by the evaluation contractor, conducts random assignment after participants have consented and completed the baseline survey. Random assignment is the core of an experimental impact evaluation because it creates a control group that is similar, on average, to the treatment group on all baseline characteristics. For this reason, an experimental evaluation is often considered the most rigorous form of program evaluation.

  • Protocol for collecting administrative records. Data extracted from administrative records are essential to the impact study because information about study participants’ court records, child support payments, unemployment benefits, and services received that are not part of the CSPED program will be used to demonstrate the contrasts between the CSPED program group (the treatment group) and the comparison (control) group.

  • 12-month follow-up survey. Data collected through the follow-up survey provide critical information for the impact study by yielding outcome data for both study participants served by grantee programs as well as study participants not served by grantee programs. These data will be used to estimate the impacts of CSPED programs by identifying differences between the CSPED program (treatment) group and the comparison (control) group.

3. Improved Information Technology to Reduce Burden

The CSPED evaluation uses multiple methods to collect study information. Web-based applications are used for the survey of program staff, the MIS, and the collection of administrative data. CATI is used for the baseline and follow-up surveys of participants. Semi-structured interviews and focus groups do not make use of information technology to reduce burden.

a. Implementation and Cost Study

Staff interview topic guide. These semi-structured interviews are conducted in person or by telephone by the data collection team, without the use of information technology to reduce burden.

Study MIS to track program participation. The study MIS is a web-based application providing easy access while maintaining the security of the data. The web-based application allows sites to access the MIS without purchasing or installing additional software or changing the configuration of their computers. The system can be accessed from any computer, allowing for ease of entry, while the data are housed on secure servers behind the contractor’s firewall, thereby maintaining data security. The system has been designed with use by grantee staff in mind, based on experience from prior studies with similar types of service providers. As such, it is flexible, easy to use, and includes navigational links to relevant fields for each type of entry to reduce burden on grantee site staff and increase the quality and quantity of data collected. The system is designed for multiple users at each organization and includes options for varying levels of system access depending on users’ access needs. For example, administrators or supervisors have the greatest rights within the system, having the ability to create new users, assign program participants to staff members, and review all activity for the organization. Staff providing direct services to study participants have the ability to record and review information about participants assigned to their caseload. The various levels of system access allow for streamlining of information; limiting full system access to a small set of staff members promotes increased data security and greater data quality.
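
The tiered access model described above can be pictured as a simple role-to-permission mapping. The sketch below is illustrative only; the role names and permission labels are hypothetical, since the document describes the MIS roles only in general terms.

```python
# Hypothetical mapping of MIS user roles to permitted actions.
PERMISSIONS = {
    "administrator": {
        "create_users",
        "assign_participants_to_staff",
        "review_all_activity",
        "record_participant_services",
        "review_own_caseload",
    },
    "direct_service_staff": {
        "record_participant_services",
        "review_own_caseload",
    },
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if users with the given role may perform the action."""
    return action in PERMISSIONS.get(role, set())

# Administrators can create users; direct service staff cannot review all activity.
assert is_allowed("administrator", "create_users")
assert not is_allowed("direct_service_staff", "review_all_activity")
```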

b. Impact Study

Introductory script. This script does not lend itself to the use of improved information technology such as computerized interviewing.

Baseline and 12-month follow-up surveys. The baseline and 12-month follow-up surveys are conducted using CATI. CATI is well suited to interviews that involve complex skip patterns, interviewer probes, and large numbers of respondents. CATI reduces respondent burden by automating skip logic and question adaptations and by eliminating delays caused when interviewers must determine the next question to ask. CATI is programmed to accept only valid responses based on preprogrammed checks for logical consistency across answers. Interviewers are thus able to correct errors during the interview, eliminating the need for burdensome and costly call-backs to respondents.
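
As a schematic illustration of the automated skip logic and response validation described above, consider the following sketch; the question wording and valid ranges are hypothetical and are not drawn from the CSPED instruments.

```python
def ask_choice(question: str, valid: set) -> str:
    """Re-prompt until the interviewer records a response the system accepts."""
    while True:
        answer = input(f"{question} ").strip().lower()
        if answer in valid:
            return answer
        print("Entry not recognized; please re-enter.")  # caught during the interview

def ask_number(question: str, low: int, high: int) -> int:
    """Accept only whole numbers within a preprogrammed valid range."""
    while True:
        raw = input(f"{question} ").strip()
        if raw.isdigit() and low <= int(raw) <= high:
            return int(raw)
        print(f"Please enter a whole number between {low} and {high}.")

# Automated skip: the earnings item is presented only to respondents who worked.
worked = ask_choice("Did you work for pay in the last month? (yes/no)", {"yes", "no"})
if worked == "yes":
    earnings = ask_number("About how much did you earn last month, in dollars?", 0, 99999)
```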

Study MIS to conduct random assignment. Use of information technology and burden reduction through the study MIS is described in the implementation and cost study description presented in the previous section.

Protocol for collecting administrative records. The CSPED evaluation team requests administrative records from state and county agencies to extract information from their databases to gather outcome data for the impact study. The evaluation team has set up a secure web-based system for transferring these data.

4. Efforts to Identify Duplication and Use of Similar Information

The CSPED Evaluation does not require the collection of information that is available from alternative data sources.

None of the instruments ask for information that can be reliably obtained through administrative data collection. For example, the baseline and 12-month follow-up surveys ask study participants to provide limited information on formal child support at two points in time, as administrative data in one state do not consistently capture child support orders in other states. The baseline and 12-month follow-up surveys ask study participants to report on informal contributions (monetary and in-kind support) that would not be reflected in administrative data. In addition, information on quarterly earnings (reported to the state unemployment insurance agency) will be obtained from administrative data; the baseline and 12-month follow-up surveys ask for earnings in the past month to capture more recent earnings and earnings that may not have been reported to the unemployment agency. Though criminal history information within a state is potentially available through administrative sources, that information is gathered through the baseline survey because not all states allow administrative data access for research purposes, because state records do not always record criminal activity in other states or at the county level, and because in some states administrative data lack key information. Nevertheless, participants will be asked to provide consent for the collection of administrative data on criminal background if that is deemed necessary at a later time.

Child support programs do not typically collect all the information that will be gathered by the study MIS. For instance, information required for intake and random assignment is not likely to be available from other sources. Likewise, child support programs often do not have an existing MIS that systematically tracks the information to be included in the service receipt section of the study MIS.

No program participant or staff member will be asked for the same information more than once. For example, the staff will not be asked during the semi-structured interviews any questions that they are asked on the staff survey.

5. Impact on Small Businesses or Other Small Entities

No small businesses are expected to be involved in data collection. Nonetheless, instruments have been tailored to minimize burden and collect only critical evaluation information.

6. Consequences of Not Collecting Information or Collecting Information Less Frequently

Not collecting information for the CSPED evaluation overall would limit the government’s ability to document the kinds of activities implemented with federal funds and its ability to measure the effectiveness of such activities. In particular, the CSPED evaluation represents an important opportunity for OCSE to learn about how to improve child support program performance and increase the reliable payment of child support through the provision of enhanced child support services and employment programs for noncustodial parents and their children. If the information collection extension requested by this clearance package is not conducted, policymakers and providers of these programs will lack high-quality information on the impacts of the programs, as well as descriptive information that can be used later to refine the programs.

a. Implementation and Cost Study

Staff interview topic guide. Without collecting information by conducting interviews with program staff, the study would not have complete information about the implementation of the CSPED programs. We propose collecting data from interviews with CSPED program staff twice: during a site visit conducted early in implementation and a second site visit conducted late in implementation. The first visit focuses on understanding program design, while the later visit focuses on implementation experiences. In addition, prior experience (Dion et al. 2010) has shown that service delivery programs similar to those operated by CSPED grantees modify their implementation approach over time in response to their early experiences, so collecting these data twice will capture those changes as well as staff feedback about the lessons they learned along the way.

Study MIS to track program participation. Staff are asked to enter information about services offered to participants (e.g., individuals assigned to the next parenting workshop) and their actual participation and attendance throughout the period of the study. Without information on service receipt, we will not be able to describe the services offered to participants by CSPED programs and the extent to which program participants received these services. These data are critical to the implementation analysis and to interpreting the findings from the impact analysis.





b. Impact Study

Introductory script. This script is necessary to ensure that program staff provide program applicants with accurate information about the study and explain why the applicant needs to talk with an interviewer employed by the evaluation contractor.

Baseline Survey. Without collection of detailed contact information on study participants at baseline, the ability to track study participants over a 12-month follow-up period would be limited. This would likely lead to a lower response rate and a greater risk that the impact estimates would be biased by nonresponse. The lack of baseline information would also limit the evaluation contractor’s ability to describe the population of CSPED program participants and would limit the analysis of program impacts on subgroups, thereby limiting the ability to determine the groups for which the program is most effective. Without data from the baseline survey, baseline information could not be included as covariates in the impact analyses, which would render the impact estimates less precise and make small impacts less likely to be detected. Also, adjustments for nonresponse to the follow-up survey would have to be based on administrative data, which are much more limited. In addition, without baseline information on factors that could predict program participation, it would not be possible to measure the impact of programs on receiving services, rather than being offered services (for more detail, see Section 16 below).

Finally, the baseline survey yields data that are vital for ensuring that random assignment is properly implemented. In particular, without data from the baseline survey, it would not be possible to test whether the program and control groups were equivalent at baseline on many key measures (such as those not covered by administrative data). Baseline surveys will be collected only once; thus, no repetition of effort is planned.

Study MIS to conduct random assignment. Information entered by grantee site staff at intake is collected once, prior to submitting an applicant’s case for random assignment. Without entry of this information, we would not be able to check whether the applicant is already a member of the evaluation sample, a check that protects the integrity of random assignment. In addition, staff predictions of likely program participation would be missing, making it more difficult to estimate impacts on those who actually participated.

Protocol for collecting administrative records. Without administrative data, information about public benefits received, criminal justice involvement, child support payments history, unemployment benefits information, and employment history both before and after study enrollment would not be collected. This information is crucial to differentiating program effects between treatment and control groups and to identifying the net costs of the program.

12-month follow-up survey. The 12-month follow-up surveys collect and update information that is similar to the information collected on the baseline survey. Collecting data on an outcome measure, such as parental involvement, both before random assignment (i.e., at baseline) and also at follow-up, increases the precision of the estimates of the impacts of the program on those outcomes.

7. Special Circumstances

There are no special circumstances for the proposed data collection.

8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

In accordance with the Paperwork Reduction Act of 1995, the public was given an opportunity to review and comment through the initial 60-day Federal Register Notice, published on April 27, 2016 (81 FR 24817-24818, document number 2016-09803). The notice provided 60 days for public comment. We received one request for a copy of the proposed information collection associated with CSPED.

9. Explanation of Any Payment or Gift to Respondents

We propose to continue to provide a payment of $25 to study participants for the 12-month follow-up survey and $10 for the baseline survey. This amount is unchanged from our previously approved OMB submission. Study participants were offered $20 to take part in focus groups.

We propose to make respondent payments for three reasons:

  1. To increase response rates. The knowledge that they will be paid for completing a data collection activity is expected to increase respondents’ likelihood of taking the time to complete it. Research has shown that respondent payments are effective at increasing response rates for populations similar to participants in child support and employment programs—people with lower educational levels (Berlin et al. 1992) and low-income and nonwhite populations (James and Bolstein 1990). Singer and Kulka (2002) showed that respondent payments reduced differential response rates and hence the potential for nonresponse bias. The suggested payment for completing the focus groups is higher than the payment suggested for completing the baseline survey due to the increased burden of this request and to facilitate recruitment.

  2. To reduce attrition for follow-up data collection. In longitudinal studies, providing an incentive for earlier surveys may contribute to higher response rates for subsequent surveys (Singer et al. 1998). Therefore, providing a modest payment at baseline may reduce attrition for follow-up data collection.

  3. To gain study participants’ cooperation in data collection activities. Providing a modest payment to all study participants—including those who are assigned to the control group—will show participants that their time is valuable. The suggested payment for completing the 12-month follow-up survey has been set higher than the payment suggested for completing the baseline survey to maximize the incentive to participate in the 12-month follow-up survey.

10. Assurance of Confidentiality Provided to Respondents

The consent statement and all other materials provided to study participants and program staff include assurances that the research team will protect their privacy to the fullest extent possible under the law. Before the baseline survey is administered, the interviewer reads a consent statement, which includes a pledge that responses will be kept confidential and reported in a manner that will not identify individual respondents (see page ii of IC #5). In addition, a written consent statement is distributed to participants by grantee site staff at the time of study enrollment. Consent is provided verbally by the participant after the consent statement has been read to the participant by the interviewer. The specific consent language remains unchanged from our original submission.

Study participants are also read a statement prior to participating in the 12-month follow-up survey which reminds them that their answers will be kept private and not reported in a manner that will identify them (see IC #8). Several specific measures are taken to protect respondent privacy throughout the study:

  • Training interviewers in confidentiality procedures. The oral consent process and baseline interview are administered by telephone interviewers at the University of Wisconsin Survey Center (UWSC), who remotely access Mathematica’s CATI system via a secure network connection; the 12-month follow-up interview is also administered by UWSC telephone interviewers. Interviewers are seated in a common supervised area. As part of the telephone interviewers’ introductory comments, study participants are told that their responses will be protected and that they will have the opportunity to have their questions concerning the study answered by the interviewer. Interviewing staff at the UWSC receive training that includes general UWSC security and confidentiality procedures as well as project-specific training that includes explanation of the highly confidential nature of this information, instructions not to share this or any other personally identifiable information (PII) with anyone not on the project team, and warnings about the consequences of any violations. After receiving training, these staff members sign confidentiality and nondisclosure agreements.

  • Using CATI for consent, the baseline survey, and the 12-month follow-up survey. Administering consent, the baseline survey, and the 12-month follow-up survey via CATI eliminates security risks related to shipping hard-copy forms containing PII to the evaluator. Additionally, UWSC interviewers logging in remotely to the Mathematica network using a secure network connection enables data to be stored on servers behind Mathematica’s secure firewall to minimize security risks.

  • Restricting and logging access to the sample management system (SMS). Some data elements from the baseline survey are entered into an SMS to help with locating sample members for the follow-up survey. The SMS is a SQL Server database housed on an encrypted server. A hierarchical architecture is used to assign user rights to specific individuals, who are able to access the system and enter information only at their own location. All activity in the system is logged.

  • Restricting access to the study MIS. Data collected through the study MIS are housed on secure servers behind Mathematica’s firewall. Access to the study MIS is restricted to approved staff members assigned a password with permission from the study director.

  • Using de-identified data for all focus group reports. Any data elements used for recruitment of focus group participants, such as name and telephone number, will be destroyed after completion of the focus groups. Interview transcripts and resultant reports will not identify respondents by name.

Data security. In addition to these study-specific procedures, the evaluator has extensive corporate administrative and security systems to prevent the unauthorized release of personal records, including state-of-the-art hardware and software for encryption that meet federal standards, and other methods of data protection (e.g., requirements for regular password updating), as well as physical security that includes limited key card access and locked data storage areas.

The contractor has a secure server for online data collection (including data collected through the MIS and through the staff web survey), utilizing its existing and continuously tested web survey infrastructure. This infrastructure features the use of HTTPS (secure socket, encrypted) data communication; authentication (login and password); firewalls; and multiple layers of servers, all implemented on a mixture of platforms and systems to minimize vulnerability to security breaches.

Hosting on an HTTPS site ensures that data are transmitted using 128-bit encryption, so that transmissions intercepted by unauthorized users cannot be read as plain text. This security measure is in addition to standard password authentication that precludes unauthorized users from accessing the web application.

The contractor has established data security plans for handling all data during all phases of survey execution and data processing for the surveys it conducts. The contractor’s existing plans meet the requirements of U.S. federal government agencies and are continually reviewed in the light of new government requirements and survey needs. Such security is based on (1) exacting company policy promulgated by the highest corporate officers in consultation with systems staff and outside consultants, (2) a secure systems infrastructure that is continually monitored and evaluated with respect to security risks, and (3) secure work practices of an informed staff that take all necessary precautions when dealing with private data.

11. Justification for Sensitive Questions

a. Implementation and Cost Study

The instruments associated with the implementation study have no sensitive questions. Most focus on the experiences of program and community organization staff with their jobs of delivering services at the program or elsewhere in the community. The program staff and community partner survey asks case managers and partners to rate aspects of their working relationship, but the data from this instrument will be aggregated for analysis, and individual responses will not be shared with the other half of the case manager/community partner dyad. Focus groups with program participants ask about those respondents’ impressions of and experiences with the program but do not make special requests for personal information.

b. Impact Study

Some sensitive questions are necessary in a study of programs designed to affect personal relationships and employment. Prior to starting the baseline and 12-month follow-up surveys, all respondents are informed that their identities will be kept private and that they do not have to answer questions that make them uncomfortable. Table A.1 describes the justification for the sensitive questions included in the baseline and follow-up surveys. Although these questions are sensitive, they have commonly, and successfully, been asked of respondents similar to those who will be in this study (for example, in the Fragile Families and Child Wellbeing Study, the Building Strong Families Study, the Early Head Start Research Evaluation Project, and the PACT Evaluation).



Table A.1. Justification for Sensitive Questions – Baseline Survey, Follow-Up Survey and Study MIS

Question Topic

Justification

Respondent Social Security number

Baseline survey question A3

12-month follow-up survey question i6

The respondent’s Social Security number is essential for this evaluation for four reasons. First, it will be used to collect administrative data on the respondents. The Social Security number will allow us to obtain important outcome data on the respondent from child support agencies and state and county databases. Second, Social Security numbers will also be used to collect information on the location of the study participant for the follow-up data collection. Third, these numbers will be used as an identifier to link the information collected in the study MIS with the survey data and will allow the study MIS to check whether the person has already been randomly assigned. Fourth, the last four digits of the Social Security number will be used to verify the respondent’s identity at the time of the baseline and follow-up surveys.

Current Romantic Relationships

Baseline survey questions D19 – D25

Follow-up survey questions C25 – C31

Information on noncustodial parents’ current romantic relationships is important for understanding the complex structure of their households and the demands on their time and resources. These relationships could influence the time and resources noncustodial parents devote to their children. These questions have been asked successfully on other large-scale survey efforts involving low-income families, such as the Building Strong Families evaluation.

Earnings

Baseline survey questions E1 - E9

12-month follow-up survey questions E1 – E13


A key goal of child support programs is to improve noncustodial parents’ economic stability. The outcomes of a parent employed when entering the program may be very different from those of a parent who enters without employment. The survey asks whether the respondent worked in the past month and, if so, the amount he or she earned in the last month from formal and informal jobs. This question has been asked successfully in many surveys, including the Building Strong Families survey (Wood et al. 2010).

Barriers to employment

Baseline survey question E10

12-month follow-up survey question E15


Noncustodial parents’ barriers to employment are expected to be key predictors of similar economic stability outcomes at follow-up. The survey asks how difficult issues such as problems getting to work, not having necessary skills, having to take care of a family member, not having a place to live, problems with alcohol or drugs, trouble getting along with others, physical health, and having a criminal record have made finding or keeping a job for noncustodial parents in the past year. This question has been asked successfully in the Fragile Families and Child Well-Being Study (McLanahan 2009) and the Building Strong Families survey (Wood et al. 2010).

Symptoms of depression

Baseline survey question F4

12-month follow-up survey question G1

Parental depression has been shown to have adverse consequences for labor market and child outcomes (Alexandre and French 2001, Downey and Coyne 1990, Gelfand and Teti 1990, Marcotte et al. 1999). To measure depressive symptoms, we will use eight items from the Patient Health Questionnaire (PHQ-9), which was designed as a diagnostic instrument for depression but can also be used to measure subthreshold depressive disorder in the general population (Martin et al. 2006). The PHQ-9 has been shown to be reliable and valid in diverse populations and has been used in clinical settings to measure symptom improvement and monitor treatment outcomes (Kroenke, Spitzer, and Williams 2001; Löwe et al. 2004). Findings from telephone administrations of the instrument have been shown to be similar to in-person assessments (Pinto-Meza et al. 2005). The PHQ-8 includes eight of the nine items from the PHQ-9; it has been shown to be a useful measure of depression in population-based studies (Kroenke et al. 2009).

Involvement with the criminal justice system

Baseline survey questions F7 - F11

12-month follow-up survey questions F1 – F8


Recent research suggests that a history of incarceration and involvement with the criminal justice system may be fairly common among parents in the CSPED target population (Pearson et al. 2011). Parental incarceration has major negative effects on child and family well-being, reducing the financial support and other types of support the parents can provide to their children and families. Similar questions have been included in other large national studies, such as the Fragile Families and Child Wellbeing Study, the National Job Corps Study, and the Building Strong Families Study. In the Building Strong Families survey, nonresponse was less than 1 percent for these items.

Methods of discipline

12-month follow-up survey question C27


These items measure the use of mild to harsh disciplinary practices. These measures will enable us to determine whether the CSPED sites’ emphasis on parenting skills leads to a reduction in the use of harsh discipline techniques among participants. Most of these items are drawn from the Conflict Tactics Scale: Parent Child Version (CTSPC; Straus et al. 2003). The CTSPC is well validated, shown to have good internal consistency, and has been used in large-scale longitudinal surveys, including the National Survey of Child and Adolescent Well-Being.

12. Estimates of Annualized Burden Hours and Costs for Activities Continuing Beyond Expiration Date

We will need to continue some CSPED activities after the renewal request is approved, which we estimate will occur by June 30, 2016. Specific activities are described below. The estimated reporting burden and cost for completion of these data collection instruments are presented in Tables A.2 through A.4.

Implementation and Cost Study

Table A.2 summarizes the proposed burden and cost estimates for the use of the implementation and cost study instruments in the eight evaluation sites. The total estimated cost figures are computed from the total annual burden hours and an average hourly wage for staff ($27.86) and program applicants ($7.25). Figures are estimated as follows:

  • Semi-structured interview with grantee staff and community partners. We conducted interviews with 120 grantee site staff members and community partners (15 per site across 8 sites) in year 2 and expect to conduct another 120 interviews between July 2016 and September 2018. We expect these meetings, which will involve a semi-structured interview about experiences with the program, to last approximately 1 hour per interview. Thus, the total burden for grantee site staff and community partners to complete this activity is 120 hours (120 staff members participating in 1 meeting of 1 hour each). The annualized burden is estimated over two years and three months (i.e., the estimated extension period) and is 53 hours.

  • Study MIS to track program participation. This burden is based on the number of computer entries grantee site staff will make as they track participation by participants. We estimate that there will be 6,000 participants in the CSPED program and that 200 staff members—25 in each of eight sites—will collect MIS data on these participants. We estimate that there will be 50 MIS entries per participant to record program participation, which will result in a total of 300,000 MIS entries (6,000 x 50 = 300,000) about participation over the course of four years (October 1, 2013 to September 30, 2017). Each staff member will make 1,500 entries over the course of 4 years (300,000 ÷ 200 = 1,500), and we estimate that each entry to document program participation will take two minutes (or 1/30 hour) on average. We estimate the total burden to complete this activity between July 1, 2016 and September 30, 2018 to be 3,125 hours (200 x 468.75 ÷ 30); the arithmetic is laid out below. The annualized burden is estimated over two years and three months (i.e., the estimated extension period) and is 1,390 hours (3,125 ÷ 2.25).
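
Written out, the MIS burden arithmetic works as follows. The per-staff entry count of 468.75 is consistent with the extension covering the final 15 of the 48 months of MIS data collection (our reading of the figures; the document does not state this step explicitly):

\[ 300{,}000 \times \tfrac{15}{48} = 93{,}750 \text{ entries}; \qquad 93{,}750 \div 200 = 468.75 \text{ entries per staff member} \]

\[ 93{,}750 \times \tfrac{1}{30} \text{ hour} = 3{,}125 \text{ hours}; \qquad 3{,}125 \div 2.25 \approx 1{,}390 \text{ annual hours} \]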

Table A.2. Estimate of Burden and Cost for the CSPED Evaluation Implementation and Cost Study for Activities Continuing Beyond June 2016

Instrument | Total Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Total Burden Hours | Annual Number of Respondents^a | Total Annual Burden Hours^a | Average Hourly Wage | Total Annualized Cost
Staff interview topic guide | 120 | 1 | 1 | 120 | 53 | 53 | $27.86 | $1,477
Study MIS to track program participation | 200 | 468.75 | 0.0333 | 3,125 | 89 | 1,390 | $27.86 | $38,725
Total | 320 | | | 3,245 | 142 | 1,443 | | $40,202

^a Burden estimates are annualized over 2.25 years.

Impact Study

Table A.3 summarizes the proposed burden and cost estimates for the continued use of the impact study instruments across the eight evaluation sites. The total estimated cost figures are computed from the total annual burden hours and an average hourly wage for staff (using the $27.86 rate described above) and program applicants (using the $7.25 hourly wage described above).



Table A.3. Estimate of Burden and Cost for the CSPED Evaluation Impact Study for Activities Continuing Beyond June 2016

Instrument/Respondent | Total Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Total Burden Hours | Annual Number of Respondents^a | Total Annual Burden Hours^a | Average Hourly Wage | Total Annualized Cost
Introductory script: grantee staff | 120 | 9 | 0.1667 | 180 | 53 | 80 | $27.86 | $2,229
Introductory script: program applicants^b | 1,050 | 1 | 0.1667 | 175 | 467 | 78 | $7.25 | $5,665
Baseline survey | 1,000 | 1 | 0.5833 | 583 | 444 | 259 | $7.25 | $1,878
Study MIS to conduct random assignment | 120 | 9 | 0.1667 | 180 | 53 | 80 | $27.86 | $2,229
Protocol for collecting administrative records | 32 | 1 | 8 | 256 | 14.2 | 114 | $27.86 | $3,176
12-month follow-up survey | 1,476 | 1 | 0.75 | 1,107 | 656 | 492 | $7.25 | $3,567
Total | 3,798 | | | 2,481 | 1,687 | 1,103 | | $18,744

^a Burden estimates are annualized over 2.25 years.

^b Five percent of program applicants are not expected to agree to participate in the study; thus there are 5% more program applicants than study participants.



Table A.3 figures are estimated as follows:

  • Introductory script read by grantee staff. We expect 120 staff members total (15 in each of the 8 sites) to provide information about the CSPED study to 12,600 program applicants over the course of three years (October 1, 2013 to September 30, 2016). Each staff member will conduct 105 meetings (12,600 ÷ 120 = 105) over the course of three years. These meetings, which involve explaining the program services and the fact that the applicant will be randomly assigned to be eligible or not eligible for services, last approximately 0.1667 hours (10 minutes). We estimate the total burden to complete this activity between July 1, 2016 and September 30, 2016, when enrollment ends, to be 180 hours (120 staff members holding 9 meetings of 0.1667 hours each). The annualized burden over two years and three months (i.e., the estimated extension period) is 80 hours.

  • Introductory script heard by program applicants. We expect 12,600 program applicants total to hear program staff read the introductory script that provides information about the CSPED program. These meetings, which involve explaining the program services and the fact that the applicant will be randomly assigned to be eligible or not eligible for services, last approximately 0.1667 hours. Each program applicant hears this information once. This activity will end September 30, 2016, when enrollment is completed; thus, we estimate the total burden to complete this activity between July 1, 2016 and September 30, 2016 to be 175 hours (1,050 applicants listening for 0.1667 hours each). The annualized burden over two years and three months (i.e., the estimated extension period) is 78 hours.

  • Baseline survey for program applicants. We expect 12,600 applicants total during the study intake period. It is assumed that about 95 percent of applicants to the program will be found eligible for the study and consent to participate. Thus, 12,000 are expected to complete the baseline survey over the course of three years: 1,500 respondents in each of the eight sites. Each survey lasts approximately 0.5833 hours (35 minutes). This activity will end September 30, 2016, when enrollment is complete; thus, we estimate the total burden to complete this activity between July 1, 2016 and September 30, 2016 to be 583 hours (1,000 study applicants taking the survey for 0.5833 hours each). The annualized burden over two years and three months (i.e., the estimated extension period) is 259 hours.

  • Study MIS to conduct random assignment. This burden is based on the number of computer entries grantee site staff will make as they enroll and track participation by participants. We anticipate 12,600 program applicants will have one MIS entry to document intake and to conduct random assignment, producing 12,600 MIS entries. We estimate that 120 staff members—15 in each of 8 sites—will collect MIS data on these participants. Therefore, each staff member will make 105 entries over the course of 3 years (12,600 ÷ 120 = 105). We estimate that each entry to conduct random assignment will take 0.1667 hours. This activity will end September 30, 2016 when enrollment is completed, thus we estimate the total burden to complete this activity between July 1, 2016 and September 30, 2016 to be 180 hours. The annualized burden over two years and three months (i.e. the estimated extension period) is 80 hours.

  • Protocol for collecting administrative records. We expect 32 staff members–4 in each of the eight sites–to complete the state and county administrative records protocol. We expect each response to complete the administrative records protocol to last eight hours and that two responses per staff member will be required. We estimate that the total burden to complete this activity between July 1, 2016 and September 30, 2018 is 256 hours (one additional response per staff member). The annualized burden for grantee site staff over two years and three months (i.e. the estimated extension period) is 114 hours.

  • 12-month follow-up survey for program enrollees. 12,000 sample members are expected to complete the baseline survey (1,500 respondents in each of the eight sites). Twelve-month follow-up surveys will be attempted with approximately 8,000 of these sample members between October 1, 2014 and December 30, 2016. We anticipate an 80 percent response rate or 6,400 respondents. Each survey lasts approximately 0.75 hours. We estimate the total burden to complete this activity between July 1, 2016 and December 30, 2016 to be 1,107 hours. The annualized burden over two years and three months (i.e. the estimated extension period) is 492 hours.
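
Each bullet above applies the same arithmetic: total burden hours equal respondents × responses per respondent × hours per response, and the annualized figure divides that total by the 2.25-year extension period. As a purely illustrative sketch (not itself part of the information collection), the Table A.3 hour figures can be reproduced as follows:

```python
# Reproduces the burden-hour arithmetic in Table A.3.
EXTENSION_YEARS = 2.25  # July 1, 2016 through September 30, 2018

rows = [
    # (activity, respondents, responses per respondent, hours per response)
    ("Introductory script: grantee staff",       120, 9, 0.1667),
    ("Introductory script: program applicants", 1050, 1, 0.1667),
    ("Baseline survey",                          1000, 1, 0.5833),
    ("Study MIS for random assignment",           120, 9, 0.1667),
    ("Administrative records protocol",            32, 1, 8.0),
    ("12-month follow-up survey",                1476, 1, 0.75),
]

for activity, respondents, responses, hours in rows:
    total = respondents * responses * hours       # e.g. 120 * 9 * 0.1667 = 180
    annual = total / EXTENSION_YEARS              # e.g. 180 / 2.25 = 80
    print(f"{activity}: {total:.0f} total hours, {annual:.0f} annual hours")
```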





Combined Total Burden

Table A.4 summarizes the total estimated reporting burden and costs for activities continuing beyond the estimated ICR renewal date. If the current request is approved, 2,546 annual burden hours and an annualized cost of $58,946 would be approved for the CSPED study extension. These burden and cost estimates are a subset of what was included in our total estimates in the original OMB submission. We are only providing estimates for the portion of work that would be completed during the extension period.

Table A.4. Estimate of Burden and Cost for the CSPED Evaluation – TOTAL Burden Request for Activities Continuing Beyond the Expiration Date

| Activity/Respondent | Total Number of Respondents^a | Number of Responses per Respondent | Average Burden Hours per Response | Total Burden Hours | Total Annual Burden Hours^a | Average Hourly Wage | Total Annualized Cost |
|---|---|---|---|---|---|---|---|
| Implementation and Cost Study |  |  |  |  |  |  |  |
| Staff interview topic guide: program staff and community partners | 120 | 1 | 1 | 120 | 53 | $27.86 | $1,477 |
| Study MIS: staff tracking program participation | 200 | 468.75 | 0.0333 | 3,125 | 1,390 | $27.86 | $38,725 |
| Impact Study |  |  |  |  |  |  |  |
| Introductory script: grantee staff | 120 | 9 | 0.1667 | 180 | 80 | $27.86 | $2,229 |
| Introductory script: program applicants^b | 1,050 | 1 | 0.1667 | 175 | 78 | $7.25 | $5,665 |
| Baseline survey: program applicants^b | 1,000 | 1 | 0.5833 | 583 | 259 | $7.25 | $1,878 |
| Study MIS to conduct random assignment: grantee staff | 120 | 9 | 0.1667 | 180 | 80 | $27.86 | $2,229 |
| Administrative records protocol: staff completing protocol | 32 | 1 | 8 | 256 | 114 | $27.86 | $3,176 |
| 12-month follow-up survey: study participants | 1,476 | 1 | 0.75 | 1,107 | 492 | $7.25 | $3,567 |
| Total | 1,829 |  |  | 5,726 | 2,546 |  | $58,946 |

^a Burden estimates are annualized over 2.25 years.

^b Five percent of program applicants are not expected to agree to participate in the study; thus there are 5% more program applicants than study participants.

13. Estimates of Other Total Cost Burden to Respondents and Record Keepers

These information collection activities do not place any additional costs on respondents or record keepers.

14. Cost to the Federal Government

The total estimated cost for completion of the CSPED evaluation is $15,672,609 over six years. The cost over the two years and three months of the requested clearance extension is $5,224,203 and the annualized cost to the federal government is $2,321,868.

15. Explanation for Program Changes or Adjustments

We are requesting an extension for two years and three months to complete data collection for CSPED (OMB no. 0970-439), which currently expires September 30, 2016. We are asking to continue eight of the ten currently approved ICs during the extension period; no changes have been made to these eight ICs. The other two ICs (Focus Group Guide and Program Staff Survey) are not included in our request because they will terminate prior to the extension period. The total annualized burden hours for this request are 2,546, considerably lower than the 10,604 OMB-approved annualized burden hours for the entire study, because the burden hours for this request are annualized over the extension period only.

Table A.5 lists the total annual burden hours and total annualized cost for the entire CSPED study as currently approved by OMB, and shows how these estimates change under the current request for an extension to complete data collection, which requires only eight of the ten currently approved ICs.

Table A.5. Estimates of Annualized Burden Hours and Cost for the Entire CSPED Evaluation (Currently Approved) and for the Current Requested Extension


| Activity | Currently Approved (entire study): Total Annual Burden Hours^a | Currently Approved (entire study): Total Annualized Cost^a | Requested Extension: Total Annual Burden Hours^b | Requested Extension: Total Annualized Cost^b |
|---|---|---|---|---|
| Implementation and Cost Study |  |  |  |  |
| Staff Interview Topic Guide | 80 | $2,229 | 53 | $1,477 |
| Focus Group Guide | 120 | $870 | 0 | $0 |
| Program Staff Survey | 66.7 | $1,858 | 0 | $0 |
| Study MIS for tracking participation | 3,333.3 | $92,866 | 1,390 | $38,725 |
| Impact Study |  |  |  |  |
| Introductory Script: Grantee Staff | 700 | $19,502 | 80 | $2,229 |
| Introductory Script: Program Applicants | 700 | $5,075 | 78 | $5,665 |
| Baseline Survey | 2,333.3 | $16,916 | 259 | $1,878 |
| Study MIS for random assignment | 700 | $19,502 | 80 | $2,229 |
| Administrative Records Protocol | 170.7 | $4,756 | 114 | $3,176 |
| 12-Month Follow-up Survey | 2,400 | $17,400 | 492 | $3,567 |
| Total | 10,604 | $180,974 | 2,546 | $58,946 |

^a Burden estimates are annualized over 3 years.

^b Burden estimates are annualized over 2.25 years.

16. Plans for Tabulation and Publication and Project Time Schedule

a. Plans for Tabulation

Implementation and Cost Study

The instruments included in this OMB package for the implementation and cost study will yield data that will be analyzed using qualitative and quantitative methods to describe program implementation, assess the program’s overall quality, analyze the factors that appear to be linked to quality, and identify lessons for future practice. A thorough understanding of program implementation will provide context for interpreting program impacts, while a greater understanding of how programs can be implemented with high quality is expected to inform the next generation of programming.

The evaluation contractor will use standard qualitative procedures to analyze and summarize information from staff and partner interviews conducted using the semi-structured staff interview topic guide and the focus group guide. Analysis will involve organization, coding, triangulation, and theme identification. For each qualitative data collection activity, standardized templates will be used to organize and document the information and then code this documentation. Coded text will be searched to gauge consistency and to triangulate across respondents and data sources. This process will reduce large volumes of qualitative data to a manageable number of topics, themes, and categories (Yin 1994; Coffey et al. 1996), which can then be analyzed to address the study's research questions.

To code the qualitative data for key subtopics and themes, the evaluation team will first develop a coding scheme that builds from the interview or focus group questions. Senior members of the evaluation team will refine the initial coding scheme by reviewing codes and a preliminary set of data output to make adjustments and ensure alignment with the topics that emerge from the data. For each round of coding, two to three project team members will be trained to code the data using a qualitative analysis software package, such as Atlas.ti or NVivo. To ensure reliability across coders, all team members will code an initial document and compare codes to identify and resolve discrepancies. As coding proceeds, the lead team member will review a sample of coded documents from each coder to monitor reliability. Coded data will enable the team to compare responses across respondents within and across grantees by searching on specific codes. The software will also allow the team to retrieve data on particular codes by type of respondent (for example, case manager or parenting services coordinator). To compare information, the evaluation team may retrieve data for subsets of programs, such as those using the same fatherhood curriculum or those located in rural areas.
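
To make the reliability check concrete, cross-coder agreement can be summarized with a chance-corrected statistic such as Cohen's kappa. The sketch below is illustrative only; it is not the evaluation team's stated procedure (coding will be done in a package such as Atlas.ti or NVivo, as described above), and the codes are invented:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders labeling the same excerpts."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Invented codes assigned by two coders to the same eight interview excerpts.
coder_1 = ["recruitment", "services", "barriers", "services",
           "recruitment", "barriers", "services", "recruitment"]
coder_2 = ["recruitment", "services", "services", "services",
           "recruitment", "barriers", "barriers", "recruitment"]

# Low agreement flags codes whose definitions need to be reconciled.
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")
```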

Quantitative data will be summarized using basic descriptive methods. Sources of quantitative data include the program staff and community partner survey. Analysis of each data source will follow a common set of steps: data cleaning, variable construction, and computation of descriptive statistics. To facilitate analysis of each data source, we will create variables that address the study's research questions. Construction of these analytic variables will vary depending on a variable's purpose and the data source being used. Variables may combine several survey responses into a scale, aggregate attendance data over a set time period, or compare responses to identify a level of agreement.
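
As one hypothetical illustration of this kind of variable construction (the items below are invented, not actual survey questions), several Likert-type responses might be combined into a scale and then summarized:

```python
import pandas as pd

# Invented program staff survey records; items are 1-5 agreement ratings.
staff = pd.DataFrame({
    "q1_partner_collab": [4, 5, 3, 4],
    "q2_partner_collab": [3, 5, 4, 4],
    "q3_partner_collab": [4, 4, 2, 5],
})

# Combine several survey responses into a single scale by averaging items.
items = ["q1_partner_collab", "q2_partner_collab", "q3_partner_collab"]
staff["collab_scale"] = staff[items].mean(axis=1)

# Basic descriptive statistics for the constructed variable.
print(staff["collab_scale"].describe())
```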

Study MIS information will also be used for the implementation and cost study. The study will provide summary statistics for key program features (a brief illustrative tabulation follows the list):

  • Enrollment patterns. For example, the average number of new applicants each month.

  • Services provided by grantees. For example, the average number of group workshops offered each month, the average number of individual service contacts each month, or the percentage of individual service contacts provided in participants’ homes or in the office.

  • Participation patterns. For example, the number of participants that engage in a group activity within two months of enrollment and the average number of hours of group workshops received by program participants.
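
A minimal sketch of this kind of MIS tabulation, assuming a hypothetical extract with one row per individual service contact (the field names and values are illustrative, not the actual MIS layout):

```python
import pandas as pd

# Hypothetical study MIS extract: one row per individual service contact.
mis = pd.DataFrame({
    "participant_id": [101, 101, 102, 103, 103, 103],
    "contact_date": pd.to_datetime([
        "2014-01-06", "2014-01-20", "2014-02-09",
        "2014-02-03", "2014-02-17", "2014-03-02"]),
    "location": ["office", "home", "office", "office", "home", "office"],
})

# Average number of individual service contacts per month.
per_month = mis.groupby(mis["contact_date"].dt.to_period("M")).size()
print("average contacts per month:", per_month.mean())

# Percentage of contacts provided in participants' homes.
print("share provided at home:", (mis["location"] == "home").mean())
```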

We will analyze data from the study MIS for each grantee at two points in time, corresponding to the two implementation reports identified in Table A.6. In each report, we will describe enrollment patterns, services provided, and participation patterns over the previous 12 months. Later analyses may describe how patterns changed over time, such as from the early to the late implementation period.

Impact Study

Baseline and 12-month follow-up survey data will be used in the impact and implementation analyses. First, baseline survey data will be used to describe the characteristics of CSPED program participants. For each site, we will present tables of frequencies and means for key participant characteristics, including demographic and family information.

Baseline survey data will also be used in the impact analysis to test for baseline equivalence, define subgroups, improve the precision of impact estimates, and estimate factors that predict participation in the program. The goal of the impact analysis is to provide statistically valid and reliable estimates of the effects of CSPED on the outcomes of participants. To do so, we will compare observed outcomes for members of a randomly selected program group—individuals eligible for program services—with outcomes for members of a randomly selected control group that was not offered program services. We will use the experience of the control group as a measure of what would have happened to the program group members in the absence of CSPED. Random assignment of noncustodial parents to either a program (treatment) or a control group ensures that the two groups of noncustodial parents do not initially differ in any systematic way on any characteristic, observed or unobserved. Any observed differences in outcomes between the program and control group of noncustodial parents can therefore be attributed to the program with a known degree of precision.

Though random assignment ensures that noncustodial parents in the program and control groups do not initially differ in any systematic way, there might still be chance differences between groups. To confirm that there were no differences between the program and control groups before random assignment, we will statistically compare key characteristics—including outcome measures—between the groups at baseline. In particular, to establish baseline equivalence, we will conduct t-tests and F-tests to test for differences between the two groups both overall and separately by site. In these comparisons, we will use the analytic sample, which is composed of respondents to the follow-up survey.
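
A minimal sketch of such a baseline-equivalence check on a single characteristic, using simulated data and hypothetical variable names (the actual analysis will compare many characteristics, overall and by site, using both t-tests and F-tests):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12345)

# Simulated baseline earnings for follow-up survey respondents.
program = rng.normal(loc=9_000, scale=4_000, size=600)
control = rng.normal(loc=9_000, scale=4_000, size=600)

# Two-sample t-test for a program-control difference at baseline;
# a non-significant result is consistent with baseline equivalence.
t_stat, p_value = stats.ttest_ind(program, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```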

Baseline and 12-month follow-up survey data will also be analyzed jointly to estimate impacts. Using baseline data in the impact analysis will improve the statistical precision of impact estimates. Simple differences in means or proportions of outcomes between the program and control groups would provide unbiased estimates of the impacts of being offered participation in the CSPED program (the intent-to-treat, or ITT, effect). The impact analysis, however, will use regression models to estimate the ITT effect, allowing us to control for chance differences in the baseline characteristics of program and control group members.
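
A minimal sketch of such a regression-adjusted ITT estimate, again with simulated data and hypothetical variable names (the actual models will include a richer set of baseline covariates):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1_000
data = pd.DataFrame({
    "treatment": rng.integers(0, 2, size=n),         # 1 = offered CSPED services
    "baseline_earnings": rng.normal(9_000, 4_000, n),
})
# Simulated 12-month outcome with a true ITT effect of 500.
data["followup_earnings"] = (500 * data["treatment"]
                             + 0.8 * data["baseline_earnings"]
                             + rng.normal(0, 3_000, n))

# OLS of the outcome on the treatment indicator; controlling for the
# baseline measure absorbs chance differences and improves precision.
itt = smf.ols("followup_earnings ~ treatment + baseline_earnings", data=data).fit()
print(itt.params["treatment"], itt.bse["treatment"])  # ITT estimate and std. error
```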

The information obtained through the collection of administrative records will be used to measure the impacts of the CSPED program: data on study participants' court records, child support payments, unemployment benefits, and services received outside of what is offered in CSPED will be used to measure contrasts between the CSPED program (treatment) group and the comparison (control) group. In combination with information from the staff semi-structured interviews and the study MIS, administrative records will also be used to assess the relative benefits and costs of the CSPED program.

b. Time Schedule and Publications

This study is expected to be conducted over a six-year period beginning on October 1, 2012. This ICR is requesting an extension through September 30, 2018. Baseline data collection began in October 2013. Table A.6 provides a timeline for all the Information Collection Activities.

In addition to reports on the findings from the impact, implementation, and benefit-cost studies, CSPED will provide opportunities for analyzing and disseminating additional information through special topics reports and research or issue briefs. We will also provide a public or restricted use data file for others to replicate and extend our analysis.

Table A.6. Schedule for the CSPED Evaluation

| Activity | Date |
|---|---|
| Intake period for impact study | October 2013-September 2016 |
| Initial implementation report | September 2015 |
| Final implementation report | September 2017 |
| Impact and benefit-cost reports | September 2018 |

Short research or policy briefs are an effective and efficient way of disseminating study information and findings. The evaluation team will develop and produce several short research briefs. Topics for these briefs will emerge as the evaluation progresses but could, for example, summarize key implementation, impact, or subgroup findings or describe the study purpose and grantees.

17. Reason(s) Display of OMB Expiration Date Is Inappropriate

All instruments will display the expiration date for OMB approval.

18. Exceptions to Certification for Paperwork Reduction Act Submissions

No exceptions are necessary for this information collection.

REFERENCES

Alexandre, Pierre K., and Michael T. French. “Labor Supply of Poor Residents in Metropolitan Miami, Florida: The Role of Depression and the Co-Morbid Effects of Substance Use.” The Journal of Mental Health Policy and Economics, vol. 4, 2001, pp. 161-173.

Berlin, Martha, Leyla Mohadjer, Joseph Waksberg, Andrew Kolstad, Irwin Kirsch, D. Rock, and Kentaro Yamamoto. “An Experiment in Monetary Incentives.” In Proceedings of the Section on Survey Research Methods. Alexandria, VA: American Statistical Association, 1992, pp. 393-398.

Coffey, Amanda, Beverly L. Holbrook, and Paul Atkinson. “Qualitative Data Analysis: Technologies and Representations.” Sociological Research Online, vol. 1, no. 1, 1996. Available at http://www.socresonline.org.uk/index_by_issue.html.

Dion, Robin M., Sarah A. Avellar, and Elizabeth J. Clary. “The Building Strong Families Project: Implementation of Eight Programs to Strengthen Unmarried Parent Families.” Final report submitted to the U.S. Department of Health and Human Services, Office of Planning, Research and Evaluation, Administration for Children and Families. Washington, DC: May 2010.

Downey, G., and J.C. Coyne. “Children of Depressed Parents: An Integrative Review.” Psychological Bulletin, vol. 108, 1990, pp. 50–76.

Gelfand, D.M., and D.M. Teti. “The Effects of Maternal Depression on Children.” Clinical Psychology Reviews, vol. 10, 1990, pp. 329–353.

Heinrich, Carolyn J., Brett C. Burkhardt, and Hilary M. Shager. “Reducing Child Support Debt and Its Consequences: Can Forgiveness Benefit All?” Journal of Policy Analysis and Management, vol. 30, no. 4, 2011, pp. 755-774.

James, Jeannine M., and Richard Bolstein. “The Effect of Monetary Incentives and Follow-Up Mailings on the Response Rate and Response Quality in Mail Surveys.” Public Opinion Quarterly, vol. 54, 1990, pp. 346-361.

Kroenke, Kurt, Robert L. Spitzer, and Janet B.W. Williams. “The PHQ-9: Validity of a Brief Depression Severity Measure.” Journal of General Internal Medicine, vol. 16, no. 9, 2001, pp. 606-613.

Kroenke, Kurt, Tara W. Strine, Robert L. Spitzer, Janet B.W. Williams, Joyce T. Berry, and Ali H. Mokdad. “The PHQ-8 as a Measure of Current Depression in the General Population.” Journal of Affective Disorders, vol. 114, no. 1-3, 2009, pp. 163-173.

Löwe, B., J. Unützer, C.M. Callahan, A.J. Perkins, and K. Kroenke. “Monitoring Depression Treatment Outcomes with the Patient Health Questionnaire-9.” Medical Care, vol. 42, no. 12, 2004, pp. 1194-1201.

Marcotte, Dave E., Virginia Wilcox-Gök, and D. Patrick Redmon. “Prevalence and Patterns of Major Depressive Disorder in the United States Labor Force.” The Journal of Mental Health Policy and Economics, vol. 2, 1999, pp. 123-131.

Martin, A., W. Rief, A. Klaiberg, and E. Braehler. “Validity of the Brief Patient Health Questionnaire Mood Scale (PHQ-9) in the General Population.” General Hospital Psychiatry, vol. 28, no. 1, 2006, pp. 71-77.

Manning, W.D., and P.J. Smock. “‘Swapping’ Families: Serial Parenting and Economic Support for Children.” Journal of Marriage and Family, vol. 62, 2000, pp. 111–122.

McLanahan, Sara. “Fragile Families and the Reproduction of Poverty.” Annals of the American Academy of Political and Social Science, vol. 621, no. 1, 2009, pp. 111–131.

Meyer, Daniel R., and Maria Cancian. “Volume 1: Comparative Summary of Quantitative Nonexperimental and Experimental Analyses.” Institute for Research on Poverty, University of Wisconsin-Madison, 2002.

Pearson, Jessica, Lanae Davis, and Jane Venohr. “Parents to Work!” Denver: Center for Policy Research, February 2011.

Pinto-Meza, A., A. Serrano-Blanco, M.T. Peñarrubia, E. Blanco, and J.M. Haro. “Assessing Depression in Primary Care with the PHQ-9: Can It Be Carried Out over the Telephone?” Journal of General Internal Medicine, vol. 20, no. 9, 2005, pp. 738-742.

Singer, Eleanor, and Richard A. Kulka. “Paying Respondents for Survey Participation.” In Studies of Welfare Populations: Data Collection and Research Issues, edited by Michele Ver Ploeg, Robert A. Moffitt, and Constance F. Citro. Washington, DC: National Academy Press, 2002, pp. 105-127.

Singer, Eleanor, John Van Hoewyk, and Mary P. Maher. “Does the Payment of Incentives Create Expectation Effects?” Public Opinion Quarterly, vol. 62, 1998, pp. 152–164.

Straus, M.A., Sherry L. Hamby, and W. Louise Warren. “The Conflict Tactics Scales Handbook: Revised Conflict Tactics Scales (CTS2), CTS: Parent-Child Version (CTSPC).” Western Psychological Services, 2003.

U.S. Census Bureau. “Family Structure and Children’s Living Arrangements.” Available at http://www.childstats.gov/americaschildren/famsoc.asp. Accessed March 12, 2013.

Wood, Robert G., Sheena McConnell, Quinn Moore, Andrew Clarkwest, and JoAnn Hsueh. “Strengthening Unmarried Parents’ Relationships: The Early Impacts of Building Strong Families.” Princeton, NJ: Mathematica Policy Research, May 2010.

Wood, Robert G., Quinn Moore, and Andrew Clarkwest. “BSF’s Effects on Couples Who Attended Group Relationship Skills Sessions: A Special Analysis of 15-Month Data.” Princeton, NJ: Mathematica Policy Research, May 2011.

Yin, Robert K. Case Study Research: Design and Methods. 2nd ed. Thousand Oaks, CA: Sage Publications, 1994.

1 We estimate that there will be 750 participants in the CSPED program at each of the eight sites, resulting in 6,000 participants in the CSPED program.

2 Based on the Building Strong Families Study (Wood et al. 2010), our current experience, and the speed at which data can be entered, we expect each entry to take about two minutes, or 1/30 of an hour (which is rounded to 0.0333 hours).

3 Five percent of program applicants are not expected to agree to participate in the study; thus there are 5% more program applicants than study participants.


