
SUPPORTING STATEMENT

EVALUATION of the YOUNG PARENTS DEMONSTRATION PROGRAM, REINSTATEMENT WITH CHANGE

1205-0494


B. Collection of Information Employing Statistical Methods


1. Respondent Universe and Sampling Methods


Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.

We will attempt to gather data on the entire universe of the target population in this study, with the goal of generalizing to this population.  We will therefore follow rigorous procedures to obtain high response rates and will assess potential nonresponse bias in the achieved sample, to inform our assessment of how well the universe is represented and how reliable the results from the achieved sample are with respect to the universe.  The universe for the implementation site visits is the four organizations awarded grants for the Young Parents Demonstration Program (YPDP).  The universe for the Participant Tracking System (PTS) is the 1,633 applicants eligible for YPDP services at those grantees.  The universe for the 18-month follow-up survey is the same 1,633 individuals entered into the PTS.  No sampling methods are being used.  The universe for the final analysis is the 1,306 expected respondents to the follow-up surveys.



| Data Collection Activity   | Number in Potential Universe | Estimated Number in Final Universe | Percent |
|----------------------------|------------------------------|------------------------------------|---------|
| YPDP Grantees              | 4                            | 4                                  | 100%    |
| Eligible Applicants        | 1,633                        | 1,633                              | 100%    |
| PTS -- Randomly Assigned   | 1,633                        | 1,633                              | 100%    |
| 18-month Follow-up Survey  | 1,633                        | 1,306                              | 80%     |




2. Describe the procedures for the collection of information including:


* Statistical methodology for stratification and sample selection,

* Estimation procedure,

* Degree of accuracy needed for the purpose described in the justification,

* Unusual problems requiring specialized sampling procedures, and

* Any use of periodic (less frequent than annual) data collection cycles to reduce burden.

a. Statistical methodology for stratification and sample selection


No sampling or stratification is being used in any of the three data collection activities that are the focus of this submission.


b. Estimation procedures


Because the evaluation includes the universe of grantees (i.e., during site visits) and survey respondents, estimation procedures will be straightforward regressions and t-tests of the difference in mean outcomes between the treatment and control groups. The primary statistical approach that will be used for the impact analysis is regression analysis.1 Regression analysis allows us to estimate the impact of the treatment (i.e., mentoring services provided by YPDP grantees) on the outcomes of interest while holding constant all other relevant observed variables. In mathematical terms, regression analysis permits us to estimate the equation


$$Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \beta_3 X_{3i} + \cdots + \beta_z Z_i + \varepsilon_i$$


where $Y_i$ is the outcome of interest for person $i$ (for example, post-program employment or earnings); the $X$ variables represent personal characteristics thought to influence the outcome, such as gender, race/ethnicity, and education; the $\beta$ terms (the regression coefficients) indicate the effects that the explanatory variables have on the outcome; and $\varepsilon_i$ is a random error term with mean zero for observation $i$. The variable of primary interest is $Z_i$, a dichotomous variable set equal to 1 for those who participate in the program (i.e., treatment group members) and 0 for others (i.e., control group members).
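To make the estimation concrete, the sketch below fits this equation by ordinary least squares using the Python statsmodels package. The data frame and variable names (df, earnings, treatment, female, age, education) are hypothetical placeholders, not the evaluation's actual variable names.

```python
# Minimal sketch of the impact regression described above, assuming a
# pandas data frame with hypothetical column names; "treatment" is the
# 0/1 indicator Z for assignment to YPDP mentoring services.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_impact(df: pd.DataFrame):
    # Y_i = b0 + b1*female + b2*age + b3*education + bz*treatment + e_i
    model = smf.ols("earnings ~ treatment + female + age + education", data=df)
    results = model.fit()
    # The coefficient on "treatment" estimates the net impact of the
    # incremental mentoring services, holding observed covariates constant.
    return results.params["treatment"], results.pvalues["treatment"]
```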


The main focus of the analysis effort will be on determining net impacts of the treatment on employment and earnings of YPDP participants at the individual site level, and if appropriate, for the pooled sample across the four YPDP sites. The treatment being provided under the experimental design (i.e., mentoring) is incremental – that is, an additional service that is being provided on top of existing employment and training services being provided to the control group. Both the treatment and control groups are receiving the existing employment and training services under the demonstration.


In addition to the employment and earnings outcomes that are the central focus of the impact study, additional outcomes collected through the survey and PTS will be explored (e.g., education outcomes and parenting outcomes). To the extent possible, exploratory analyses will be conducted on these additional outcome variables using the methods described above (though employment and earnings will remain the key outcomes of interest). To the extent feasible, subgroup analyses will be conducted using participant characteristics collected at the time of random assignment. The ability to conduct such subgroup analyses (for example, analyzing net impacts of the intervention on earnings for females versus males) will depend on the ability to pool the sample across the four sites; with sample sizes of approximately 400 at individual sites, it is not anticipated that subgroup analyses will yield statistically significant effects at the individual site level, particularly given the incremental nature of the intervention.




c. Degree of accuracy needed for the purpose described in the justification


Although inferences will be made to the respondent population only, it is useful to consider the statistical power of the estimates as if they were based on sampling. The concept of the minimum detectable effect (MDE) is a practical way to summarize the statistical power of a particular evaluation design.2 Orr (1999) describes the MDE as “the smallest true impact that would be found to be statistically significantly different from zero at a specified level of significance with specified power.” For a binary outcome, such as employment in the post-program period, the formula for the MDE is:


$$\mathrm{MDE} = Z \sqrt{\pi(1-\pi)} \sqrt{\frac{1-R^2}{nP(1-P)}}$$

where

$Z$ = a multiplier that converts the standard error of an impact estimator to its corresponding minimum detectable effect,

$\pi$ = the proportion of the study population with a successful outcome,

$R^2$ = the explanatory power of the impact regression,

$P$ = the proportion of sample members randomly assigned to the program group, and

$n$ = the total number of sample members.


The formula is similar for a continuous outcome such as earnings:


$$\mathrm{MDE} = Z \sigma \sqrt{\frac{1-R^2}{nP(1-P)}}$$

where $\sigma$ is the standard deviation of the outcome (e.g., earnings).


For the YPDP evaluation, if we pool all four sites, we assume that we will have 1,306 observations (80% of 1,633 total possible observations), or 320 observations (80% of 400 total possible) for individual sites.3 Additionally, we assume that we want to calculate the minimum detectable effect for a two-sided test with 80 percent power and a .05 significance level. Further, we compute the MDE for earnings, a continuous variable, and employment, a dichotomous variable. We assume a standard deviation for earnings of $4,899 based on data from the National Job Corps evaluation. For employment, we conservatively estimate that the mean outcome is .50. For earnings, we further assume that the $R^2$ for the regression of earnings on individual characteristics is .20, which is consistent with estimates from earnings regressions in the National Job Corps study. Finally, we assume that 50 percent of the sample is assigned to the treatment group. The table below shows the MDEs under these assumptions:



|              | Sample Size (adjusted for nonresponse) | Earnings | Employment |
|--------------|-----------------------------------------|----------|------------|
| Per site     | 320                                     | $1,372   | .16        |
| Pooled sites | 1,306                                   | $679     | .08        |


For the pooled analyses, the minimum detectable effects are small enough to provide statistically significant impact estimates if the programs have a reasonable impact on earnings and employment.
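As a check on these figures, the following sketch recomputes the table from the MDE formulas above. The $Z$ multiplier for a two-sided .05 test with 80 percent power is the sum of the corresponding standard normal quantiles (about 2.80). The sketch assumes, consistent with the table values, that the employment MDE is computed without regression adjustment ($R^2 = 0$), since the text specifies $R^2 = .20$ for earnings only.

```python
# Recompute the MDE table above from the formulas in section 2c.
# Assumes P = .50, pi = .50, sigma = $4,899, R^2 = .20 for earnings,
# and (consistent with the table values) R^2 = 0 for employment.
from scipy.stats import norm

def z_multiplier(alpha=0.05, power=0.80):
    # Z = z_(1 - alpha/2) + z_power; about 2.80 for these settings.
    return norm.ppf(1 - alpha / 2) + norm.ppf(power)

def mde_continuous(n, sigma=4899.0, r2=0.20, p=0.50):
    # MDE = Z * sigma * sqrt((1 - R^2) / (n * P * (1 - P)))
    return z_multiplier() * sigma * ((1 - r2) / (n * p * (1 - p))) ** 0.5

def mde_binary(n, pi=0.50, r2=0.0, p=0.50):
    # MDE = Z * sqrt(pi * (1 - pi)) * sqrt((1 - R^2) / (n * P * (1 - P)))
    return (z_multiplier() * (pi * (1 - pi)) ** 0.5
            * ((1 - r2) / (n * p * (1 - p))) ** 0.5)

for label, n in [("Per site", 320), ("Pooled sites", 1306)]:
    print(f"{label}: earnings ${mde_continuous(n):,.0f}, "
          f"employment {mde_binary(n):.2f}")
# Per site: earnings $1,372, employment 0.16
# Pooled sites: earnings $679, employment 0.08
```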




d. Unusual problems requiring specialized sampling procedures


There are no unusual problems requiring specialized sampling procedures. The three data collection efforts will collect data on the universe of grantees (i.e., site visits) or YPDP participants (i.e., PTS and follow-up surveys).




e. Any use of periodic (less frequent than annual) data collection cycles to reduce burden


The implementation/process study site visits occur at two points in time: approximately one year after each YPDP project begins random assignment, and towards the end of the grants. Staff and administrators in each of the four grantee programs enter data into the PTS beginning at the start of random assignment and continuing for approximately 18 months. All YPDP sites enter the initial information needed to conduct random assignment, and grantees use the PTS for ongoing program data entry to record service receipt and employment outcomes at 6, 12, and 18 months after random assignment. One purpose of the PTS is to reduce the burden of executing the grant provisions, including random assignment and service documentation, by providing grantees, particularly those without a pre-existing management information system, with a web-based electronic participant information system. The 18-month follow-up survey is administered only once.




3. Describe methods to maximize response rates and to deal with issues of nonresponse. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.


All four YPDP sites have agreed to participate fully in the evaluation; this agreement ensures a 100-percent response rate for the field-based implementation/process interviews during the site visits. Likewise, all grantees have agreed to enter participant data into the PTS to facilitate random assignment of all YPDP participants. Care has been taken, using an approach approved by the Urban Institute’s Institutional Review Board, to explain the study accurately and simply to potential participants, which should minimize refusal rates and maximize voluntary participation in the YPDP. Individuals who refused are not included in the study universe.


Techniques to Maximize Response Rate to Follow-up Survey. To achieve a high response rate on the follow-up survey, the evaluation contractor anticipates using multiple modes to contact respondents: telephone, mail, and in-person. In addition, participants will receive an incentive payment in the form of a $25 gift card if they complete the survey. The survey administration uses strategies that will maximize telephone interviews, minimize field data collection costs, and avoid issues that can arise with mixed-mode data collection. An overview of the procedures for conducting the follow-up survey follows.


  • Collect contact information as part of the enrollment process for each YPDP participant. At the time of intake into YPDP, participants were asked to provide contact information (i.e., telephone, e-mail, and address) for themselves, as well as three additional contacts. This information was entered into the PTS. YPDP grantee staff can update this contact information at any time.


  • Mail an advance pre-notification letter to the updated or last known address for the study subjects. The evaluation team is sending out the advance pre-notification letter for the follow-up survey to remind the individual about the YPDP study and provide background information about the survey effort. This pre-notification letter makes YPDP participants aware of the $25 gift card for completing the survey. Envelopes are marked “Address Service Requested,” so that mail is forwarded to an updated address while the sender, Abt Associates, Inc., is notified of that address. When letters are returned as undeliverable, Abt Associates staff contacts YPDP grantees to determine whether there has been a recent change of address that has not been recorded in the PTS and/or to obtain additional guidance from grantee staff on the best way to contact the individual. To improve the contact information and help support a strong response rate, ETA requests that the following changes to the pre-notification process be approved:


    • Revise sequencing of pre-notification of the survey. To increase study participants’ awareness of the survey and to increase the reliability of the contact information, Abt-SRBI, the survey firm, is sending two written communications to study participants before the survey is fielded at 18 months after random assignment. The first communication is a pre-notification packet that reminds YPDP study participants about the research, provides background information, and informs them of the telephone survey effort and the gift card incentive payment for completing the survey. The packet also contains a card and postage-paid envelope for mailing updated contact information. An email address and toll-free number are also provided. The card instructs the participant either to check a box if all of the information is correct or to make changes on the card, and then to return the card in the self-addressed stamped envelope. A pre-incentive in the form of a $2 gift card was initially tested to encourage a higher response to the request for updated contact information, and a modest increase in response was observed (discussed below). A second pre-notification letter is sent two weeks before calls to participants begin, to ensure they know when they will receive a call to take the survey. Both letters use the language of the original pre-notification letter, revised so that the first focuses on obtaining updated contact information and the second on making study participants aware of the upcoming survey call. The revised pre-notification letters are provided in Attachment G.


    • Add a pre-incentive ($2 gift card) to the pre-notification packet sent to all potential survey respondents. As mentioned, Abt-SRBI sends a letter to study participants making them aware of the survey and asking them to update their contact information. After the first four cohorts of the survey were conducted, Abt-SRBI found that response rates to the pre-notification letter (and the survey) were lower than anticipated and hypothesized that adding a $2 gift card to the pre-notification packet might motivate respondents to return the contact information card and, ultimately, could substantially boost the survey response rate. Abt-SRBI conducted a pilot test of the $2 pre-incentive over the next several cohorts of respondents. The pilot test indicated that, while the pre-incentive had no effect on the number of contact information cards returned, the survey response rate increased by about 5 percentage points for the cohorts receiving the pre-incentive. The overall response rate to the survey increased from 58 percent for the earlier cohorts (without the pre-incentive) to 63 percent for the cohorts receiving the pre-incentive. It is possible that this boost is attributable to factors other than the pre-incentive (because of the relatively small sample in the pilot test and the pre/post method used). Given the accompanying boost to the overall survey response rate, Abt-SRBI is requesting that the pre-incentive be continued for subsequent cohorts.


  • Commence up to 15 calls per sampled individual and conduct telephone interviews using Computer Assisted Telephone Interviewing (CATI). Survey staff will initiate telephone calls to YPDP participants during the week of the 18-month anniversary of the participant’s random assignment. Both treatment and control group members will be surveyed by telephone. Using the most recent contact information available, the survey team will commence telephone data collection with a design of up to 15 calls per participant. Even with recent contact information in the PTS, it is anticipated that it will be necessary to locate some respondents who have moved or changed phones since their last contact. If the most recent telephone number in the PTS is incorrect, or the survey team cannot reach the individual through the contact information provided, a team member will contact the program for any updates that might not have been entered into the PTS and for other information that YPDP staff might have to help locate the participant. The survey team will also call the alternative contacts listed in the PTS (which the participant provided as part of the intake process). As each participant is successfully contacted, the survey will be conducted using CATI, which captures responses entirely electronically.


  • In-Person Survey Recruitment and Administration. The survey team conducts in-person field follow-up in the YPDP grantee sites for cases in which a final determination has been made that the telephone numbers are incorrect, or in which the survey team has confirmed that the number is correct but has been unable to reach the respondent. An Abt-SRBI field team uses Field Management System software to manage the field tracking and follow-up effort. Field staff locate the respondent in the field; once a respondent is located and agrees to participate, the respondent uses a cell phone, provided by survey field staff, to call into the Abt-SRBI telephone center and take the survey with a trained interviewer. Once the interview is completed, the locator hands the $25 gift card to the respondent.4 To help with scheduling, respondents can also choose to complete the survey on-site at a grantee’s location. To implement on-site days at YPDP grantee sites, Abt-SRBI interviewer-locators coordinate with site staff to schedule dates and times when private rooms are available for multiple participants to complete the survey on-site. Abt-SRBI interviewer-locators will be present to greet participants, answer any questions about the survey, initiate the survey with the Abt-SRBI phone center, and provide the gift card upon completion of the interview.


The CATI program records all refusals and interview terminations in a permanent file, including the nature, reason, time, and circumstances of the refusal and the interviewer involved. This information is reviewed on an ongoing basis to identify any problems with the contact script, interviewing procedures, questionnaire items, etc. The refusal rate by interviewer is also closely monitored. Using these analyses, a “Conversion Script” has been developed. This script provides interviewers with responses to the more common reasons people give for not wanting to participate in the survey. The responses are designed to allay concerns or problems expressed by the telephone contacts. (See Attachment H for the Conversion Script.)


Abt-SRBI implements a refusal conversion plan in which each person selected for the sample who refuses to participate is re-contacted by the contractor approximately one to two weeks following the refusal. The contractor uses the Conversion Script in an attempt to convince the individual to reconsider and participate in the survey. Only the most experienced and skilled interviewers conduct refusal conversions. Exceptions to refusal conversion are allowed on an individual basis if for some reason the refusal conversion effort is deemed inappropriate.


Outcome data in the reporting file are maintained and regularly reviewed so that patterns and problems in both response rates and production rates can be detected and analyzed. Meetings will be held with the interviewing supervisory staff and the study management staff to discuss problems with contact and interviewing procedures and to share methods of successful persuasion and conversion.


Nonresponse Analysis for the Follow-up Survey. The actual differences between respondents and nonrespondents on the estimates of interest will not be known; in this situation, nonresponse bias is typically explored using indirect measures. We will complete a nonresponse analysis using the demographic characteristics collected in the PTS; if the response rate is 80 percent or higher, we do not expect the nonresponse analysis to be a central part of the analysis. Comparing the characteristics of respondents and nonrespondents to the follow-up survey should identify whether there is any evidence of significant nonresponse bias on key characteristics that could affect outcomes: age, gender, ethnicity, race, employment status prior to program participation, highest school grade completed, and services received. This analysis will suggest whether any weighting or other statistical adjustment needs to be made to correct for nonresponse bias in the completed sample.5


We have tracked response rates for the follow-up survey by site, as the survey is being administered on a rolling basis (at 18 months after random assignment). During data collection, if we find that response rates are low in some sites or subgroups, every effort will be made to increase response rates in those groups to reduce nonresponse bias in the overall estimates. We also plan to monitor the types of nonresponse (e.g., refusal, inability to contact, inability to respond), along with the reasons given for refusals; the types of and reasons for nonresponse can inform the nonresponse bias analysis. The analysis will be conducted according to Office of Management and Budget guidelines.


The size of the nonresponse bias in the sample respondent mean of a characteristic of interest is a function of the nonresponse rate and the difference between the respondent and nonrespondent population means. An estimate of the bias in the sample mean based only on the respondents is given by

$$\widehat{\mathrm{bias}}(\bar{y}_r) = \frac{n_{nr}}{n}\left(\bar{y}_r - \bar{y}_{nr}\right)$$

where $\bar{y}_r$ is the mean based only on respondents, $\bar{y}_{nr}$ is the mean based on nonrespondents, $n$ is the sample size, and $n_{nr}$ is the number of nonrespondents.
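For illustration, the computation is straightforward; the values below are hypothetical and are not evaluation results.

```python
# Estimated nonresponse bias in the respondent mean:
# bias = (n_nr / n) * (mean_respondents - mean_nonrespondents)
def nonresponse_bias(mean_resp, mean_nonresp, n, n_nonresp):
    return (n_nonresp / n) * (mean_resp - mean_nonresp)

# Hypothetical example: a .62 employment rate among respondents versus
# .55 among nonrespondents, with 327 nonrespondents out of 1,633.
print(nonresponse_bias(0.62, 0.55, 1633, 327))  # about 0.014
```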


We plan to make comparisons between respondents and nonrespondents available for each of the four YPDP sites. For example, the comparison of respondents and nonrespondents by gender will show whether proportionately more male YPDP participants are responding to the follow-up survey than female participants. We will also look at some characteristics such as age and race for both respondents and nonrespondents. If there are substantial differences in response rates for race and/or ethnicity groups, then we will examine survey data to see whether there are differences in survey responses between respondents in different race/ethnicity groups or different age groups. If there are differences between these groups, a post-stratification adjustment by race or age within each stratum may reduce bias due to nonresponse assuming that within these groups respondents are similar to nonrespondents. Depending on sample sizes in these groups, we may use post-stratification adjustment within strata. Variance estimation will then be done using the post-stratification option.
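One simple way to implement the respondent/nonrespondent comparisons described above is a chi-square test of independence between response status and each characteristic. The sketch below assumes a pandas data frame covering all sample members with a 0/1 "responded" indicator; the column names are hypothetical.

```python
# Compare respondents and nonrespondents on a categorical characteristic
# (e.g., gender, race/ethnicity, age group) using a chi-square test.
import pandas as pd
from scipy.stats import chi2_contingency

def compare_respondents(df: pd.DataFrame, characteristic: str):
    table = pd.crosstab(df[characteristic], df["responded"])
    chi2, p_value, dof, expected = chi2_contingency(table)
    # A small p-value suggests response rates differ across categories,
    # flagging a potential source of nonresponse bias.
    return table, p_value
```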


To perform the post-stratification adjustment, we will use data on gender, race/ethnicity, and age, which are available from the PTS for each member of the eligible population. As a first step, the variables, and the categories of those variables, to be used in the post-stratification adjustment will be determined. Cells for the post-stratification adjustment will be defined by the cross-classification of these categories. For example, with 4 categories of race/ethnicity, 3 age groups, and 2 gender groups, there will be 24 post-stratification cells. The actual number of cells will be determined based on the number of respondents in each cell in the sample; some cells may be collapsed if the number of respondents is too small or zero.


The number of persons in the population in each post-stratification cell will be determined; this is the control total for adjusting the sampling weights of the respondents. Since the sample is not a probability sample, the initial sampling weight of each respondent in each cell will be 1.0. The post-stratification weight assigned to a respondent in a cell will be the ratio of the number of persons in the population in that cell to the number of respondents in the sample in that cell, so that the sum of the weights equals the population total in the cell. This adjustment ensures that the weighted proportions of the sample by age, race/ethnicity, and gender are the same as those in the population. This weight will be used for producing population-based estimates.
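A minimal sketch of this weighting step, assuming pandas data frames: "population" holds one record per person in the eligible population (from the PTS) and "respondents" one record per completed survey; the cell variables are hypothetical column names.

```python
# Post-stratification weights: for each cell defined by gender,
# race/ethnicity, and age group, weight = population count / respondent
# count, so respondent weights in a cell sum to the cell's population total.
import pandas as pd

CELLS = ["gender", "race_ethnicity", "age_group"]  # hypothetical columns

def poststrat_weights(population: pd.DataFrame,
                      respondents: pd.DataFrame) -> pd.Series:
    pop_n = population.groupby(CELLS).size().rename("pop_n")
    resp_n = respondents.groupby(CELLS).size().rename("resp_n")
    cells = pd.concat([pop_n, resp_n], axis=1)
    cells["weight"] = cells["pop_n"] / cells["resp_n"]
    # Cells with no respondents yield missing weights and would be
    # collapsed with neighboring cells, as described above.
    merged = respondents.merge(cells["weight"].reset_index(),
                               on=CELLS, how="left")
    return merged["weight"]
```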


Missing Data. Throughout the project, we have conducted quality control procedures with grantees on the data they enter into the PTS to ensure both the accuracy and the completeness of the data. However, data may be missing from the PTS and from the survey if information is not available. Before conducting the planned analyses, we will examine record completeness. Missing data can be a concern in any analysis because simply dropping observations with missing data can reduce statistical power and potentially bias parameter estimates. For cases with missing values on important control covariates, such as gender, race/ethnicity, and marital status, we will examine the number of observations with missing data and will conduct analyses to assess whether the data appear to be missing at random or whether there are systematic differences between cases with missing and nonmissing values. If the data appear to be missing at random, we will either use listwise deletion or an imputation strategy such as multiple imputation to fill in missing values. Listwise deletion will be used if there are few cases with missing data and dropping those cases will have minimal impact on statistical power. Multiple imputation or a similar strategy will be used to impute control variables if we anticipate a substantial loss of statistical power due to a higher rate of missing data. We do not plan to impute key measures such as dependent variables, YPDP participation, or service receipt. If the missing data do not appear to be missing at random, we will document the patterns of missing data we observe and will present sensitivity analyses to assess the extent to which missing values may bias the impact estimates.
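As an illustration of the record-completeness check, a short sketch assuming a pandas analysis file with hypothetical column names ("treatment" is the 0/1 assignment indicator):

```python
# Summarize missingness in key control covariates before analysis.
import pandas as pd

COVARIATES = ["gender", "race_ethnicity", "marital_status"]  # hypothetical

def missingness_report(df: pd.DataFrame):
    report = pd.DataFrame({
        "n_missing": df[COVARIATES].isna().sum(),
        "pct_missing": 100 * df[COVARIATES].isna().mean(),
    })
    # Quick check for systematic patterns: compare missingness rates
    # between treatment and control cases.
    by_group = df.groupby("treatment")[COVARIATES].apply(
        lambda g: g.isna().mean())
    return report, by_group
```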




4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


A pretest of the follow-up survey was conducted with nine YPDP participants to ensure that the CATI script and online version were functioning properly and that the data were being collected accurately. The pretest covered the entire survey process, from sample management to tabulation of results. Any problems encountered during the pretest of the questionnaire were resolved before the survey was put into the field.




5. Name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.



Name and telephone number of individuals consulted on statistical aspects of the design:


Stephen Bell, Ph.D.

Abt Fellow and Principal Associate/Scientist

Social and Economic Policy Division

Abt Associates

4550 Montgomery Avenue, Suite 800

Bethesda, MD 20815

[email protected]



Carolyn J. Heinrich

Sid Richardson Professor of Public Affairs and affiliated Professor of Economics

Director, Center for Health and Social Policy

Lyndon B. Johnson School of Public Affairs

The University of Texas at Austin, P.O. Box Y

Austin, TX  78713-8925

[email protected]



Renée Spencer, Ed.D., LICSW

Associate Professor

Chair, Human Behavior Department; Coordinator, SWRnet

Boston University School of Social Work

264 Bay State Road

Boston, MA 02215


[email protected]




The agency responsible for receiving and approving contract deliverables is:


Employment and Training Administration

U.S. Department of Labor

Frances Perkins Building
200 Constitution Avenue, NW

Washington, DC 20210

Person Responsible: Michelle Ennis, Contracting Officer’s Technical Representative

(202) 693-3636

[email protected]








All data collection and analysis will be conducted jointly by:


Capital Research Corporation, Inc.

1910 N. Stafford Street

Arlington, VA 22207

Person Responsible: John Trutko, Project Director

(703) 522-0885

[email protected]




The Urban Institute

2100 M Street, NW

Washington, DC 20037

Person Responsible: Lauren Eyster, Co-Principal Investigator

(202) 261-5621

[email protected]




Abt Associates, Inc.

4550 Montgomery Ave # 800N

Bethesda, MD 20814-3343

Person Responsible: Karin Martinson, Co-Principal Investigator

(301)347-5726

[email protected]




George Washington University

Trachtenberg School of Public Policy and Public Administration

805 21st St NW, Washington, DC 20052

Person Responsible: Burt Barnow, Co-Principal Investigator

(202) 994-6379

[email protected]





1 When the outcome variable is not continuous, other statistical techniques, such as logit and probit analysis, can be used to provide estimates of the relationship. Although these approaches often provide better estimates of relationships, the equations are more difficult to interpret. When appropriate, we will use these more sophisticated techniques as well as the easier-to-interpret ordinary least squares regression analyses.


2 Howard S. Bloom (1995). “Minimum Detectable Effects: A Simple Way to Report the Statistical Power of Experimental Designs.” Evaluation Review, Vol. 19:5, 547-556. See also Larry L. Orr (1999). Social Experiments: Evaluating Public Programs with Experimental Methods. Thousand Oaks, CA: SAGE Publications.

3 The estimate for individual sites is based on 400 planned enrollments in three of the four YPDP sites; the fourth site has a slightly higher enrollment goal (433 enrollments), which yields only a marginally different MDE.

4 Abt Associates has used this method successfully with similar populations. For instance, Abt Associates is currently using this method to maximize response rates in Cook County, Illinois, as part of a survey being conducted in a study to measure the effects of child care subsidies for low-income families.


5 The use of indirect measures such as demographics to conduct nonresponse analysis is supported in the literature. See O’Neil, G. and Dixon, J. (2005). Nonresponse Bias in the American Time Use Survey. ASA Section on Survey Research Methods (pp. 2958-2966). [www.bls.gov/tus/papersandpubs.htm]; Groves, R.M. (2006). Nonresponse Rates and Nonresponse Bias in Household Surveys. Public Opinion Quarterly, 70, 646-675; and Kasprzyk, D. and Giesbrecht, L. (2003). Reporting Sources of Error in U.S. Government Surveys. Journal of Official Statistics, 19(4), 343-363.

