
Impact Evaluation of Upward Bound's Increased Focus on Higher-Risk Students - Baseline Data Collection Protocols

OMB: 1850-0822












Supporting Statement for Paperwork Reduction Act Submission to OMB: Part A



Impact Evaluation of Upward Bound’s Increased Focus on Higher-Risk Students









December 15, 2006

Revised February 15, 2007 and

May 4, 2007


Prepared for

National Center for Education Evaluation and Regional Assistance

U.S. Department of Education

555 New Jersey Ave., NW

Washington, DC 20208


Project Officer:

Jonathan Jacobson


Prepared by

Ryoko Yamaguchi, Alan Werner

Abt Associates Inc.

55 Wheeler St.

Cambridge, MA 02138


Project Director:

Alan Werner

Introduction and Overview of the Study

This submission is for regular clearance of the same baseline data collection protocols that OMB has already approved under emergency clearance for the Impact Evaluation of Upward Bound’s Increased Focus on Higher-Risk Students. This evaluation, which the Department of Education (ED) is conducting at the urging of OMB and in coordination with the 2007 grant competition, is focusing on the impacts of Upward Bound on students applying to enter the program as early as the summer of 2007. Since the emergency clearance requested by ED and approved by OMB will expire on 9/30/2007, and baseline data collection and random assignment could continue past then for some grantees, ED is submitting the same plans for an additional 60-day public comment period in order to obtain approval for baseline data collection over a longer period of time. The specific data collection activities included in this submission would continue over the first year of a multi-year evaluation and include use of Parental Informed Consent and Student Assent Forms, a Baseline Information Form, and a Student Selection Form. Part A describes the justification for the baseline data collection. Part B, submitted under separate cover, describes the statistical methods for selecting the grantees and students to be included in the evaluation.


The Upward Bound Program, initiated under Title IV of the Higher Education Act of 1965, is a Federal pre-college program designed to help economically disadvantaged students prepare for, enter, and succeed in college. It is the oldest and largest of the TRIO programs (http://www.ed.gov/about/offices/list/ope/trio/index.html), all of which share the objective of helping disadvantaged students achieve success at the postsecondary level. Funding for 800 Upward Bound grantees equaled approximately $278 million in Fiscal Year (FY) 2006.


Upward Bound grants are designed to improve college access and completion by assisting high school students aged 13–19 who are low-income (family income under 150 percent of the poverty level) or potential first-generation college students (neither parent holds a bachelor's degree). In each project, however, two-thirds of the participants must be both low-income and potential first-generation college students. In FY 2005, about 61,000 low-income or potential first-generation college students participated in about 800 Upward Bound projects around the country.1


Students usually enter the program while in the ninth or tenth grade. Although students may participate in Upward Bound through the summer following twelfth grade (three to four years in total), participants spend an average of 21 months in the program. Upward Bound projects are generally operated by two- or four-year colleges. Projects offer extensive academic instruction as well as college counseling, mentoring, and other support services. In addition to regularly scheduled meetings throughout the school year, projects also offer an intensive instructional program that meets daily for about six weeks during the summer.2 The annual average cost per participant is about $4,500, approximately ten times the per-participant cost of Talent Search, another TRIO program that also promotes college preparation by low-income, potential first-generation secondary school students.


Upward Bound was one of the first education programs evaluated through a large-scale, multi-site, random assignment evaluation. This evaluation, which began in 1991, was conducted using a sample of 67 Upward Bound grantees. Approximately 2,800 eligible students applying to Upward Bound programs went through random assignment between 1992 and 1994, with 1,500 assigned to the treatment group (Upward Bound) and 1,300 to a control group. A series of reports has been released describing findings from the evaluation. The most recent (Myers et al., 2004, on-line at http://www.ed.gov/rschstat/eval/highered/upward/upward-3rd-report.html) found no statistically significant evidence of impacts of Upward Bound on overall postsecondary attendance rates or college credits earned. At the same time, the evaluation found evidence that certain subgroups of students with lower college expectations experienced positive impacts from Upward Bound.


Pointing to the findings from the recent evaluation, OMB has rated the Upward Bound program “ineffective” and indicated a need for the program to be better targeted on higher-risk students (http://www.whitehouse.gov/omb/expectmore/summary.10000210.2005.html). Responding to OMB’s assessment, ED’s Office of Postsecondary Education (OPE) has introduced significant changes to Upward Bound for the FY 2007 grant competition. To be funded in 2007, Upward Bound grantees must serve new students starting in Grade 9 or 10, and 30 percent of the new students served must be higher-risk 9th graders, as indicated by low GPA, low Grade 8 test scores, or failure to take pre-algebra or higher in Grade 8 and algebra or higher in Grade 9 (http://www.ed.gov/legislation/FedRegister/finrule/2006-3/092206b.html). These changes reflect OPE’s interest in focusing the relatively resource-intensive Upward Bound program (the most expensive TRIO program) on eligible students who have greater academic needs and who are most likely to benefit from the program. In addition to changing program priorities to include more higher-risk students in Upward Bound, OPE has also included a requirement for grantees to cooperate with a new impact evaluation of the redesigned program by facilitating data collection if asked and by recruiting enough eligible students to permit the creation of a control group similar in size to the number of new students each grantee would serve in the 2007-2008 year. This evaluation is permitted under the Higher Education Act of 1965 as amended by the 1998 Higher Education Act Amendments (P.L. 105-244), Sec. 402H, 20 U.S.C. 1070a-18, “Evaluations and Grants for Project Improvement and Dissemination Partnership Projects.”3


The impact evaluation supported by the data collection will be a randomized controlled trial involving a representative sample of FY 2007 Upward Bound grantees and students. In addition to gathering information on student characteristics and outcomes, the evaluation will gather information on program implementation using surveys of grantee project directors and case studies of specific grantees. The study will be conducted by Abt Associates Inc. and its subcontractors, the Urban Institute and Berkeley Policy Associates, under a five-year contract with ED’s Institute of Education Sciences (IES), National Center for Education Evaluation and Regional Assistance. The study has the following major features:


  • A random sample of approximately 90–100 Upward Bound grantees, representative of the universe of Upward Bound programs in the continental U.S.;

  • Two simultaneous random assignment processes, one for higher-risk students and one for other students applying for Upward Bound;

  • A combined treatment and control group of 3,600 students, or about 20 treatment group students and 20 control group students for each sampled grantee;

  • A follow-up survey and collection of student records at about 24 months after random assignment to assess the impact of Upward Bound on high school-level outcomes of interest;

  • A grantee survey to collect information about grantee practices and services;

  • A series of 20 grantee case studies to investigate the relationship between grantee practices and services and impacts on students.


The timing of FY2007 Upward Bound grant announcements, grantee sampling, data collections, case study site visits and major reports is presented below. Longer-term follow-up may be implemented under a subsequent evaluation contract.


Timing of Major Project Activities and Reports

Activity or Report | Timing
Grantee sampling(a) | Winter 2007
FY2007 grant announcements | Spring 2007
Baseline student data collection and random assignment | Winter 2007 – Winter 2008
Grantee Survey | Fall 2008
Follow-up Student Survey | Spring – Fall 2009
Impact Report | Summer 2010
Site visits for case studies | Fall 2010 – Winter 2011
Case Study Report | Summer 2011

(a) Note that incumbent grantee applicants will be sampled before 2007 grants are announced (in spring 2007) in order to give grantees sufficient time to recruit the eligible students needed for the evaluation. It is expected that most applicants who are current Upward Bound grantees will be renewed. Incumbent grantees will be oversampled to account for the small number of application rejections expected. Non-incumbent grantees will be excluded from the evaluation.


A. Justification

A.1. Circumstances Requiring the Collection of Data

This forms clearance submission covers data collection activities to take place in the first stages of the study and includes: (1) parental informed consent and student assent forms; (2) a baseline information form to be completed prior to random assignment by students eligible for Upward Bound; and (3) a student selection form completed by Upward Bound staff prior to random assignment. Further requests will be submitted for approval of future data collections.


While Upward Bound has been evaluated in the past, the new evaluation will make important contributions in two major areas. First, the new program priorities should increase the enrollment of academically at-risk students, changing the composition of Upward Bound projects and perhaps the type and mix of services provided. The new study is designed to assess program impacts both for Upward Bound students overall and for higher-risk students as defined specifically in the new initiative.


The second major area in which the proposed study will contribute to knowledge about Upward Bound concerns how Upward Bound programs are organized, the various services they provide, and the relationship of program organization and practices to program impacts. The new study will look “inside the black box” to uncover the types of services and other discretionary program features that are associated with positive impacts on students. The statistical relationships uncovered will be used to target and design a number of program case studies that will examine qualitatively the nature of the relationship between program practices and program impacts and will develop hypotheses that may inform program improvement.


Data collection for the study will take place on the following schedule:


Data Collection Activity | Timing
Parental Consent and Student Assent Forms | Winter 2007 – Winter 2008
Baseline Information Forms | Winter 2007 – Winter 2008
Student Selection Forms | Winter 2007 – Winter 2008
Upward Bound Grantee Survey | Fall 2008
Follow-up Survey of Students | Spring – Fall 2009
School Record Abstractions | Fall 2009
Data Collection for Case Studies | Fall 2010 – Winter 2011


Note that the grantee survey, follow-up survey of students, school record abstractions, and case study protocols are NOT included in this submission.


A.2. Purposes and Uses of the Data

The data collected for the proposed study are needed to address the following five main research questions:


  1. What is the impact of Upward Bound on student outcomes, both overall and for higher-risk students?

  2. What strategies and approaches do grantees follow to promote academic success for higher-risk students?

  3. Does the impact of Upward Bound vary according to grantee practices and characteristics, including grantees’ previous experience with the Upward Bound Participant Expansion Initiative?

  4. Is there a relationship between program impacts and the ratio of higher-risk students to total students that grantees serve?

  5. Is there a relationship between program impacts and control group students’ receipt of services similar to those offered by Upward Bound, such as tutoring, mentoring, after-school and summer enrichment programs, and college application and financial aid assistance?


To address these research questions, the study includes the following data collection activities and instruments:


  1. Parental Consent Form and Student Assent Form (included in this submission). During the student recruitment and intake process for Upward Bound, eligible students applying for entrance to Upward Bound will be given Parental Consent Forms (Attachment 1) and Student Assent Forms (Attachment 2). Both forms will explain the study and what it means to participate. The Parental Consent Form asks for permission to survey the child at baseline and follow-up (approximately 24 months after intake), permission to access the child’s school records, and permission to use the child’s Social Security number to obtain the relevant records. The Student Assent Form is used to ensure that students have some understanding of what it means to be in the study and agree to be in the research sample and cooperate with data collection activities. Students may opt out of the study even if their parents consent.

  2. Student Baseline Information Form (included in this submission). The Student Baseline Information Form (Attachment 3) will collect demographic, attitudinal, and school-related information from each student in the study sample prior to random assignment. The data collected on the Baseline Information Form have multiple uses, including:

    1. As a tool for assessing the integrity of random assignment through a comparison of average baseline characteristics of the treatment and control groups,

    2. As covariates to improve the precision of the statistical modeling of program impacts,

    3. As the basis for defining subgroups of students for later analysis, and

    4. As descriptive statistics for the study sample and to support potential analyses of selection bias due to survey nonresponse.

  3. Student Selection Form (included in this submission). The Student Selection Form (Attachment 4) will be completed by Upward Bound staff for each student included in the study sample, prior to the random assignment of these students. This form will collect the following information: which criterion or criteria were used to determine each student’s eligibility for Upward Bound (low-income status, first-generation college student status, or both); whether the student is included as part of the mandatory population of higher-risk students and, if so, the specific selection criterion for higher-risk status; and whether, in the absence of random assignment, the Upward Bound program would have been more likely or less likely to have selected the student. These data are collected to allow for the identification and analysis of key subgroups as defined by Upward Bound program rules or by staff categorization or assessment. In particular, these data will permit the estimation of impacts of Upward Bound for the subgroup of higher-risk students identified by OPE as in greatest need of Upward Bound services, as well as for the subgroup of preferred students identified by grantees themselves as the population they would serve in the absence of random assignment.

  4. Upward Bound Grantee Survey (NOT included in this submission request). The Upward Bound Grantee Survey will be used to learn about the specific administrative, structural, and service choices made by grantees. This information will be used to describe the grantee programs included in the study sample and to determine the statistical relationship between program strategies and student impacts.

  5. Follow-up Survey of Students (NOT included in this submission request). The Follow-up Survey will be administered to students in the treatment and control groups approximately 24 months after random assignment. The survey will collect information about key student outcomes, including, for example: expectations and plans about post-secondary education; college preparation activities such as test taking; participation in non-academic school activities; preferences regarding social activities and friends; other services related to preparation for post-secondary education. The survey will be used to estimate program impacts on key student outcomes and to document the use of other services similar to Upward Bound by both treatment and control group students.

  6. School Record Abstractions (NOT included in this submission request). School Record Abstractions will be used to collect selected data from students’ school records at approximately 24 months after random assignment. Among the data to be collected are, for example: courses taken, attendance and behavior records, grades, scores on objective statewide or other academic performance tests. The data will be used to estimate impacts on key student outcomes.

  7. Case Study Protocols (NOT included in this submission request). The study will conduct case studies of approximately 20 Upward Bound grantees to investigate the likely explanations and empirical basis for the statistical associations found between program structures and practices and student impacts. The data collection activities for the Case Studies will include: open-ended interviews with Upward Bound grantee management and staff; open-ended interviews with other key informants; observations of Upward Bound activities; focus groups of treatment and control group students.


A.3. Use of Information Technology To Reduce Burden

The potential for using automated information technology is limited in this study. The study will rely on baseline and follow-up data gathered from self-administered surveys of eligible applicants to Upward Bound programs. Baseline surveys will be completed during the application process, and follow-up surveys will be administered to groups of study participants at students’ schools. Upward Bound staff will complete student selection forms during the application process. To the extent possible, procedures for completing these forms and surveys will be integrated into a grantee’s own application process to minimize the burden on parents, students, and grantee staff. Where school records are automated, those data will be downloaded for use by members of the evaluation team.


A.4. Efforts To Identify Duplication

Potential duplication of effort is of two general types: addressing research questions that have already been answered and duplicating data collection. As explained in A.1 above, program priorities have changed sufficiently to render many of the findings of the previous Upward Bound impact evaluation dated. Moreover, the previous evaluation did not systematically investigate the relationship between program structures and practices and student impacts. To avoid duplication of effort in data collection, the student baseline and follow-up surveys will not ask for information available in school records.


A.5. Small Business

This information collection will not have a significant economic impact on a substantial number of small entities.


A.6. Consequences of Not Collecting the Information

Failure to collect the information proposed in this request will prevent the U.S. Department of Education from assessing the degree to which the new priorities for the Upward Bound program have led to improvements in targeted student outcomes or from learning systematically which program approaches are associated with, and potentially lead to, positive impacts.


A.7. Special Circumstances Justifying Inconsistencies With Guidelines in 5 CFR 1320.5

There are no special circumstances required for the collection of information.


A.8. Consultation Outside the Agency

Notices soliciting public comments will be published in the Federal Register. Any comments received during the comment periods will be addressed prior to submission to OMB for final approval.


The study will employ a technical work group (TWG) consisting of recognized authorities on experimental impact studies, educational opportunity programs for at-risk youth, and college preparatory programs. The following individuals have agreed to serve on the TWG:


  • Thomas Kane, Harvard University;

  • Fritz Scheuren, National Opinion Research Center (NORC);

  • Howard Bloom, MDRC;

  • Rebecca Maynard, University of Pennsylvania;

  • Laura Perna, University of Pennsylvania;

  • Sylvia Hurtado, UCLA; and

  • Patricia McDonough, UCLA.


A.9. Payments or Gifts to Respondents

No payments or gifts are planned for survey respondents for the baseline survey or the student selection form. However, to help compensate grantee staff for the substantial time (approximately 10 hours) expected to be needed to complete selection forms for approximately 40 students per grantee, payments of $50 per grantee are planned. The payments are intended to ameliorate in part the extra burden imposed by data collection for the study. Public comments to ED in response to the new priority for the Upward Bound program raised concerns about the burden of cooperating with the study,4 and this payment, within the bounds of payments offered to sites in other evaluations, addresses this concern.


A.10. Assurance of Data Privacy

Abt Associates and its subcontractors will follow procedures for ensuring and maintaining data privacy, consistent with the Family Educational Rights and Privacy Act of 1974 (20 USC 1232g), the Privacy Act of 1974 (P.L. 93-579, 5 USC 552a), the Freedom of Information Act (5 USC 552), and related regulations, including but not limited to 41 CFR Part 1-1, 45 CFR Part 5b, and the Federal common rule or ED final regulations on the protection of human research subjects. In particular, the Education Sciences Reform Act of 2002, Title I, Part E, Section 183 requires “[a]ll collection, maintenance, use, and wide dissemination of data by the Institute” to “conform with the requirements of section 552a of title 5, United States Code, the confidentiality standards of subsection (c) of this section, and sections 444 and 445 of the General Education Provisions Act (20 U.S.C. 1232g, 1232h).” These citations refer to the Privacy Act, the Family Educational Rights and Privacy Act, and the Protection of Pupil Rights Amendment.


In addition, for student information, section 183 provides that “the Director shall ensure that all individually identifiable information about students, their academic achievements, their families, and information with respect to individual schools, shall remain confidential in accordance with section 552a of title 5, United States Code, the confidentiality standards of subsection (c) of this section, and sections 444 and 445 of the General Education Provisions Act.” Subsection (c) of section 183 requires the Director of IES to “develop and enforce standards designed to protect the confidentiality of persons in the collection, reporting, and publication of data.” Subsection (d) of section 183 prohibits disclosure of individually identifiable information and makes the publishing or communicating of individually identifiable information by employees or staff a felony.


Data to be collected will not be released with individual student or grantee identifiers. Data will be presented in aggregate statistical form only. All study staff involved in collecting, reviewing, or analyzing individual-level data will be knowledgeable about data security procedures and will be prepared to describe them in full detail to respondents. Respondents will be assured that all information identifying them or their Upward Bound program will be kept private to the extent allowed by law. The privacy procedures adopted for this study during all rounds of data collection, data processing, and analysis include the following:


  • All study respondents will be assured that strict rules will be followed to protect their privacy. The reports prepared for this study will summarize findings across the sample and will not associate responses with a specific school or individual. We will not provide information that identifies schools or individuals to anyone outside the study team, except as required by law.

  • To ensure data security, all individuals hired by our contractor, Abt Associates Inc., are required to adhere to strict standards and to sign an oath of confidentiality as a condition of employment. Abt’s subcontractors will be held to the same standards.

  • Hard-copy data collection forms will be delivered to a locked area for receipt and processing. Abt Associates Inc. maintains restricted access to all data preparation areas (i.e., receipt, coding, and data entry). All data files on multi-user systems will be under the control of a database manager, with access limited to project staff on a “need-to-know” basis only.

  • Individual identifying information will be maintained separately from completed data collection forms and from computerized data files used for analysis. No respondent identifiers will be contained in public use files made available from the study, and no data will be released in a form that identifies individual grantee staff, service providers, program participants, or comparison group members.

  • All study staff with access to individually identifiable data (such as school records, for example) will go through security clearance as mandated by the Department of Education.


Because the Privacy Act of 1974 applies to this collection, a Notice for a New System of Records will be prepared for submission to the Federal Register.


A.11. Questions of a Sensitive Nature

The baseline student survey does not include any questions of a sensitive nature. Nonetheless, student respondents will be given the opportunity to refuse to answer any question that causes discomfort.



Respondent Burden Estimates

Informant/Instrument | Number of Responses(a) | Mean Time per Response (Hours) | Total Respondent Time (Hours) | Estimated Hourly Wage | Estimated Cost to Respondents
Parents: Consent Forms | 3,600 | 1/4 | 900 | $15.67(b) | $14,103
Students: Assent Forms | 3,600 | 1/4 | 900 | -- | --
Students: Baseline Forms | 3,600 | 1/3 | 1,200 | -- | --
Program Staff: Student Selection Forms (40 per grantee) | 90 | 10 (1/4 hour per student for 40 students) | 900 | $19.62(c) | $17,658
Total | 10,890 | | 3,900 | | $31,761
Annual | 10,890 | | 3,900 | | $31,761

(a) Total respondents equal 7,290, of whom 3,600 students will complete two distinct forms.

(b) Average hourly wage for all workers in private industry in 2004, from the 2006 Statistical Abstract of the U.S., Table 628.

(c) Average hourly wage for lecturers at public and private post-secondary educational institutions in 2005, from the 2006 Statistical Abstract of the U.S., Table 282. The hourly average is computed from the average annual salary, based on a work year of 2,080 hours.


A.12. Estimate of Response Burden

The table above presents our estimates of response burden for the data collection activities requested for approval in this submission: Parental Consent Forms, Student Assent Forms, Baseline Information Forms, and Student Selection Forms. Each data collection activity will be administered only once. The time required to complete the consent forms, baseline survey, and student selection forms is based on multiple pretests of the instruments by study staff.


A.13. Estimate of Total Capital and Startup Costs/Operation and Maintenance Costs to Respondents or Record-Keepers

There are no annualized capital/startup or ongoing operation and maintenance costs involved in collecting the information. Other than the opportunity costs represented by the time to complete the surveys, there are no direct monetary costs to respondents.


A.14. Estimates of Costs to the Federal Government

The estimated cost to the Federal Government for the data collection activities included in this request for approval is $1,494,583. This cost estimate includes: instrument development and pretesting; staff training; site recruitment; implementing random assignment; collecting data; and editing, key entry, and data processing. The cost of data collection for the entire study is $3,719,998, an average of approximately $744,000 per year over the five years of the study. The table below documents annual costs for data collection. The highest data collection costs are anticipated during Year 1, because of the need to sample sites, obtain parental consent and student assent, and collect baseline data on students prior to random assignment.

Annual Costs to the Federal Government of Data Collection

Year of Study | Annual Cost
Year 1 | $1,494,583
Year 2 | $890,883
Year 3 | $879,692
Year 4 | $216,296
Year 5 | $238,544


A.15. Changes in Burden

There is no change in burden since this request is an extension of the currently approved collection.


A.16. Plans for Analysis, Tabulation, Publication, and Schedule

Analysis and Tabulation

Overall Analytic Approach


This study will rely on the random assignment of students to either an Upward Bound treatment group or a control group that will not receive Upward Bound services. Random assignment will produce treatment and control groups that are, in expectation, comparable in every way except access to the Upward Bound program. We will then compare treatment and control group outcomes, controlling for covariates, and interpret any statistically significant differences in outcomes as impacts attributable to participation in the program. As detailed in Part B, submitted under separate cover, the evaluation will estimate impacts for treatment group students as a whole, as well as for the following subgroups, which might indicate the settings and students for which the program is most effective:


  • Students defined as “higher academic risks” under the new Upward Bound priorities defined by OPE;

  • Students identified by Upward Bound program operators as those who would have been enrolled in the absence of random assignment (preferred students from the perspective of individual grantees);

  • Students enrolled in Upward Bound programs operated by grantees that also operate Talent Search programs (which would offer college preparation services to control group students, but at substantially lower cost and intensity);

  • Students enrolled in Upward Bound programs operated by 4-year post-secondary institutions; and

  • Students enrolled in Upward Bound programs operated by 2-year post-secondary and non-academic institutions (which are less likely to offer exposure to life at a four-year college or university).


Accounting for the Nested Nature of the Data. The estimation of program impacts on students is relatively simple and straightforward; our estimation, however, must account for the fact that students selected for the evaluation are nested within Upward Bound programs. Therefore, we will use multi-level modeling to partition the variance between students and grantees, producing both more precise point estimates of impacts and appropriate standard errors (Raudenbush and Bryk, 2002).
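
For concreteness, the sketch below shows how such a two-level model (random intercept and random treatment slope by grantee) could be fit in Python with the statsmodels library. It is illustrative only: the data are synthetic and all variable names are hypothetical placeholders, not the study's actual files or specification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data: 90 grantees x 40 students, purely illustrative.
    rng = np.random.default_rng(0)
    grantee = np.repeat(np.arange(90), 40)
    treat = rng.integers(0, 2, size=grantee.size)  # random assignment within grantee
    pre = rng.normal(0, 1, grantee.size)           # baseline test score
    site = rng.normal(0, 0.2, 90)[grantee]         # grantee-level variation
    score = 0.15 * treat + 0.5 * pre + site + rng.normal(0, 1, grantee.size)
    df = pd.DataFrame({"score": score, "treat": treat, "pre": pre, "grantee": grantee})

    # Two-level model: students (level 1) nested in grantees (level 2), with a
    # random intercept and a random treatment slope for each grantee.
    model = smf.mixedlm("score ~ treat + pre", df, groups=df["grantee"], re_formula="~treat")
    result = model.fit()
    print(result.params["treat"])  # fixed effect on treat: average impact across grantees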


Using Covariates to Improve Precision. We plan to use student-level and potentially grantee-level control variables to improve precision and thereby detect smaller effects of Upward Bound. The covariates explain within-group variation in outcomes, reducing the residual variance against which the treatment effect is tested and in turn increasing statistical power. Potential control variables include, for example, student minority status, gender, age, and expected years of education.


Attrition. Attrition from the research sample, whether overall or differential between the treatment and control groups, would call the unbiased nature of the impact estimates into question. We will conduct a series of comparisons to assess the extent to which attrition has resulted in (1) a sample that differs from the original sample selected (i.e., the treatment group on which data were collected is systematically different from the treatment group originally selected) and (2) two groups that are no longer comparable (i.e., the treatment and control groups on which data were collected are systematically different). To test for the possibility of non-comparable groups due to overall or differential attrition, we will compare the treatment and control group students in our analytic sample (i.e., survey respondents) on a set of baseline characteristics. Differences will be tested using t-tests for continuous measures and chi-square tests for categorical variables.
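
As an illustration of these balance checks, the sketch below applies a t-test and a chi-square test with scipy; the data are synthetic and the variable names hypothetical.

    import numpy as np
    import pandas as pd
    from scipy import stats

    # Hypothetical respondent sample: a treatment flag and two baseline measures.
    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "treat": rng.integers(0, 2, 500),
        "pre": rng.normal(0, 1, 500),         # continuous baseline test score
        "minority": rng.integers(0, 2, 500),  # categorical characteristic
    })

    # t-test for a continuous baseline measure.
    t_stat, p_cont = stats.ttest_ind(df.loc[df.treat == 1, "pre"],
                                     df.loc[df.treat == 0, "pre"])

    # Chi-square test for a categorical baseline measure.
    chi2, p_cat, dof, expected = stats.chi2_contingency(pd.crosstab(df.treat, df.minority))
    print(p_cont, p_cat)  # large p-values are consistent with balanced groups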


To address any attrition bias detected, we will re-weight the data for the sample of survey respondents based on nonexperimental models of the attrition process. One such model uses a two-stage procedure in which a logistic regression of attrition on baseline characteristics is estimated first. This equation is of the form:


Prob(attrition) = f (baseline characteristics) (1)


Each case in the completed survey sample is then weighted by 1/(1-p), where p is the predicted probability of attrition for that case, given its baseline characteristics. The weights allow observations with characteristics that are strongly associated with attrition to “count more” in the analysis, making up for their relative scarcity in the data. When differential attrition (i.e., treatment versus control) is a possibility, this approach works best when applied independently to the treatment and control group samples. As an additional test of the effects of attrition, we will use extant data on students and schools to determine whether participants who attrited differ on measures such as average achievement at follow-up.
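
A minimal sketch of this two-stage reweighting appears below, again with synthetic data and hypothetical names: a logistic regression predicts attrition from baseline characteristics, and each retained case is weighted by 1/(1 - p).

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical baseline sample with an attrition flag (1 = lost to follow-up).
    rng = np.random.default_rng(2)
    df = pd.DataFrame({"pre": rng.normal(0, 1, 1000),
                       "female": rng.integers(0, 2, 1000)})
    logit_index = -1.5 + 0.4 * df["pre"]
    df["attrited"] = (rng.random(1000) < 1 / (1 + np.exp(-logit_index))).astype(int)

    # Stage 1: logistic regression of attrition on baseline characteristics.
    X = sm.add_constant(df[["pre", "female"]])
    p = sm.Logit(df["attrited"], X).fit(disp=0).predict(X)

    # Stage 2: weight each completed (non-attrited) case by 1/(1 - p), so cases
    # with attrition-prone profiles count more, offsetting their scarcity.
    respondents = df[df["attrited"] == 0].copy()
    respondents["weight"] = 1.0 / (1.0 - p[df["attrited"] == 0])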


Impact Analysis of Upward Bound


In this section, we discuss the approach to answering the main research question of the study:


  • What is the impact of Upward Bound on student outcomes, both overall and for higher-risk students?

Estimating Intent-to-Treat (ITT) and Treatment-on-the-Treated (TOT) Effects. Our plan for the impact analysis is designed to provide estimates of both the impact of the opportunity to participate in Upward Bound (intent-to-treat, or ITT, impact estimates) and the impact of actual participation in the program (treatment-on-the-treated, or TOT). We expect that some proportion of students offered Upward Bound services will ultimately choose not to participate in the program.5 On the one hand, excluding these students from the analysis entirely would introduce bias, as there is no way to determine which matching students from the control group should also be dropped from the study. On the other hand, including these students in the analysis will understate the true average effect of participation in Upward Bound, since students who choose not to participate are included in the analysis but effectively receive no treatment and hence experience zero effects. The ITT analysis will answer the policy question, “What is the impact of Upward Bound on average for students who are offered the program?” We will estimate ITT effects first for all students offered Upward Bound services and then separately for higher-risk students in a subgroup analysis (see below for details). We will provide ITT impact estimates on outcomes including, for example: receipt of tutoring; participation in summer enrichment programs; high school courses taken; high school grades; scores on state achievement tests (when available at the high school level); participation in college preparation activities, including tests required for college admissions; graduation and dropout behavior; and other outcomes.


We will also provide estimates of the impact of the treatment-on-the-treated (TOT), which indicate the effects of actually participating in the program. These differ from ITT effects because not all treatment group members—students offered the chance to participate in Upward Bound at participating projects—will choose to participate.6 We will estimate TOT effects for all of the groups and outcome variables for which we will estimate ITT effects.


Below we describe our approach to these analyses in more detail.


Estimation of ITT Effects. We will use a two-level multi-level model7 to investigate the overall impact of Upward Bound on all students offered Upward Bound services. The following is the prototypical model we will use, with student achievement as an example outcome measure and including the control variables listed above:


Level-1:

Y_ij = β_0j + β_1j(treat_ij) + β_2j(pre_ij) + β_3j(minority_ij) + β_4j(female_ij) + β_5j(age_ij) + β_6j(expect_ij) + r_ij   (1)

Level-2:

β_0j = γ_00 + u_0j
β_1j = γ_10 + u_1j   (2)


where the level-1 factors are defined as:

Y_ij = the achievement test score of the ith student in the jth program, at the end of the student's 10th-grade school year;

treat_ij = an indicator variable denoting whether the ith student in the jth program is a treatment student (treat_ij = 1) or a control student (treat_ij = 0);

pre_ij = the baseline achievement test score for the ith student in the jth program;

minority_ij = 1 if the ith student in the jth program is non-white, and 0 otherwise;

female_ij = 1 if the ith student in the jth program is female, and 0 otherwise;

age_ij = age in months at the time of random assignment of the ith student in the jth program;

expect_ij = the expected years of education at baseline for the ith student in the jth program; and

r_ij = the student-level residual of the ith student in the jth program. These residuals are assumed to be normally distributed with mean 0 and variance σ².


At level-1, the coefficient β_1j can be interpreted as the estimated average ITT impact of Upward Bound at program j. Each program thus has its own average treatment effect, which is simply the difference in average achievement between the treatment and control groups at that site, controlling for the other covariates in the model. In the second level of the model (Equation 2), β_1j, Upward Bound's impact in program j, varies around a grand (overall) mean, γ_10, with independent errors u_1j assumed to be normally distributed with mean 0 and variance τ. This model also assumes that the u_1j's are independent of the r_ij's. The parameter γ_10 represents the overall average ITT impact of Upward Bound across all programs, and the term u_1j represents the difference between the impact of Upward Bound in the jth program and the overall impact estimate. We will use the standard error associated with γ_10 to calculate a t-statistic and test the null hypothesis that Upward Bound has no effect on those offered the program. If γ_10 is statistically significant and positive, we will reject this null hypothesis and conclude that, across programs, the opportunity to participate in Upward Bound had a positive impact on those students.


To test this same null hypothesis for higher-risk students, the simplest approach is to fit the model specified above to just the sample of students identified as higher-risk. In this subgroup model, γ_10 would then represent the overall average ITT impact of Upward Bound across programs for higher-risk students. A second approach uses an adapted model to directly test whether the difference in the ITT impact of Upward Bound for higher-risk versus lower-risk students is statistically different from zero. This adapted model includes an indicator variable and interaction term at level-1, and is specified as follows:


Y_ij = β_0j + β_1j(treat_ij) + β_2j(lowrisk_ij) + β_3j(treat_ij × lowrisk_ij) + β_4j(pre_ij) + β_5j(minority_ij) + β_6j(female_ij) + β_7j(age_ij) + β_8j(expect_ij) + r_ij   (3)

β_1j = γ_10 + u_1j,   β_3j = γ_30 + u_3j   (4)

where

lowrisk_ij = an indicator variable denoting whether the ith student in the jth program is a lower-risk student (lowrisk_ij = 1) or a higher-risk student (lowrisk_ij = 0).


In this adapted model, the coefficient β_1j is directly interpreted as the average impact of Upward Bound in program j on the higher-risk students offered the program, controlling for the covariates in the model. The coefficient β_3j, in turn, is interpreted as the difference, for program j, between the average impact of Upward Bound on lower-risk students offered the program and the average impact on higher-risk students offered the program. At level-2, these program-level impact estimates vary around overall mean impacts γ_10 and γ_30. This allows us to directly test the null hypothesis that Upward Bound had no effect on those higher-risk students offered the program: if γ_10 is statistically significant and positive, we will reject this null hypothesis and conclude that, across programs, Upward Bound has a positive impact on higher-risk students. In addition, this model allows us to directly test the null hypothesis that the impact of Upward Bound for lower-risk students offered the program is not statistically different from the impact for higher-risk students offered the program: if γ_30 is statistically significant, we will reject this null hypothesis and conclude that, across programs, the impact of Upward Bound on lower-risk students differs significantly from the impact on higher-risk students.
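
The sketch below illustrates both approaches (the higher-risk subsample fit and the interaction fit) in Python, again with synthetic data and hypothetical names; the study's actual estimation may differ.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data with a lower-risk indicator (1 = lower-risk, 0 = higher-risk).
    rng = np.random.default_rng(4)
    n = 3600
    df = pd.DataFrame({
        "grantee": np.repeat(np.arange(90), 40),
        "treat": rng.integers(0, 2, n),
        "lowrisk": rng.integers(0, 2, n),
        "pre": rng.normal(0, 1, n),
    })
    df["score"] = (0.2 * df["treat"] + 0.1 * df["treat"] * df["lowrisk"]
                   + 0.5 * df["pre"] + rng.normal(0, 1, n))

    # Approach 1: refit the basic model on the higher-risk subsample only.
    sub = df[df["lowrisk"] == 0]
    m1 = smf.mixedlm("score ~ treat + pre", sub, groups=sub["grantee"], re_formula="~treat").fit()

    # Approach 2: interaction model; `treat` estimates the higher-risk impact
    # (gamma_10) and `treat:lowrisk` the lower- vs. higher-risk difference (gamma_30).
    m2 = smf.mixedlm("score ~ treat * lowrisk + pre", df, groups=df["grantee"], re_formula="~treat").fit()
    print(m1.params["treat"], m2.params["treat"], m2.params["treat:lowrisk"])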


Estimation of TOT Effects. As mentioned above, ITT impact estimates likely understate the true average impact of participating in Upward Bound, since the treatment group includes students who did not participate in the program and who are therefore expected to experience zero impacts. Further, few program resources are spent on no-shows, so it is useful to also estimate impacts on those students who actually participated in the program. Bloom (1984) developed a statistical correction by which we adjust the estimate of the impact on the entire treatment group (i.e., the effect of the availability of Upward Bound, or the ITT impact estimate) to obtain the average impact on participants (the TOT estimate). The adjustment divides the average impact on the entire treatment group by the proportion of students who are participants (1 - r), where r is the nonparticipation rate, yielding an unbiased estimate of the average impact on participants in Upward Bound. This correction will be used to adjust the impact estimates to reflect the students who actually received Upward Bound services. The only assumption needed is that the program has no impact on students who did not receive services, which seems quite reasonable in this case; the adjustment does not require any assumptions about similarities between those who drop out and those who participate.
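
The correction itself is simple arithmetic; the sketch below uses purely illustrative numbers.

    # Bloom (1984) no-show correction: divide the ITT estimate by the
    # participation rate to estimate the effect on actual participants (TOT).
    itt_estimate = 0.12      # hypothetical average ITT impact
    nonparticipation = 0.20  # r: hypothetical share of the treatment group that never participates

    tot_estimate = itt_estimate / (1.0 - nonparticipation)
    print(tot_estimate)      # 0.15: implied average impact on participants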


Describing and Assessing Upward Bound Program Variations


In this section, we discuss the following key research questions, which specifically call for a non-experimental exploration of the relationships between impacts and characteristics of Upward Bound grantees:


  • What strategies and approaches do grantees follow to promote academic success for higher-risk students? For example, grantees may adopt particular recruiting practices or mixes of services, or may offer financial incentives to students to increase retention in the Upward Bound program and reward academic achievement.

  • Does the impact of Upward Bound vary according to grantee practices and characteristics, including grantees’ previous experience with the Upward Bound Participant Expansion Initiative?

  • Is there a relationship between program impacts and the ratio of higher-risk students to total students that grantees serve?

  • Is there a relationship between program impacts and control group students’ receipt of services similar to those offered by Upward Bound, such as tutoring, mentoring, after-school and summer enrichment programs, and college application and financial aid assistance?


To assess the statistical relationship between program characteristics and program impacts, we propose a three-phase approach. First, we will use the grantee surveys to specify and systematically measure key variations in program approaches and design. Second, we will use statistical techniques to assess relationships between key program practices and characteristics and estimated impacts for student participants, thereby addressing the research questions about variations in program impacts. Finally, we will use the case studies to examine how practices and characteristics associated with higher program impacts may actually work in practice, and we will develop hypotheses that may inform program improvement. We discuss below our approach to describing and assessing variations in Upward Bound programs and their impacts.


Phase One: Approach to describing how programs vary in strategies and approaches to promote academic success, especially among higher-risk students

In this section, we describe the proposed approach to collecting information on grantees to understand how Upward Bound is implemented. With over 700 regular Upward Bound programs across the country, the instruction these programs offer can range from math, science, composition, literature, and foreign language to test-taking and study skills. Other services that Upward Bound programs may provide include, for example:

  • Instruction in reading, writing, study skills, and other subjects necessary for success in education beyond high school;

  • Academic, financial, or personal counseling;

  • Exposure to academic programs and cultural events;

  • Tutorial services;

  • Mentoring programs;

  • Information on postsecondary education opportunities;

  • Assistance in completing college entrance and financial aid applications;

  • Assistance in preparing for college entrance exams; and

  • Work study positions to expose participants to careers requiring a postsecondary degree.8


In addition to differences in services provided, Upward Bound programs may also vary along the following dimensions:


  • Service intensity—length of time students are provided the specific services described above.

  • Management structure—variations in host institution, relationship to participating schools, other institutional partners.

  • Recruitment—variations in recruitment strategies in general, and for higher-risk students in particular.

  • Services and academic offerings—in addition to differences in available services, sites may vary in which activities or services are mandatory or optional. Sites may also provide special offerings for higher-risk students.

  • Retention strategies—variations in strategies to engage and retain students, particularly for higher-risk students.

  • Student composition—variations in student composition by race/ethnicity, gender, level of academic risk, socioeconomic status and parental education.


The first step is to develop and implement a grantee survey to describe the various strategies used by Upward Bound grantees to provide services to students. In analyzing grantee survey results, we will largely use univariate statistics to illustrate the distribution of relevant characteristics across the Upward Bound programs in the study sample.


Phase Two: Approach to assessing the relationship between program impacts and variation in Upward Bound programs’ practices and characteristics, student composition, and receipt of supplemental coursework and services

While the strength of any random assignment evaluation is in the experimental impact analysis, nonexperimental analyses are often useful in interpreting the experimental results to help inform program development. Research questions about program variation (research questions 3 through 5) are of a nonexperimental nature because students are not randomly assigned to specific variations in program characteristics (e.g., students are not randomized into programs with low, moderate, or high proportions of higher-risk students). For example, it could be that student composition is correlated with other unmeasured site characteristics and that the latter are the actual causal drivers. The Upward Bound program is based on a well-defined program model that is operationalized differently across projects. We plan to exploit this variation using nonexperimental techniques to uncover lessons for program implementation and future program development to improve college access for disadvantaged students. In this section, we describe the proposed methods to assess relationships between the presence of key program practices and characteristics and estimated impacts for student participants. This phase addresses three key research questions stated in the SOW: Does the impact of Upward Bound vary according to (1) grantee practices and characteristics; (2) the proportion of higher-risk students served; and (3) control group students’ receipt of services similar to those offered by Upward Bound?


Host Characteristics and Project Offerings. The effectiveness of particular Upward Bound projects may vary depending on the characteristics of the host institution, such as type of institution—two-year college or four-year college or university—the quality of the institution, and prior experience with serving higher-risk students supported through the Upward Bound Participant Expansion Initiative. The effectiveness of particular projects may also vary depending on the program requirements and offerings, such as the extent of tutoring and the academic courses offered or required during the six-week summer session.


Peer Composition. We propose to examine the relationship between the effectiveness of different Upward Bound projects and the proportion of participants classified as higher-risk students in the data reported by each grantee to ED. Some grantees have expressed concern about the possibly negative consequences, through peer effects, of serving “too many” higher-risk students as opposed to motivated students with higher grades and test scores. This analysis may shed light on the change in impacts we might expect with Upward Bound’s increased focus on higher-risk students.


Supplemental Coursework and Services. We propose to examine the types of services, such as tutoring and summer enrichment, received by the treatment group and the control group, as well as the relationship between service receipt and program impacts. In addition to having access to services through other college preparation programs such as Talent Search, students may also receive college preparation services within their own high schools or through district-sponsored summer programs.


Analytic Approach

Our basic analytic strategy involves first estimating student impacts across all projects, as described in the previous section. We will then explore the relationship between these estimates and (1) characteristics of host institutions and specific program offerings, (2) the composition of the students, and (3) the difference in service receipt and supplemental academic coursework between treatment and control students. To measure these three factors across programs, data will be gathered through the grantee survey and from other available documents. At the student level, we will have information about students’ high school, family and personal background, peer groups, and outcomes through the student surveys (both baseline and follow-up) and school transcripts.


Next, we will use hierarchical linear modeling (HLM) to assess the relationship between these factors and variation in program impacts. This relationship is estimated by simply including the factors as covariates at level 2 in Equation 2 as specified in the previous section on impacts. This strategy allows for the decomposition of observed relationships between variables into separate level-1 and level-2 components, and takes into account aggregation bias, misspecification of standard errors, and heterogeneity of regression (Raudenbush and Bryk, 2002). Recall that Equation 2 is specified as:


β_1j = γ_10 + u_1j   (2)


Significant variation among the treatment effects across the Upward Bound programs, as denoted by the term u_1j, would signify that there are differential treatment effects across programs. We can then test whether this variation is systematically related to any of the factors mentioned above by entering a program-level covariate into Equation 2. For example, to estimate whether there is a relationship between variation in program impacts and the proportion of higher-risk students served, we will expand Equation 2 as follows:


β_1j = γ_10 + γ_11(peercomp_j) + u_1j   (5)


In this estimation of the level-2 model, β_1j varies around a grand (overall) mean, γ_10. The coefficient γ_11 estimates the relationship between the covariate, peer composition (peercomp_j, the proportion of higher-risk students served by program j), and program impacts. If this coefficient, for example, is negative and statistically significant, we would conclude that, on average, programs serving a higher proportion of higher-risk participants have lower impacts than programs serving a lower proportion of higher-risk participants. This strategy can be used to examine the relationship between any program-level variable and variation in program impacts.
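
Substituting Equation 5 into the level-1 model yields a cross-level interaction between the treatment indicator and peer composition, so γ_11 can be estimated as the coefficient on that interaction. The sketch below illustrates this in Python with synthetic data and hypothetical names.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data: grantee-level peer composition plus student outcomes.
    rng = np.random.default_rng(3)
    g = np.repeat(np.arange(90), 40)
    peercomp = rng.uniform(0.3, 0.7, 90)[g]  # grantee share of higher-risk students
    treat = rng.integers(0, 2, g.size)
    score = 0.3 * treat - 0.2 * treat * peercomp + rng.normal(0, 1, g.size)
    df = pd.DataFrame({"score": score, "treat": treat, "peercomp": peercomp, "grantee": g})

    # Cross-level interaction model: treat:peercomp corresponds to gamma_11.
    m = smf.mixedlm("score ~ treat * peercomp", df, groups=df["grantee"], re_formula="~treat")
    print(m.fit().params["treat:peercomp"])  # estimate of gamma_11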


Phase Three: Approach to describing how promising program design and practices work

In Phase Three of this part of the study, we will conduct case studies to understand in a more nuanced way the strategies and approaches grantees follow to promote academic success, particularly for higher-risk students. Much of the descriptive information for the case studies will come from open-ended interviews with program management and staff, as well as interviews with counselors and other staff from partner high schools. In each case study site, we will also conduct focus groups with treatment group and control group students, collect relevant administrative data about program operations and results, conduct structured observations of important program activities, and review all relevant documentary material.


Our approach involves three elements: (1) purposively sampling Upward Bound programs/sites based on program characteristics associated with impacts or higher impacts; (2) conducting open-ended interviews structured around the results of the impact analyses conducted under Phase Two; and (3) conducting in-depth interviews with control group students. Through these elements, the case studies will develop comprehensive descriptions of how the programs operate and will develop hypotheses about why the identified characteristics or practices may be associated with larger impacts for students.


First, we propose to purposively select 20 programs that have some or all of the program characteristics identified in Phase Two as being associated with impacts or higher student impacts, and conduct case studies at those sites.


In choosing sites for the case studies, we will first want to include sites that cover the range of practices or other characteristics that are associated with higher impacts. Assuming that there will be more sites than identified practices, we will want to vary other factors in choosing sites, such as institutional or environmental context (e.g., community college or four-year college or university, urban vs. rural setting, private vs. public institution), program age, or other features that are not within the discretion of program designers or administrators to change. We will adopt this strategy not because it would somehow make the case study sites representative of any particular approach or group of approaches, but to try to include a variety of realizations of those approaches. Given some variation in program setting, we might be able to develop hypotheses about how and why the same approach may differ or work better in different program realizations.


Second, in developing hypotheses about why specific practices or groups of practices or structures might lead to better impacts, we will rely first on the theories of program stakeholders. In simplest terms, following the clearance of evaluation findings for public release, we will inform a group of stakeholders of the results of the impact analysis and ask them whether they agree with, or are surprised by, the findings regarding program characteristics associated with better impacts. In either case, we will ask informants why they believe as they do.


Third, it will be critical to collect detailed information about control group activities, to better understand program impacts. It should be remembered that all impacts are relative, as they are the observed difference between what happened in the program and what would have happened in the absence of the program, or the experiences of the control group. If what would have happened without the program is identical or similar to what happened within the program, no impacts would be expected to arise. Without a comprehensive and accurate accounting of control group services, it may be difficult to distinguish among Upward Bound programs that are good or poor performers on their own, and those that simply do or do not offer added value to control group services. Therefore it is critical to use the student follow-up survey and the case studies to understand the type, level, and relative quality of similar services provided to control group students.


It is also important to note that the case studies are designed primarily to describe how programs work and to develop hypotheses about how various program characteristics affect impacts; the case studies will not test those hypotheses. By design, case studies are intensive examinations of individual cases. Although cross-site analyses may be used to compare sites on various factors, the results cannot be generalized beyond the sites studied. An important purpose of the case studies is to provide some of the richer, qualitative texture of how programs work that cannot be captured in the grantee survey or in statistical analyses of program data.


Approach to Tabulated Results

For the study, we will generally present results in the following format:


Illustrative Table of Impact Findings

                                            Regression-Adjusted Value
Outcome                                     Treatment Group   Control Group   Estimated Impact (p-level)
Math courses taken
AP courses taken
Percentage dropping out
Percentage expecting to attend college
Percentage being tutored
Etc.

*   Significantly different from zero at the .10 level
**  Significantly different from zero at the .05 level
*** Significantly different from zero at the .01 level


In addition to showing regression-adjusted values of outcomes for the treatment and control groups, the impact findings will include tabulations of unadjusted means for each group, as well as the minimum detectable effect size for each outcome, both with and without regression adjustment. The provision of p-values for each significance test will permit analysts to account for multiple comparisons in hypothesis testing, per standards being developed by ED’s What Works Clearinghouse.
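
As a rough illustration of how such a table might be produced, the sketch below uses simulated data. The variable names, the baseline covariate, and the choice of Python's statsmodels with a Benjamini-Hochberg correction are our assumptions for illustration, not the study's specified procedure.

# A minimal sketch -- not the study's estimation code -- of producing
# regression-adjusted treatment and control means, impact estimates, and
# p-values for a family of outcomes, with a Benjamini-Hochberg correction
# as one way to account for multiple comparisons. All names and data
# below are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),            # random assignment indicator
    "baseline_gpa": rng.normal(2.5, 0.5, n),   # illustrative baseline covariate
})
df["math_courses"] = 2.0 + 0.30 * df["treat"] + 0.5 * df["baseline_gpa"] + rng.normal(0, 1, n)
df["ap_courses"] = 0.5 + 0.10 * df["treat"] + 0.2 * df["baseline_gpa"] + rng.normal(0, 0.5, n)

X = sm.add_constant(df[["treat", "baseline_gpa"]])
rows, pvals = [], []
for outcome in ["math_courses", "ap_courses"]:
    fit = sm.OLS(df[outcome], X).fit()
    # Regression-adjusted means: model predictions at the sample mean of
    # the covariate, with the treatment indicator set to 0 or 1.
    ctrl = fit.params["const"] + fit.params["baseline_gpa"] * df["baseline_gpa"].mean()
    rows.append((outcome, ctrl + fit.params["treat"], ctrl, fit.params["treat"]))
    pvals.append(fit.pvalues["treat"])

# Adjust the family of p-values for multiple comparisons.
_, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for (outcome, trt, ctrl, impact), p_raw, pa in zip(rows, pvals, p_adj):
    print(f"{outcome:13s} T={trt:5.2f}  C={ctrl:5.2f}  impact={impact:+5.2f}  "
          f"p={p_raw:.3f} (adjusted {pa:.3f})")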


Publication and Study Schedule

The schedule for key study activities and published reports is presented below.


Study Schedule

Key Activity or Report                                                             Timing
Final Study Design                                                                 10/2006 – 1/2007
Develop plans for sampling sites and for baseline data collection
   and random assignment                                                           10/2006 – 1/2007
Design random assignment in study sites                                            1/2007 – 5/2007
Conduct baseline data collection and random assignment                             3/2007 – 1/2008
Develop grantee survey, follow-up survey, and school record abstraction form       6/2008 – 9/2008
Conduct grantee survey                                                             10/2008 – 12/2008
Conduct follow-up survey of students and school record abstraction                 6/2009 – 12/2009
Final Impact Report                                                                9/2010
Case studies                                                                       10/2010 – 1/2011
Final Case Study Report                                                            9/2011

References

Bloom, H. S. (1984). Accounting for no-shows in experimental evaluation designs. Evaluation Review, 8(2), 225-246.

Bryant, A. L., & Zimmerman, M. A. (2002). Examining the effects of academic beliefs and behaviors on changes in substance use among urban adolescents. Journal of Educational Psychology, 94(3), 621-637.

Johnson, D. W., Johnson, R. T., & Stanne, M. B. (1989). Impact of goal and resource interdependence on problem-solving success. Journal of Social Psychology, 129(5), 1621-1629.

Johnson, D. W., Johnson, R. T., Tiffany, M., & Zaidman, B. (1983). Are low achievers disliked in a cooperative situation? A test of rival theories in a mixed ethnic situation. Contemporary Educational Psychology, 8(2), 1189-1200.

Kandel, D. B. (1996). The parental and peer contexts of adolescent deviance: An algebra of interpersonal influences. Journal of Drug Issues, 26(2), 289-315.

Moore, M., Fasciano, N., Jacobson, J., & Myers, D. (1997). A 1990s view of Upward Bound: Programs offered, students served, and operational issues. Background reports: Grantee survey report and target school report. U.S. Department of Education, Planning and Evaluation Services, May.

Myers, D., et al. (2004). The impact of regular Upward Bound: Results from the third follow-up data collection. U.S. Department of Education, Policy and Program Studies Service.

Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Newbury Park, CA: Sage.

Raudenbush, S. W., Bryk, A., Cheong, Y. F., & Congdon, R. (2000). HLM5: Hierarchical linear and nonlinear modeling. Lincolnwood, IL: Scientific Software International.

A.17. Approval To Not Display Expiration Date

No exemption is requested.


A.18. Exceptions to Item 19 of OMB Form 83-1

The submission describing data collection requires no exemptions to the Certificate for Paperwork Reduction Act (5 CFR 1320.9).


1 Of these students, about 56,000 were high school students served by 761 regular Upward Bound (UB) programs, and about 5,000 were high school graduates or GED recipients served by 39 Veterans Upward Bound (VUB) programs. While UB and VUB programs differ, they are funded through the same grant allocation. For more information see http://www.ed.gov/programs/trioupbound/index.html.

2 Most summer programs are residential programs on a college campus. Services provided during the school year are offered either on a college campus or in the target high school, depending on the service and the grantee. See Moore et al. (1997).

5 A previous evaluation found that 20 percent of students in the treatment group did not show up to participate in the program.

6 It may also be true that not all students assigned to the control group will stay out of the Upward Bound program; some number of “crossovers” into the intervention arise in almost every experiment in which participation depends on individual behavior. A number of techniques have been developed for dealing with crossovers at the analysis stage, each intended to neutralize any impact that program participation may have had on crossovers’ outcomes. We expect there to be few crossovers and thus not to have to adjust for them. If this expectation proves incorrect, we will select the most appropriate techniques based on the circumstances that led to the crossovers.
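
One illustration of such a technique, sketched here in our own notation rather than as the study's chosen method, is the no-show adjustment of Bloom (1984), extended in the standard way to also net out crossovers:

\[
\hat{\Delta}_{\text{participants}} \;=\; \frac{\bar{Y}_{T} - \bar{Y}_{C}}{p_{T} - p_{C}}
\]

where \(\bar{Y}_{T} - \bar{Y}_{C}\) is the simple treatment-control difference in mean outcomes, \(p_{T}\) is the treatment group participation rate (about 0.80, given the 20 percent no-show rate noted in footnote 5), and \(p_{C}\) is the control group crossover rate (zero in Bloom's original no-show-only case).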

7 If the outcome variable is continuous, we will use HLM. For categorical outcomes, hierarchical generalized linear modeling (HGLM) or multi-level logit models will be used. Other applications include a hierarchical multivariate linear model (HMLM), which creates a latent, student-specific but time-varying outcome measure at level-1; level-2 then becomes the student level, and level-3 the program level.
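
As a rough illustration of the two-level structure only: the sketch below fits students at level-1 nested within programs at level-2 using Python's statsmodels as a stand-in for the HLM software named in the references. The variable names and simulated data are our assumptions, not the study's.

# A minimal sketch -- not the study's HLM code -- of a two-level
# hierarchical linear model with a random intercept for each program.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
program = np.repeat(np.arange(40), 25)            # 40 programs, 25 students each
prog_intercept = rng.normal(0, 0.4, 40)[program]  # program-level random intercepts
treat = rng.integers(0, 2, program.size)          # within-program random assignment
outcome = 2.0 + 0.25 * treat + prog_intercept + rng.normal(0, 1.0, program.size)
df = pd.DataFrame({"program": program, "treat": treat, "outcome": outcome})

# The coefficient on `treat` is the pooled impact estimate for a
# continuous outcome; a multi-level logit (HGLM) would replace this
# linear specification for a binary outcome.
result = smf.mixedlm("outcome ~ treat", data=df, groups=df["program"]).fit()
print(result.summary())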

8 See http://www.ed.gov/programs/trioupbound/index.html

