Part B: Supporting Statement for Paperwork Reduction Act Submission




Study of Enhanced College Advising in Upward Bound






Prepared for:

Marsha Silverberg

U.S. Department of Education

555 New Jersey Ave, NW

Room 502I

Washington, DC 20208-5500




Submitted by:

Abt Associates Inc.

55 Wheeler Street

Cambridge, MA 02138



Part B: Supporting Statement for Paperwork Reduction Act Submission

Table of Contents



B. Collection of Information Employing Statistical Methods

This supporting statement updates the burden request under OMB control number 1850-0912, approved on 8/8/2014, by including estimates of response rates for the Phase II data collection (Section B.1.3), the plan for pilot testing the Phase II data collection (Section B.4), and the Phase II data collection instruments and related communications (Appendices G through J). The previous submission included the statistical details for the full study.

Introduction

The U.S. Department of Education (ED) will fulfill a congressional mandate to assess the effectiveness of a promising practice in its long-standing Upward Bound (UB) program by conducting a research demonstration to enhance college advising in UB.

The study is being sponsored by ED’s Institute of Education Sciences, in collaboration with the Office of Postsecondary Education, and implemented by Abt Associates Inc. and its partners, Decision Information Resources (DIR), Survey Research Management (SRM), and American Institutes for Research (AIR).

Overview of the Upward Bound Program

The UB program is designed to improve college access for students from disadvantaged backgrounds. Established under the Economic Opportunity Act of 1964 as part of the War on Poverty, the UB program is the oldest of the Federal TRIO programs. In fiscal year 2014, approximately $265 million was spent to fund 814 UB projects and serve over 61,000 high school participants.1

Each version of the Higher Education Act, including the most recent 2008 Higher Education Opportunity Act (HEOA) (20 USC 1070A-18), has prescribed major details of the UB program. Most UB project grantees are two- or four-year colleges, but other organizations, such as local education agencies, nonprofit organizations, other community organizations, and state education agencies, may also host UB projects. Eligible UB students must come from families with household income below 150 percent of the poverty line or in which neither parent holds a bachelor's degree, and two-thirds of any project's participants must satisfy both criteria. Individual UB projects must provide an array of services to participants, who typically enter the program early in high school.

UB projects are required to provide students with:

  • academic tutoring to prepare students to complete secondary or postsecondary courses;

  • guidance on course selection;

  • assistance in preparing for college entrance examinations and completing college admission applications;

  • information on all Federal student financial aid programs, and benefits and resources for locating public and private scholarships;

  • assistance completing financial aid applications;

  • education or counseling services to improve the financial literacy and economic literacy of students or their parents, including financial planning for postsecondary education; and

  • assistance for high school dropouts with secondary school reentry, entry into alternative education programs, and entry into general educational development (GED) programs or postsecondary education programs (20 USC 1070A-13b).

According to grantee-provided data, more than 80 percent of Upward Bound participants attend college within two years of graduating high school,2 with older data suggesting that three-quarters of those students (60 percent overall) enroll in a four-year college or university.3 However, like many low-income students, UB participants may miss opportunities to enroll in more selective colleges and universities that better match their academic capabilities. A previous study of UB found that only 11 percent of UB participants enroll in four-year institutions classified by the Barron's guide as "most competitive," "highly competitive," or "very competitive" (Seftor, Mamun, and Schirm, 2009). Further, more than a third of participants overall attend their host institution (45 percent for those participating in UB programs hosted at a two-year institution), but whether that reflects the best outcome for those students or a lost opportunity is uncertain. While all UB projects provide college advising and application help, there is variation in the emphasis and intensity of these activities and room to improve the "match" or "fit" between UB students' qualifications, financial circumstances, and interests and the colleges in which they enroll.

Overview of the Enhanced College Advising Demonstration

The demonstration will build on advising activities grantees already conduct, but take into account information and approaches emerging from recent rigorous research (Avery, 2013; Hoxby and Turner, 2013; Roderick, Nagaoka, Coca, and Moeller, 2009; Sherwin, 2012; Carrell and Sacerdote, 2013). The intervention is a professional development program for UB staff and a set of tools and resources for them to use in working with students in the spring of their junior year through early senior year. Both the staff training and student tools and resources will focus on the benefits of attending higher quality institutions, the concepts of net costs and completion rates in comparing colleges of interest, the availability of financial aid, other factors to consider in finding a “fit,” and the importance of applying to at least 4 colleges (Smith, 2011), with fee waivers to ensure that household income is not a constraint on the number of applications.

The professional development will include a review of emerging research and best practices; introduce key concepts in enhanced college advising; simulate enhanced advising activities with materials, tools, and resources; and support staff in developing a plan for implementing enhanced advising strategies. The professional development will be offered in a series of webinars. The intervention builds on lessons learned from recent Hoxby-Turner (2013) research by providing students with: (1) an illustrative example of colleges customized to their PSAT/SAT and/or ACT/PLAN score and their location, and (2) a list of scholarships and grants available in their state.4 In contrast to the Hoxby-Turner study, where packets were mailed directly to very high-achieving, low-income students, the UB demonstration will address students with a wider range of academic backgrounds and encourage staff support to help students understand and act on these materials.

Overview of the Evaluation

The 2008 Higher Education Opportunity Act (HEOA) (20 USC 1070A-18) requires ED to conduct a rigorous study of a promising practice that has the potential to improve key outcomes for UB participants. At the same time, the law prohibits any evaluation of a TRIO program that would require grantees to "recruit additional students beyond those the program or project would normally recruit" or that would result "in the denial of services for an eligible student under the program or project." The proposed research demonstration fulfills HEOA's mandate to examine a promising practice and is consistent with the prohibition against denying students UB services as part of the evaluation. Under the demonstration design, both the treatment and control group projects would continue providing regular UB services. In addition, ED has committed to providing the professional development program to both groups, with the control group projects receiving training after the experimental period is over.

The professional development program will be evaluated using a delayed-treatment randomized controlled trial (RCT) design. This design will ensure that all UB projects that volunteer have access to the college advising intervention at some point. Approximately 200 Upward Bound projects (awarded grants in 2012) will be recruited to volunteer for the demonstration. These projects will be randomly assigned so that those assigned to Wave 1 (treatment) will receive the professional development program beginning in spring 2015.5 Wave 2 (control) projects will receive the professional development program beginning in fall 2016.

Students in both groups who were high school juniors in 2014-15 will be tracked over time to collect administrative and survey data on key outcomes, including college application behavior, college acceptance and matriculation, and receipt of financial aid. ED expects to execute an option to the current contract to also enable collection of longer-term data on college persistence, which is the most important measure of match or fit.

The study will use the data to assess not only whether the intervention is effective, but how well it was implemented and whether its effectiveness depends on key components of the UB program or features of the college advising intervention as it was designed and implemented. In particular, the evaluation is designed to answer three main research questions:

  1. To what extent do the professional development package and tools have an effect— above and beyond the services Upward Bound grantees already provide—on important student outcomes?

  2. How fully was the intervention implemented (e.g., in terms of staff participation in training and staff implementation of the intervention model)? And to what extent did the intervention produce a difference in the Upward Bound college advising provided to treatment and control group students?

  3. Is there variation in the impacts of the enhanced college advising intervention on student outcomes and to what extent is the variation associated with other project features or characteristics of participating students? For example, do impacts vary between projects hosted by two-year institutions and projects hosted by four-year institutions? Are differences in the implementation of the enhanced college advising associated with differences in impacts?

To answer these questions we will conduct both impact and descriptive analyses. The first report, which will address each research question, will be available in 2017, and the second report will be published in 2018. ED expects to issue a later report on persistence impacts in 2020. To minimize costs, the evaluation will rely to the extent possible on easily available administrative data for many of the outcome measures.

Exhibit B-1 presents the research questions along with the data sources for each question, the analytic approach and outcomes of interest.

Exhibit B-1. Evaluation Questions, Data Sources, Analytic Approach, and Outcomes of Interest

Research Question 1: To what extent do the professional development package and tools have an effect, above and beyond the services Upward Bound grantees already provide, on student outcomes?

Data Sources: Student survey data; National Student Clearinghouse data; Federal Student Aid data; NCES IPEDS data; College Board and ACT data

Analytic Approach: Impact analysis; sample of 4,000 students in 200 UB projects

Outcomes of Interest: Number and type of college applications submitted; selectivity of colleges applied to; knowledge of college net costs; knowledge of financial aid options; completion of the FAFSA; type of college enrolled in; selectivity of college enrolled in; persistence in college

Research Question 2: How fully was the intervention implemented (e.g., in terms of staff participation in training and staff implementation of the intervention model)? And to what extent did the intervention produce a difference in the Upward Bound college advising provided to treatment and control group students?

Data Sources: UB Project Director survey data; student survey data

Analytic Approach: Descriptive/impact analysis; sample of 4,000 students in 200 UB projects and 200 UB project directors

Outcomes of Interest: Enhanced college advising experiences; receipt and use of student advising materials; staff knowledge and awareness; staff behaviors and practice

Research Question 3: Is there variation in the impacts of the enhanced college advising intervention on student outcomes and to what extent is the variation associated with other project features or characteristics of participating students? For example, do impacts vary between projects hosted by two-year institutions and projects hosted by four-year institutions? Are differences in the implementation of the enhanced college advising associated with differences in impacts?

Data Sources: Student survey data; Project Director survey data; National Student Clearinghouse data; Federal Student Aid data; NCES IPEDS data; College Board and ACT data

Analytic Approach: Impact analysis/moderator analysis; sample of 4,000 students in 200 UB projects and 200 UB project directors

Outcomes of Interest: Number and type of college applications submitted; selectivity of colleges applied to; knowledge of college net costs; knowledge of financial aid options; completion of the FAFSA; type of college enrolled in; selectivity of college enrolled in



As explained in Section A.2, there are two phases to this study. Phase I – Random Assignment and Collection of Student Rosters and Student Baseline Surveys (already approved) involves identifying UB student participants via the collection of student rosters and collecting baseline survey data from these students. This ICR requests clearance for Phase II – Collection of Follow-Up Student Survey, Project Director Survey, and Administrative Data.

B.1 Respondent Universe and Sampling Methods

B.1.1 Respondent Universe

The respondent universe includes those UB projects that volunteer to participate in the demonstration; 200 UB projects will be recruited. An open invitation will be issued to the UB community seeking projects to volunteer to participate in the demonstration; the first 200 UB projects that respond will be included in the demonstration. During Phase I, participating UB projects will be randomly assigned to the treatment or control group. UB projects in the treatment group will implement the intervention during Wave 1 with students who are high school juniors in academic year 2014-15, and projects assigned to the control group will implement the intervention during Wave 2 with high school juniors participating in UB in 2015-2016.

Exhibit B-2 indicates the estimated population sizes for the evaluation.

Exhibit B-2. Population Size Estimates

Estimated number of participating volunteer UB projects: 200 UB projects

Estimated number of UB project directors: 1 per project

Estimated average number of UB high school juniors in 2014-2015: 20 students per project*

Total participants: 200 UB project directors; 4,000 UB high school juniors in 2014-2015

*Data from historic UB Annual Performance Reports (APR) suggest that, on average, UB projects serve a cohort of 20 high school juniors in a given school year.

B.1.2 Sampling Methods

The Study of Enhanced College Advising in Upward Bound will include all 200 project directors and all 4,000 students from participating UB projects. Therefore, there is no sampling proposed for this study.

B.1.3 Expected Response Rates

For Phase I data collection, we expect to obtain a 100% response on the student rosters from projects. Projects maintain a list of participating students, by grade, for the purposes of APR reporting. For Phase II, we expect an 80% response rate for the student baseline and follow-up surveys and for the project director survey, in part because we will have the support of the UB project directors and they will be able to encourage survey completion (see section B.3 for more detail).

B.2 Statistical Methods for Sample Selection and Degree of Accuracy Needed

B.2.1 Sample Selection

Given the way in which UB projects offer services to participants in groups, and the potential for spillovers across students participating in the same project, random assignment of students within projects to treatment and control conditions was considered to be potentially infeasible—and if feasible, not advisable. Therefore, random assignment will occur at the UB project level. The number of projects is bounded by the resources available and the numbers necessary to conduct robust tests of impact.

During Phase I, in March 2015, the study team will randomly assign one half of the projects that have volunteered for the demonstration to the treatment group (Wave 1 implementers) and one half to the control group (Wave 2 delayed implementers). To the extent possible, the team will stratify the random assignment by project type (e.g. 2-year or 4-year host college) and region to ensure reasonable representation across these important policy factors and to mitigate the possibility of having imbalance between the treatment and controls on these factors simply by chance.
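For illustration, the sketch below shows one way such a blocked (stratified) random assignment could be carried out. The variable names, block definitions, and the even split within each block are assumptions made for this example, not the study's exact procedure.

```python
# Illustrative sketch of blocked (stratified) random assignment of UB projects.
# Project IDs, host types, regions, and the even split within blocks are
# hypothetical; the study's actual assignment procedure may differ in detail.
import random

def assign_projects(projects, seed=20150301):
    """projects: list of dicts with 'id', 'host_type', and 'region' keys."""
    rng = random.Random(seed)
    assignments = {}
    # Form blocks (strata) from host type (2-year vs. 4-year) and region.
    blocks = {}
    for p in projects:
        blocks.setdefault((p["host_type"], p["region"]), []).append(p["id"])
    # Within each block, randomly order projects and split them so that roughly
    # half go to Wave 1 (treatment) and half to Wave 2 (control).
    for ids in blocks.values():
        rng.shuffle(ids)
        half = len(ids) // 2
        for i, pid in enumerate(ids):
            assignments[pid] = "treatment" if i < half else "control"
    return assignments

# Example usage with hypothetical projects:
example = [
    {"id": "UB-001", "host_type": "4-year", "region": "Northeast"},
    {"id": "UB-002", "host_type": "4-year", "region": "Northeast"},
    {"id": "UB-003", "host_type": "2-year", "region": "South"},
    {"id": "UB-004", "host_type": "2-year", "region": "South"},
]
print(assign_projects(example))
```

Blocking in this way keeps the treatment and control groups balanced on host type and region by construction, rather than relying on chance alone.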

B.2.2 Estimation Procedures

This section presents the study’s estimation approach for addressing the research questions. All estimation will occur during Phase II of the study. Impact and descriptive analyses will be conducted to answer the study research questions, as described here.

  1. To what extent do the professional development package and tools have an effect— above and beyond the services Upward Bound grantees already provide—on student outcomes?

To examine the impacts of the intervention (Question 1), the study will exploit the experimental design to estimate Intent-to-Treat (ITT) effects of the intervention relative to the control condition. These impact effects will be estimated for each of the outcomes (i.e., the number and types of colleges students apply to; the selectivity of the college in which students enroll; financial aid obtained; college costs borne by students and their families; and overall enrollment in postsecondary education). We will test whether each of the impacts is statistically significant to determine if there is convincing scientific evidence that the intervention caused improvements in student outcomes.

In conducting the analysis, the study team will estimate two-level regression models, with students (level-1) nested within Upward Bound projects (level-2), to account for clustering.6 The student level will control for student demographic characteristics (e.g., race/ethnicity, English language learner status), aspects of students' educational plans collected from the baseline student survey (e.g., whether the student's first planned postsecondary degree is a two- or four-year degree), and whether or not the student is the first in the family to attend college. The project level will include the treatment indicator, to distinguish between treatment and control projects; indicators for any stratifying variables used in random assignment; and baseline characteristics of the high schools attended by students in the project, including, for example, the percentage of seniors applying for federal financial aid, average school achievement, and the percentage of students eligible for free or reduced-price lunch.

Hierarchical Linear Model

The following hierarchical linear model will be used to estimate program impacts on continuous outcome variables.

The Level-1 (student level) model is:

$$Y_{ij} = \beta_{0j} + \beta_{1}FirstGen_{ij} + \sum_{e=1}^{E}\beta_{(1+e)}EdPlan_{eij} + \sum_{k=1}^{K}\beta_{(1+E+k)}Dem_{kij} + \varepsilon_{ij}$$

for i = (1, 2, …, n) students per project and j = (1, 2, …, P) UB projects.

Where $Y_{ij}$ is the value of the outcome (e.g., number of college applications submitted) for the ith student in the jth UB project; $FirstGen_{ij}$ is 1 if the ith student in the jth UB project is the first in the family to attend college and 0 otherwise, centered at the grand mean; $EdPlan_{eij}$ are E covariates representing educational plans for the ith student in the jth UB project (e.g., whether the student's first planned postsecondary degree is a two- or four-year degree), each centered at the grand mean; $Dem_{kij}$ are K additional covariates representing demographic characteristics of the ith student in the jth UB project (e.g., race/ethnicity, English language learner status), each centered at the grand mean; $\beta_{0j}$ is the covariate-adjusted mean value of the outcome for control students in the jth UB project; $\beta_{1}$ through $\beta_{(1+E+K)}$ are regression coefficients indicating the effects of each student-level covariate on the outcome variable $Y_{ij}$; and $\varepsilon_{ij}$ is the random effect representing the difference between student ij's score and the predicted mean score for project j.

The Level-2 (UB project-level) model is:

$$\beta_{0j} = \gamma_{00} + \gamma_{01}Treatment_{j} + \sum_{k=1}^{K}\gamma_{(01+k)}X_{kj} + u_{0j}$$

Where $\gamma_{00}$ is the covariate-adjusted mean value of the outcome measure across control UB projects; $\gamma_{01}$ is the treatment effect, i.e., the difference between the covariate-adjusted means of the treatment and control projects; $Treatment_{j}$ is the treatment status dummy variable with a value of 1 if the jth project is assigned to the treatment group and 0 if assigned to the control group; $X_{kj}$ is a vector of k variables measuring the characteristics of the jth project in school year 2014-2015 prior to random assignment (e.g., the percentage of seniors applying for federal financial aid, the percentage of students eligible for free or reduced-price lunch), each centered at the grand mean; and $u_{0j}$ is the deviation of UB project j's mean from the grand mean, conditional on covariates.

The parameter $\gamma_{01}$ indicates the impact of the demonstration on the outcome. A two-tailed t-test will be conducted to test the null hypothesis of no treatment impact using an alpha level criterion of 0.05. A positive and statistically significant estimate of $\gamma_{01}$ will indicate that there is compelling scientific evidence (at the 5 percent level) that the demonstration had an impact on the targeted outcome. The parameter also indicates the magnitude of the impact: the enhanced college advising in UB projects is estimated to have, on average, a $\gamma_{01}$-point effect on the specified outcome.

A standardized effect size will be calculated by dividing the estimated impact ($\hat{\gamma}_{01}$) by the standard deviation of the outcome variable, Y, in the control group ($\sigma_{C}$). The standardized effect size is $ES = \hat{\gamma}_{01}/\sigma_{C}$. The control group standard deviation will be used, as recommended by Burghardt, Deke, Kisker, Puma, and Schochet (2009), rather than the pooled standard deviation, because the intervention might affect the standard deviation in the treatment group.
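To illustrate how a two-level impact model of this kind and the accompanying standardized effect size could be estimated, the following sketch fits a random-intercept mixed model for a continuous outcome. The variable names (outcome, treatment, first_gen, plans_4yr, pct_frpl, project_id) and the reduced covariate set are hypothetical placeholders, not the study's actual analysis code.

```python
# A minimal sketch of the two-level (random-intercept) impact model described
# above for a continuous outcome. Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_impact(df: pd.DataFrame):
    # Grand-mean center the student- and project-level covariates, as in the text.
    for col in ["first_gen", "plans_4yr", "pct_frpl"]:
        df[col] = df[col] - df[col].mean()
    # Students (level-1) nested in UB projects (level-2) via a random intercept;
    # 'treatment' is the project-level indicator whose coefficient is the impact.
    model = smf.mixedlm(
        "outcome ~ treatment + first_gen + plans_4yr + pct_frpl",
        data=df,
        groups=df["project_id"],
    )
    result = model.fit()
    impact = result.params["treatment"]        # estimated impact (gamma_01)
    p_value = result.pvalues["treatment"]      # two-tailed test at alpha = .05
    control_sd = df.loc[df["treatment"] == 0, "outcome"].std()
    effect_size = impact / control_sd          # standardized using control-group SD
    return impact, p_value, effect_size
```

The random intercept plays the role of the project-level residual $u_{0j}$, and dividing the estimated treatment coefficient by the control-group standard deviation mirrors the effect-size calculation described above.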

  2. How fully was the intervention implemented (e.g., in terms of staff participation in training and staff implementation of the intervention model)? And to what extent did the intervention produce a difference in the Upward Bound college advising provided to treatment and control group students?

Descriptive analyses will be used to provide information on implementation and the fidelity of implementation (Question 2). Information on the overall level of and variation in implementation fidelity will provide important contextual information for interpreting the impact findings. Our proposed implementation analysis will cover: (1) the professional development received by UB staff; (2) the nature of the college advising that is offered to participants; (3) the extent to which participants receive college advising from Upward Bound; (4) the alignment between UB's college advising and the college advising that projects in the treatment group were expected to provide; and (5) the difference between the college advising received by students in the treatment group and students in the control group.

To capture services offered, we will use program director surveys to obtain information on treatment projects' rollout of the intervention compared to control group members' reports of college advising activities provided. To collect data on the services students receive, especially those related to college and financial aid planning and applications, we plan to rely on the student follow-up survey. However, we recognize the limitations of these data; student self-reports may suffer from recall error and potential response biases (e.g., successful students tend to over-attribute their success to mentors and tutors and tend to rate the quality of those services more highly than students who are not as successful). However, we do not believe that the issues of self-reporting will be substantially different for students in the treatment versus control group projects, and so we will treat these as unbiased measures unless there is reason to believe otherwise.

To characterize the difference between the college advising received by students in the treatment group and students in the control group (i.e., the treatment-control contrast), we will conduct an impact analysis using the same methods described earlier in this section. The magnitude of the treatment's impact on college advising services will be used to characterize the treatment-control contrast.

  3. Is there variation in the impacts of the enhanced college advising intervention on student outcomes and to what extent is the variation associated with other project features or characteristics of participating students? For example, do impacts vary between projects hosted by two-year institutions and projects hosted by four-year institutions? Are differences in the implementation of the enhanced college advising associated with differences in impacts?

An important goal of the study is to identify implementation features or other factors that may influence the impacts of the intervention. To address Question 3 (i.e., variation in impacts), we propose to augment our analysis model with interaction terms and test whether there are statistically significant relationships between the impacts of the intervention and the way in which it was implemented and other site-level characteristics. For example, to test for variation between projects hosted by two-year institutions and projects hosted by four-year institutions in the program impact, the study team will include an interaction between the treatment indicator and an indicator for whether the project is hosted by a four year institution in Equation 2, as follows:

$$\beta_{0j} = \gamma_{00} + \gamma_{01}Treatment_{j} + \gamma_{02}FourYear_{j} + \gamma_{03}(FourYear_{j} \times Treatment_{j}) + \sum_{k=1}^{K}\gamma_{(03+k)}X_{kj} + u_{0j}$$

Where $\gamma_{00}$ is the covariate-adjusted mean value of the outcome measure (or log-odds of the outcome occurring) among control projects hosted by two-year institutions; $\gamma_{01}$ is the mean difference in the covariate-adjusted outcome between treatment and control projects (i.e., the treatment impact); $Treatment_{j}$ is the treatment status dummy variable with a value of 1 if the jth project is assigned to the treatment group and 0 if assigned to the control group; $\gamma_{02}$ is the difference between projects hosted by four-year institutions and projects hosted by two-year institutions in the covariate-adjusted mean value (or log-odds) of the outcome among control projects; $FourYear_{j}$ is a host type indicator variable with a value of 1 if the jth project is hosted by a four-year institution and 0 if hosted by a two-year institution; $\gamma_{03}$ is the difference in the treatment impact between projects hosted by four-year institutions and projects hosted by two-year institutions; $FourYear_{j} \times Treatment_{j}$ is the host type by treatment condition interaction term; and $X_{kj}$ is a vector of k variables measuring the characteristics of the jth project in school year 2014-2015 prior to random assignment (e.g., the percentage of seniors applying for federal financial aid, the percentage of students eligible for free or reduced-price lunch), each centered at the grand mean.

Using a .05 level criterion, the study team will conduct a test of the null hypothesis that the parameter for the interaction term, $\gamma_{03}$, is zero, i.e., that the program impact does not differ by type of host institution. The null hypothesis is expressed as follows:

$$H_{0}: \gamma_{03} = 0$$

The study team will interpret a statistically significant difference as evidence that the program impact varies by type of host institution. If the study team finds a statistically significant difference in program impacts, the study team will then test whether the program impact within each type of host institution or subgroup is statistically significant. Standardized effect sizes will be calculated as described above. We recognize that subgroup analyses conducted for different types of Upward Bound projects will be underpowered, since the study was designed to be adequately powered to detect impacts for the full sample of 200 projects.
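A minimal sketch of this moderator analysis, again with hypothetical variable names, is shown below; it adds the treatment-by-host-type interaction to the impact model and tests whether the interaction coefficient ($\gamma_{03}$ in the notation above) differs from zero.

```python
# Sketch of the moderator analysis for research question 3: the impact model is
# extended with a treatment-by-host-type interaction. Column names (outcome,
# treatment, four_year_host, pct_frpl, project_id) are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def test_host_type_moderation(df: pd.DataFrame, alpha: float = 0.05):
    model = smf.mixedlm(
        # 'treatment:four_year_host' is the interaction term (gamma_03 in the text).
        "outcome ~ treatment + four_year_host + treatment:four_year_host + pct_frpl",
        data=df,
        groups=df["project_id"],
    )
    result = model.fit()
    interaction = result.params["treatment:four_year_host"]
    interaction_p = result.pvalues["treatment:four_year_host"]
    # If the interaction is significant, impacts would then be estimated within
    # each host type, recognizing that those subgroup tests are underpowered.
    return interaction, interaction_p, interaction_p < alpha
```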

B.2.3 Degree of Accuracy Needed

This section presents the power analyses conducted to estimate the minimum detectable effect sizes (MDES) for the treatment/control differences on binary and continuous student outcomes.

Binary Outcomes

The MDES estimates for binary outcomes, such as whether participants apply to college, enroll in a four-year college, or submit a FAFSA, are based on the following assumptions:

  1. the demonstration includes 200 projects

  2. 100 projects are assigned to each group

  3. the full sample includes 20 students per project

  4. the student survey sample includes the full sample of 4,000 students

  5. the intra-class correlation equals 0.05 or 0.10

  6. the project-level R-square equals 0.32

  7. for binary outcomes the outcome variable equals 1 for 80 percent of control group members

The assumed ICCs of .05 and .10 bound the estimated ICCs for the outcome of college attendance, which were estimated using historic UB APR data. The project-level R-square of .32 was also estimated from historic UB APR data, as the share of variation in the outcome of college attendance that could be explained by variation across UB projects.

In addition, standard statistical assumptions were used (i.e., two-tailed testing at the 5 percent level and 80 percent power).

The formulas in Schochet (2011) were used for binary outcomes to calculate MDES. These power calculations for binary outcomes provide conservative MDES estimates. The Schochet (2011) paper derives MDE equations and variance estimates that reflect the process of transforming logistic regression estimates into percentage point impact estimates. Because the percentage point impact estimates are the expected difference in probability between the treatment and the control groups, these impact estimates reflect the joint distribution of the outcome and covariates in absence of treatment, in addition to the estimated log-odds treatment effect produced by the logistic regression. To make this relationship tractable, Schochet (2011) assumes a single binary covariate.

Schochet refers to minimum detectable effect sizes for binary outcomes as minimum detectable impacts (MDI) in percentage points:

$$\mathrm{MDI} = \left[\Phi^{-1}(1-\alpha/2) + \Phi^{-1}(\beta)\right]\sqrt{\mathrm{Var}(\widehat{\mathrm{Impact}})}$$

where:

$\alpha$: significance level for a two-tailed test;

$\beta$: statistical power;

$\Phi$: the cumulative density function of the normal distribution;

$\Phi^{-1}$: the inverse of the cumulative density function of the normal distribution;

$\mathrm{MDI}$: minimum detectable impact in percentage point terms;

$\mathrm{Var}(\widehat{\mathrm{Impact}})$: the variance of the estimated impact, which is a function of the MDI and the following terms;

$p_{C}$: the probability that the outcome occurs for control group members (e.g., the student enrolls in college);

$q$: the probability that the covariate is one (e.g., the probability that a student is the first in the family to attend college);

$R^{2}$: the proportion of the student-level variance explained by the covariate;

$J$: the total number of projects;

$\bar{n}$: the average number of students per project with a valid outcome measure after accounting for attrition and non-response; and

$\rho$: the intra-class correlation for the outcome.

Unlike the standard power analysis equation, the equation above cannot be solved for the MDI analytically, because the MDI enters the equation for the variance; iterative methods must be used. To find the MDI for a given sample size and power, the study team fixed the number of projects and iterated over the MDI until the desired power was obtained.
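The sketch below illustrates this iterative approach. Because Schochet's (2011) exact variance expression is not reproduced in this document, the sketch substitutes a simplified cluster-design variance for a binary outcome (a design effect of 1 + (n-1)*ICC with no covariate adjustment), so its results will be somewhat larger than those in Exhibit B-3; it is meant only to show the fixed-point iteration, not the study's precise calculation.

```python
# Illustrative iterative search for the MDI of a binary outcome in a cluster
# RCT. The variance formula below is a simplified stand-in (no covariate
# adjustment), not Schochet's (2011) exact expression, so the printed value
# will exceed the 4 percentage points reported in Exhibit B-3 for ICC = .05.
from scipy.stats import norm

def variance_of_impact(mdi, p_control, n_projects, n_students, icc):
    """Variance of the estimated treatment-control difference in proportions."""
    p_treatment = p_control + mdi
    deff = 1 + (n_students - 1) * icc          # clustering design effect
    per_arm_n = (n_projects / 2) * n_students  # students per experimental arm
    return deff * (p_treatment * (1 - p_treatment)
                   + p_control * (1 - p_control)) / per_arm_n

def minimum_detectable_impact(p_control=0.80, n_projects=200, n_students=20,
                              icc=0.05, alpha=0.05, power=0.80, tol=1e-6):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    mdi = 0.01  # starting guess, in proportion units
    for _ in range(100):  # fixed-point iteration: MDI = z * SE(MDI)
        new_mdi = z * variance_of_impact(mdi, p_control, n_projects,
                                         n_students, icc) ** 0.5
        if abs(new_mdi - mdi) < tol:
            break
        mdi = new_mdi
    return mdi

print(round(100 * minimum_detectable_impact(icc=0.05), 1), "percentage points")
```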

Row 1 in Exhibit B-3 shows that the MDES will be 4 percentage points for binary outcomes (e.g., applied to college; enrolled in a 4-year college). If ICCs prove to be larger than our assumed ICC of .05, the MDES may be larger. For example, if the ICC is .10, which is consistent with some of the estimates reported in Schochet (2011), the MDE would be approximately 5 percentage points for binary outcomes (see row 2).

Exhibit B-3. MDES Estimates for Binary Outcomes

Intraclass Correlation: MDES for Binary Outcomes (200 projects, each with 20 students)*

.05: 4 percentage points

.10: 5 percentage points

*Estimates assuming an 80 percent survey response rate (16 students/project) yield similar MDES estimates.



Continuous Outcomes

Power analyses for continuous outcomes, such as the number of college applications submitted, the amount of financial aid received, and the selectivity of college chosen for enrollment, are based on the following formula (Schochet, 2008):

$$\mathrm{MDES} = \frac{M_{df}}{\sigma}\sqrt{\frac{\tau^{2}(1-R_{2}^{2})}{p(1-p)\,s} + \frac{\sigma_{\varepsilon}^{2}(1-R_{1}^{2})}{p(1-p)\,s\,n}}$$

Where:

$\mathrm{MDES}$: estimated minimum detectable effect size for the treatment impact;

$M_{df}$: a constant that is a function of the significance level (α), statistical power (β), and the number of degrees of freedom (df);

$\tau^{2}$: project-level variance in the outcome;

$\sigma_{\varepsilon}^{2}$: student-level variance in the outcome;

$R_{2}^{2}$: amount of project-level variance in the outcome explained by covariates;

$R_{1}^{2}$: amount of student-level variance in the outcome explained by covariates;

s: number of projects;

p: proportion of projects assigned to the treatment condition;

n: average number of students in each project; and

σ: standard deviation of the outcome measure for the control group.

Using the equation above we make the following assumptions:

  1. the demonstration includes 200 projects

  2. 100 projects are assigned to each group

  3. the full sample includes 20 students per project

  4. the student survey sample includes the full sample of 4,000 students

  5. intra-class correlations (ICC) of 0.05 and 0.10

  6. the project-level R-square equals 0.32

Exhibit B-4 shows that, by our estimates, the MDES will be .12 to .14 standard deviations for outcomes based on survey data. The MDES will be at the upper end of this range if the ICC is larger (e.g., ICC = .10) and at the lower end of the range if the ICC is smaller (e.g., ICC = .05).

Exhibit B-4. MDES Estimates for Continuous Outcomes from Survey Data

Intraclass Correlation: MDES for Continuous Outcomes from Survey Data (20 students/project)*

.05: 0.12 standard deviations

.10: 0.14 standard deviations

*Estimates assuming an 80 percent survey response rate (16 students/project) yield similar MDES estimates.
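For illustration, the short calculation below applies the MDES formula above under the stated assumptions. The student-level R-square is not specified in the text and is assumed here to be zero, and the multiplier M is approximated by its large-sample value, so the results are close to, but not exactly, the values in Exhibit B-4.

```python
# Worked sketch of the continuous-outcome MDES formula under the assumptions
# listed above (200 projects, 20 students per project, half of projects treated,
# project-level R-square of 0.32). The student-level R-square (assumed 0) and
# the large-sample multiplier M (~2.80) are simplifying assumptions.
from scipy.stats import norm

def mdes_continuous(icc, s=200, n=20, p=0.5, r2_project=0.32, r2_student=0.0,
                    alpha=0.05, power=0.80):
    m = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # ~2.80 for large df
    between = icc * (1 - r2_project) / (p * (1 - p) * s)
    within = (1 - icc) * (1 - r2_student) / (p * (1 - p) * s * n)
    return m * (between + within) ** 0.5

for icc in (0.05, 0.10):
    print(f"ICC = {icc:.2f}: MDES = {mdes_continuous(icc):.2f} standard deviations")
# Prints approximately 0.11 for ICC = .05 and 0.13 for ICC = .10, close to the
# 0.12 and 0.14 reported in Exhibit B-4 (which reflect the study team's exact
# degrees-of-freedom and covariate assumptions).
```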



B.2.4 Unusual Problems Requiring Specialized Sampling Procedures

Unusual problems that require specialized sampling procedures are not anticipated.

B.2.5 Use of Periodic Data Collection Cycles to Reduce Burden

The data collection plan reflects sensitivity to issues of efficiency and respondent burden. The data will be collected at the fewest intervals possible while maintaining data accuracy.

  • The student rosters will only be collected once from projects assigned to the treatment group and twice for projects assigned to the control group. The second collection of rosters for the control group is only to allow the intervention to be delivered to students in those projects and is not needed for the evaluation.

  • There will be one baseline and one follow-up student survey, in order to provide timely results, minimize burden, and maximize precision in the impact estimates. Administrative records, rather than surveys, will be used to measure longer-term outcomes.

  • The project director survey will be fielded once at a time appropriate for collecting data on the implementation of the intervention.

B.3 Methods to Maximize Response Rates and Deal with Nonresponse

In order to obtain responses from the study sample, the study team has developed strategies to facilitate communication with respondents during data collection activities and to maximize response rates. These strategies have proven successful in the study team’s extensive experience conducting large-scale evaluation studies (e.g., The Reading First Impact Study, Evaluation of the U.S. Department of Education’s Student Mentoring Program, Evaluation of the Massachusetts Expanded Learning Time Initiative; The Enhanced Reading Opportunities Study).

The study team will follow several procedures to maximize response rates and handle nonresponse.

Student Rosters

Prior to the start of the 2014-2015 school year, the study team will send an introductory email to each site coordinator explaining the evaluation and the requests that will be made of site coordinators (see Appendix A). During the September 2014 meeting of the association of UB grantees (the Council for Opportunity in Education, or COE), the study team will provide an evaluation session for participating project directors that will clearly explain the importance of the study and the roles and responsibilities of project directors, including the process for submitting student rosters in the fall. Next, the study team will reach out to each project director via email in the fall of the 2014-2015 school year to request student rosters (see Appendix A). Project directors will be asked to submit student rosters through a password-protected secure file transfer portal (SFTP). One week after sending the email request, a member of the study team will call each project director to answer any questions about the roster submission. Additionally, the study team will hold a webinar for project directors on how to use the SFTP and why it is the required method of roster submission. Submitting student rosters via a password-protected SFTP will allow project directors to submit rosters at a time most convenient for them and will assure coordinators that student confidentiality is protected as permitted by law.

Baseline Student Surveys

Cooperation with surveys has dropped in the last twenty years, and the study team expects that obtaining willing cooperation will be challenging for the student surveys. These respondents may not want to take the time necessary to complete the study surveys. Alternatively, they may be willing to participate, but only if the study team makes it easy for them to do so, or offers them something in return.

For the baseline student survey, the study team will employ three strategies to increase response rates:

  • Encouragement from UB project staff. As a condition of participating in the demonstration, UB project directors must agree to encourage their students to complete the baseline survey, and, if feasible, will allocate time and space for students to complete the web-based baseline survey.

  • Provide multiple modes for completion. Respondents will be able to complete the survey online, but if requested can complete the survey by phone.

  • Make use of reminders. The study team will make use of reminder phone calls as well as reminder emails and mailed reminders, as needed.

During Phase II of this study (Collection of Follow-Up Survey, Project Director Survey, and Administrative Data), the study team will face attrition from the study and missing data for some parts of the sample. Disadvantaged families, and therefore high-poverty schools, tend to have high rates of student mobility and dropping out. To track the sample of students over time, the study team will use two strategies:

  1. Project director involvement. The study team will ask project directors to track the students as they move, change schools, or drop out of the program, and to provide updated contact information for the study sample. Project directors are already required to track students who leave their UB projects, so this should not require any additional effort on their part. Before the follow-up survey, the study team will ask project directors for any updated contact information for students who move residences or change schools. This will help to ensure high response rates for the follow-up survey.

  2. Alternate contact information. The baseline student survey will request more detailed contact information than the study team would expect to obtain from student rosters. In particular, the study team will ask for cell phone numbers, email addresses, and/or the telephone number of a family member who could help the study team locate the sample member. This more detailed information will help to locate sample members for the second round of surveys.

To obtain willing cooperation from students and project directors for the evaluation's data collection efforts, the study team will employ similar strategies to those used for the baseline student survey collection.

First, during Phase II there will be an incentive payment to student survey participants.7 An incentive payment is necessary for the follow-up student survey because this survey will collect key outcome data essential for the impact analysis. Incentives are appropriately used in Federal statistical surveys with respondents whose failure to participate would jeopardize the quality of the survey data (Graham, 2006). Given the importance of obtaining a high response rate to this survey, the study team intends to offer a $15.00 gift card as a financial incentive for students who complete the follow-up survey.

Second, the study team will use multiple modes to obtain completed surveys. The study team will offer students the option of completing surveys by web, telephone, or paper.

Third, the study team will leverage contact with sample members to market the evaluation and prepare them for upcoming surveys. For example, the study team will:

  • Introduce the project director survey to project staff at the September 2015 COE meeting;

  • Use regular contact with project directors to obtain updated contact information for participating students and to remind project directors to complete the upcoming project director survey; and

  • Ask project directors to include information about upcoming surveys in materials that they mail to students and parents regarding their UB projects.

B.4 Test of Procedures and Methods to be Undertaken

The collection of student rosters was based on successful roster collection for other studies such as the Evaluation of Citizen Schools: School-Level Expanded Learning Time.

During the 60-day comment period for the Phase I request, the baseline student survey was pilot tested. The pilot tests ensured that the survey administration procedures are effective, the instructions are understandable, the content is comprehensible, and the length is reasonable. Nine 2013-2014 UB high school juniors (i.e., those in the cohort prior to the study cohort) from two UB projects participated in the pilot test. At the conclusion of the pilot survey administrations, student respondents were asked to indicate any difficulties encountered during the survey. Several questions were reworded to make them easier to understand, and definitions of challenging words and phrases, such as "Associate's Degree" and "occupational training," were added to the survey. Student respondents did not express concern about the structure. The survey was found to be longer than expected; several questions were simplified and ten questions were removed.

During the 60-day comment period for the Phase II request, the follow-up student survey and project director survey were pilot tested to ensure that the survey administration procedures are effective, the instructions are understandable, the content is comprehensible, and the length is reasonable. Eight UB high school seniors participated in the pilot test of the student follow-up survey. Respondents were asked to indicate any difficulties encountered during the survey. Several questions were reworded to make them easier to understand (e.g. changing “siblings” to “brother or sister”) and definitions of challenging words and phrases (e.g. “safety college”) were added to the survey. There were no concerns about the structure of the survey. The length was found to be longer than expected; four questions were removed.

Nine UB project directors participated in the pilot test of the project director survey to identify challenging questions or instructions. Several questions were reworded to make them easier to understand and definitions of challenging or unfamiliar words and phrases (e.g. “The Common Application”, “net cost”) were added to the survey. The length of the survey was found to be longer than expected; several questions were simplified and two questions were removed.

B.5 Individuals Consulted on Statistical Aspects of the Design

The following individuals were consulted on the statistical aspects of the study:

Mr. Cristofer Price, Principal Scientist, Abt Associates, 301-634-1852

Dr. Alina Martinez, Principal Scientist, Abt Associates, 617-349-2312

Dr. Rob Olsen, Principal Scientist, Abt Associates, 301-634-1716

Ms. Amanda Parsad, Senior Associate, Abt Associates, 301-634-1791

The following individuals will be responsible for the Data Collection and Analysis:

Dr. Alina Martinez, Principal Scientist, Abt Associates, 617-349-2312

Dr. Rob Olsen, Principal Scientist, Abt Associates, 301-634-1716

Ms. Amanda Parsad, Senior Associate, Abt Associates, 301-634-1791

Dr. Tamara Linkow, Associate, Abt Associates, 617-520-2978

Ms. Linda Kuhn, President, Survey Research Management, 303-247-0140

References

Avery, C. (2013). Evaluation of the College Possible Program: Results from a Randomized Controlled Trial. (Working Paper 19562). Retrieved from National Bureau of Economic Research: http://www.nber.org/papers/w19562

Burghardt, J., Deke, J., Kisker, E., Puma, M., & Schochet, P. (2009). Regional educational laboratory rigorous applied research studies: Frequently asked analysis questions. Institute of Education Sciences, U.S. Department of Education. Princeton, NJ: Mathematica Policy Research.

Carrell, S. E. & Sacerdote, B. (2013). Late Interventions Matter Too: The Case of College Coaching in New Hampshire. (Working Paper 19031). Retrieved from National Bureau of Economic Research: http://www.nber.org/papers/w19031.

Graham, J. D. (2006). Questions and Answers When Designing Surveys for Information Collections. Washington, D.C., Office of Management and Budget.

Hoxby, C. & Turner, S. (2013). Expanding College Opportunities for High-Achieving, Low-Income Students. Stanford Institute for Economic Policy Research. SIEPR Discussion Paper No. 12-014.

Roderick, M., Nagaoka, J., Coca, V., & Moeller, E. (2009). From High School to the Future: Making Hard Work Pay Off. Chicago: Consortium on Chicago School Research.

Schochet, P.Z. (2008). Statistical Power for Random Assignment Evaluations of Education Programs. Journal of Educational and Behavioral Statistics, 33(1), 62-87.

Schochet, P.Z. (2011). Statistical Power for Binary Outcomes for Clustered RCTs of Education Interventions. Presented at the Society for Research in Educational Effectiveness Spring Conference, March 3-6, 2011.

Seftor, N. S., Mamun, A. & Schirm, A. (2009). “The Impacts of Regular Upward Bound on Postsecondary Outcomes 7-9 Years After Scheduled High School Graduation.” Mathematica Policy Research, Inc. Available at: http://www.mathematica-mpr.com/publications/pdfs/upwardboundoutcomes.pdf

Sherwin, J. (2012). “Make Me a Match: Helping Low-Income and First-Generation Students Make Good College Choices.” MDRC Policy Brief, April 2012. Available at: http://www.mdrc.org/sites/default/files/policybrief_24.pdf

Smith, J. (2011). “Can Applying to More Colleges Increase Enrollment Rates?” The College Board. Available at: http://advocacy.collegeboard.org/sites/default/files/11b_4313_College%20App%20Research%20Brief_WEB_111026.pdf

U.S. Department of Education, Office of Postsecondary Education, Annual Performance Reports: Upward Bound Program Awards FY2012, available at http://www2.ed.gov/programs/trioupbound/ubgrantees2012.xls

1 U.S. Department of Education, Office of Postsecondary Education, Upward Bound Program Awards FY2014, available at http://www2.ed.gov/programs/trioupbound/ubgrantees2014.pdf

2 See http://www2.ed.gov/programs/trioupbound/ubgranteelevel-exp0910.pdf

4 The UB host institution will be one of five examples provided, so that students (and potentially their parents) can compare the net costs and performance of the host institution to other postsecondary institutions.

5 We will conduct blocked random assignment of projects, using region and one or more other blocking factors that are associated with the key student outcomes in this study.

6 Since each Upward Bound project serves several target schools, it is reasonable to ask whether it might be more appropriate to estimate a three-level model, with schools nested within projects and students nested within schools. However, Schochet (2008) shows that in this context, it is only necessary to capture variation across schools if the study selects a sample of target schools from each project. Because the study sample will include all target schools for each randomized project, adding a school level to the model is unnecessary.

7 We believe we will be able to achieve high rates of response from project directors because they are required to participate in data collection for a national evaluation under ED’s Education Department General Administrative Regulations (EDGAR) and in the demonstration’s data collection as a condition of participating in the grant.

