Study of An Information Strategy to Increase Enrollment in Postsecondary Education

OMB: 1850-0939

Text Ed: A Study of Text Messaging to Improve College Enrollment Rates Among Disadvantaged Adults
OMB Data Collection Package:
Supporting Statement B



June 2017







Prepared for:

Institute of Education Sciences

United States Department of Education
Project Officer: Melanie Ali

Contract Number: ED-IES-16-C-0016



Prepared By:

MDRC

16 East 34th Street, 19th Floor

New York, NY 10016

Alex Mayer, Project Director

[email protected]

(212) 340-4476



INTRODUCTION


This Office of Management and Budget (OMB) package is a new clearance request for data collection activities to support “Text Ed: A Study of Text Messaging to Improve College Enrollment Rates Among Disadvantaged Adults” (referred to as the Text Ed Study). The study provides a unique opportunity to rigorously evaluate whether a promising, low-cost messaging strategy can help Educational Opportunity Center (EOC) grantees meet the program’s goal of increasing college enrollment among disadvantaged adults. EOCs, which make up one of the eight U.S. Department of Education TRIO programs and are hosted primarily by postsecondary institutions, provide a variety of informational services related to the college-going process and financial aid options.


The Institute of Education Sciences (IES) at the U.S. Department of Education (ED) has contracted with MDRC and Dr. Lindsay Page at the University of Pittsburgh to conduct this evaluation. The study team will recruit up to 20 EOCs to participate in the study. Eligible clients within each EOC study site will be randomly assigned to receive either the services typically provided by the grantee (control) or the text messaging enhancement in addition to the grantee’s typical services (treatment). Random assignment will occur on a rolling basis from spring 2018 through spring 2020 and will include approximately 6,000 EOC clients. The study will rely primarily on administrative data. Among the administrative data obtained for the study will be EOC intake records, which include background information on individuals who contact the EOC for services, are study-eligible, and consent to participate.




PART B. SUPPORTING STATEMENT FOR PAPERWORK REDUCTION ACT SUBMISSION

B. Collection of Information Requiring Statistical Methods

    1. Respondent universe and sampling methods


Drawing from a national population of approximately 150 Educational Opportunity Centers (EOCs), the study team aims to recruit a sample of about 20 EOC grantees and approximately 6,000 clients. The goal is a sample of EOCs and clients that reflects the diversity of EOCs nationally across two key dimensions, geographic region and host institution type, so that the intervention effect estimates from the evaluation will be informative to EOCs in different parts of the country, in different types of host institutions, and with diverse client populations.


Eligibility criteria for EOCs to participate in the study include the following:

  • The EOC must not already have a substantively similar intervention in place (e.g., staff regularly and systematically communicate with clients about the college-going process via text messages), which would diminish the service contrast with the control condition.

  • The EOC must demonstrate the capacity to implement the intervention with fidelity (e.g., staff must have the ability and willingness to communicate with clients via text message).

  • The EOC must be willing to participate in an individual-level randomized experiment.


Table B.1 shows the number of EOC grantees on the two key dimensions identified above for the 2016 grant year. To recruit a similar population, the study’s goal will be to recruit approximately 8 grantees hosted by 2-year colleges, 9 hosted by 4-year colleges, and 3 from “other” hosts (typically community-based organizations), in proportion to their national representation. Similar targets will be set for the census regions.
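
For illustration, these recruitment targets follow from allocating the 20 slots proportionally to the Table B.1 host-type totals. A minimal sketch of the arithmetic:

```python
# Sketch: proportional allocation of 20 recruited EOCs across host
# institution types, using the 2016 totals from Table B.1 (57, 68,
# and 26 of 151 unique grantees).
counts = {"2-year": 57, "4-year": 68, "other": 26}
target = 20
total = sum(counts.values())
allocation = {k: round(target * v / total) for k, v in counts.items()}
print(allocation)  # {'2-year': 8, '4-year': 9, 'other': 3}
```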




Table B.1: Number of 2016 EOC Granteesᵇ by Census Regionᵃ and Host Institution Type

Host Institution Type    Northeast    Midwest    South    West    Grand Total
2-year                           4         13       31       9             57
4-year                           3         10       42      13             68
Other                            8          4       11       3             26
Grand Total                     15         27       84      25            151


ᵃ Collapsed census regions are used, in which West and Pacific are grouped into a single region and Puerto Rico is included in the South.

ᵇ The total of 151 EOCs reflects unique EOC grantees; grantees that received concurrent grants were counted only once.


Site recruitment will begin by sending the full population of EOCs notification letters (Appendix F) and information packets (Appendix G) describing the study, the opportunity to participate, the benefits to grantees, and what is expected of participating grantees. EOCs will be invited to contact the study team to learn more about the study, including through informational webinars that the study team will host. The study team will then provide individualized follow-up to interested EOCs. The team will prioritize selecting a group of study EOCs that reflects the broader EOC population on the two key dimensions discussed above.


Client eligibility criteria: Eligible EOC clients will be those clients who (1) are age 18 and over; (2) go through the EOC’s intake process during the study’s rolling random assignment period; (3) have their high school diploma or equivalent at the time of intake and are not currently enrolled in a postsecondary institution; (4) intend to enroll in postsecondary education by the fall of 2020; and (5) are willing and able to communicate in English with their EOC via text message. Eligible clients who agree to participate in the study will be randomly assigned to receive the EOC’s typical services (control group) or to receive the EOC’s typical services plus the text messaging enhancement (treatment group).
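
As an illustration of the assignment mechanics, the sketch below shows rolling, individual-level random assignment within an EOC. The 50/50 allocation matches the P = .50 design assumption described later in this statement; the function interface is hypothetical, not the study’s actual procedure.

```python
# Illustrative sketch only: rolling, individual-level random assignment
# within an EOC at the point of intake. Interface names are hypothetical.
import random

def assign_client(client_id: str, rng: random.Random) -> str:
    """Assign one eligible, consenting client to an experimental arm."""
    arm = "treatment" if rng.random() < 0.5 else "control"
    # In practice, each assignment would be logged with the EOC, client
    # ID, and timestamp so the rolling sample can be audited.
    return arm

rng = random.Random(20180201)  # seeded here only to make the demo reproducible
print(assign_client("client-0001", rng))
```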

  2. Procedures for the collection of information


Statistical methodology for stratification and sample selection


Major geographic region and type of host institution will be taken into account when recruiting EOCs through purposive sampling, as described above.


Estimation Procedure


Our main focus is to estimate the cross-EOC mean effect ($\bar{B}$) of the information strategy on FAFSA completion and college enrollment. To estimate $\bar{B}$ for this multisite trial, we will use the following two-level model discussed by Raudenbush (2015) with fixed EOC-specific intercepts ($\alpha_j$) for each site, random EOC-specific impact coefficients ($B_j$), and fixed coefficients ($\beta_k$) for individual-level baseline covariates ($X_{kij}$):


Level One: Individuals

$Y_{ij} = \alpha_j + B_j T_{ij} + \sum_{k} \beta_k X_{kij} + e_{ij}$, where $e_{ij} \sim N(0, \sigma^2)$   (1)


Level Two: EOCs

$B_j = \bar{B} + b_j$   (2)

where $b_j \sim N(0, \tau^2)$   (3)


Here, $Y_{ij}$ is the observed value of the outcome for individual $i$ from EOC $j$; $T_{ij}$ is 1 if individual $i$ from EOC $j$ was randomized to treatment and 0 otherwise; $X_{kij}$ is the value of baseline covariate $k$ for individual $i$ from EOC $j$; $\beta_k$ is the coefficient for covariate $k$; $\alpha_j$ are around 20 fixed EOC-specific intercepts; $B_j$ is the population mean treatment effect for EOC $j$; $\bar{B}$ is the cross-site mean treatment effect for the population of EOCs; $e_{ij}$ is a random error that varies independently and identically across individuals within EOCs and experimental conditions, with a mean of zero and a variance of $\sigma^2$; and $b_j$ is a random error that varies independently and identically across EOCs in the population, with a mean of 0 and a variance of $\tau^2$.


This model allows $\tau$ to be estimated. $\tau$ is the standard deviation of the EOC-level treatment effect distribution and quantifies how much the effectiveness of the messaging strategy varies across EOCs, information relevant to the generalizability of findings. If $\tau$ is large, then the effectiveness of the information strategy varies a lot across EOCs. This would imply that, while $\bar{B}$ provides a useful summary of the overall average effectiveness of the information strategy, it masks that the information strategy is more or less effective at particular EOCs. In contrast, if $\tau$ is small, the effectiveness of the information strategy is fairly consistent across EOCs.
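
To make the estimation concrete, the following is a minimal sketch of how a model like equations (1)-(3) could be fit, assuming a hypothetical client-level analysis file; it is not the study’s actual code, and the column names (y, treat, eoc_id, x1) are illustrative.

```python
# Sketch: fitting the two-level model with fixed EOC intercepts, a
# random EOC-specific treatment effect, and one baseline covariate.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("clients.csv")  # hypothetical analysis file

# "0 + C(eoc_id)" gives a fixed intercept per EOC (alpha_j);
# re_formula="0 + treat" adds a random slope b_j on treatment only,
# since the fixed EOC dummies already absorb site-level intercepts.
model = smf.mixedlm(
    "y ~ 0 + C(eoc_id) + treat + x1",
    data=df,
    groups="eoc_id",
    re_formula="0 + treat",
)
result = model.fit()

print(result.fe_params["treat"])  # cross-site mean effect, B-bar
print(result.cov_re)              # cross-site impact variance, tau^2
```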


Degree of accuracy needed for the purpose described in the justification


Statistical Precision of the Design: Raudenbush and Liu (2000) demonstrate how to determine the statistical power of impact estimates from multisite trials, where individuals are randomized within sites (e.g., EOC grantees). Dong and Maynard (2013) and Spybrook and Bloom (under review) describe how to assess the statistical precision of such estimates in terms of a minimum detectable effect (MDE). An MDE is the smallest true mean effect that a study design can detect at a specified level of statistical significance with a specified level of statistical power (typically 80 percent). Spybrook and Bloom (under review) derive the following formula, associated with the above estimation model, for calculating the MDE:1


$\text{MDE} = M_{J-1}\,\sqrt{\dfrac{\tau^2}{J} + \dfrac{(1 - ICC)(1 - R^2)\,\sigma^2}{J\,n\,P(1 - P)}}$   (4)


where $J$ is the number of EOCs; $\tau$ is the cross-site standard deviation of the EOC-level treatment effect distribution; $ICC$ is the intraclass correlation (the proportion of the total variance of the outcome that is between EOCs, which the fixed EOC intercepts fully explain); $R^2$ is the proportion of within-EOC outcome variation that is explained by baseline covariates; $\sigma^2$ is the total variance of the outcome for the control group (within and between EOCs); $n$ is the number of sample members per EOC; $P$ is the proportion of individuals randomly assigned to treatment; and $M_{J-1}$ is a multiplier that, for a two-tailed hypothesis test at the 0.05 significance level with 80 percent power, rapidly approaches 2.8 as $J$ increases.


We present MDEs in percentage points rather than standardized effect size units because the main outcomes of interest are easily interpretable in their natural units (e.g., enrollment rates). Several design parameters required for the MDE calculation have a limited empirical basis in higher education (e.g., $\tau^2$, $ICC$, and $R^2$); we therefore make informed assumptions or, to be prudent, assumptions that bias the MDEs upward. We set $\tau^2 = 0.000625$ based on recent empirical and theoretical work by Weiss et al. (2016);2 $ICC = 0.05$ based on related estimates for behavioral outcomes in K-12 settings (Jacob et al., 2009; Bloom et al., 2008);3 $R^2 = 0$ and $\sigma^2 = 0.25$, both reflecting a worst-case scenario for statistical precision;4 and, finally, $P = .50$.5


The design includes approximately 300 clients per EOC and 20 EOCs. Currently, all EOCs report serving at least 1,000 clients per year, and many report far more, so averaging 300 study clients per EOC across 20 EOCs over the course of the study period is a reasonable goal. These assumptions yield an MDE of 4.1 percentage points with 20 EOCs. In other words, if the average EOC’s true treatment effect is 4.1 percentage points, then there is an 80 percent chance of finding positive and statistically significant effects at the 5 percent level.6 The MDE increases to 4.3 percentage points if only 15 EOCs join the evaluation, with an average of 400 study clients per EOC. Given empirical results from past messaging campaign interventions (Castleman and Page, 2015, 2016a, 2016b), this design and sample size are well positioned to detect meaningful effects.
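
As a check on these figures, equation (4) can be computed directly under the stated design parameters. The sketch below reproduces the reported MDEs; the t-distribution-based multiplier is our assumption, consistent with the description of a multiplier that “rapidly approaches 2.8 as $J$ increases.”

```python
# Sketch: computing the MDE in equation (4) under the stated design
# parameters (tau^2 = 0.000625, ICC = 0.05, R^2 = 0, sigma^2 = 0.25,
# P = .50, two-tailed alpha = 0.05, power = 0.80).
import math
from scipy.stats import t

def mde(J, n, tau2=0.000625, icc=0.05, r2=0.0, sigma2=0.25, P=0.50,
        alpha=0.05, power=0.80):
    """Minimum detectable cross-site mean effect, in outcome units."""
    multiplier = t.ppf(1 - alpha / 2, J - 1) + t.ppf(power, J - 1)
    variance = tau2 / J + (1 - icc) * (1 - r2) * sigma2 / (J * n * P * (1 - P))
    return multiplier * math.sqrt(variance)

print(round(mde(20, 300), 3))  # ~0.041, i.e., 4.1 percentage points
print(round(mde(15, 400), 3))  # ~0.043, i.e., 4.3 percentage points
```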


Unusual problems requiring specialized sampling procedures


Not applicable.


Any use of periodic (less frequent than annual) data collection cycles to reduce burden


Most of the data for the study will be obtained from administrative records. Table B.2 shows the data sources that will be collected, the frequency of collection, and the analysis purpose of each source.


Table B.2: Data Sources, Frequency of Data Collection, and Analysis Purpose

Data Source: Interviews with EOC staff
Frequency: Once prior to implementation of the intervention (Summer/Fall 2017)
Analysis Purpose:
  • Describe business-as-usual practices
  • Inform the customization of the intervention

Data Source: EOC Client Baseline Information Form
Frequency: Rolling or in batches during the study’s implementation period (February 2018 through April 2020)
Analysis Purpose:
  • Collect information needed to conduct random assignment
  • Obtain clients’ information in order to customize and send messages

Data Source: EOC Program Intake Form
Frequency: Rolling or in batches during the study’s implementation period (February 2018 through April 2020)
Analysis Purpose:
  • Collect information needed to obtain outcome data from FSA and NSC
  • Collect information needed to describe the study sample and conduct subgroup analyses
  • Collect alternate contact information to follow up with clients if cell phone numbers become inactive

Data Source: Biweekly client updates reported by EOC staff via the Text Ed technology platform
Frequency: Biweekly during the study’s implementation period (February 2018 through Summer 2020)
Analysis Purpose:
  • Update clients’ cell phone numbers to ensure text messages are received
  • Update clients’ information to ensure appropriate customization of the intervention

Data Source: Communication data from text messaging partner
Frequency: April 2018, July 2018, October 2018, October 2019, and October 2020
Analysis Purpose:
  • Analyze text message communication data (i.e., number, direction, timing, and message content) to provide descriptive information about the intervention

Data Source: Federal financial aid data from ED’s Federal Student Aid (FSA) office
Frequency: December 2018 and December 2020
Analysis Purpose:
  • Analyze FAFSA completion outcomes

Data Source: Enrollment data from National Student Clearinghouse (NSC)
Frequency: December 2018, December 2019, and December 2020
Analysis Purpose:
  • Analyze client college enrollment outcomes


  3. Methods to maximize response rates and deal with issues of non-response


Background information on clients will be collected through the regular EOC intake process for all sample members, including the fields necessary to request and obtain FSA and NSC data. Participating EOCs will administer the Baseline Information Form to clients who are eligible for and consent to participate in the study.

We expect no missing data for the outcome measures. Federal Student Aid (FSA) data include all students who complete the FAFSA, so the lack of an FSA record will be coded as not completing the FAFSA. National Student Clearinghouse (NSC) data cover about 98 percent of higher education enrollment in the United States, so the lack of an NSC record will be coded as not enrolling in college. Similarly, the lack of a text message record from the service provider will be coded as no text communication occurring.
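
A minimal sketch of this coding rule, assuming hypothetical file and column names:

```python
# Sketch (hypothetical file/column names): coding binary outcomes from
# administrative matches, where the absence of a match is coded as 0.
import pandas as pd

sample = pd.read_csv("study_sample.csv")  # one row per study client
fsa = pd.read_csv("fsa_matches.csv")      # clients with a FAFSA record
nsc = pd.read_csv("nsc_matches.csv")      # clients with an NSC record

sample["completed_fafsa"] = sample["client_id"].isin(fsa["client_id"]).astype(int)
sample["enrolled_college"] = sample["client_id"].isin(nsc["client_id"]).astype(int)
```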

MDRC will use a combination of first name, last name, date of birth, and Social Security number (SSN) to match the research sample to NSC and FSA records. This combination of identifiers yields a higher match rate than name and birth date alone and, when combined with informed consent, allows access to blocked NSC records.7 This approach will thus maximize the study team’s access to the most reliable and complete NSC and FSA records.


We expect a 100 percent response rate for all data collections.

  4. Testing data collection procedures


Because the data for the study consist primarily of administrative records from FSA, NSC, and the technology partner, the procedures and methods for the collection of information do not need to be tested. In addition, most of the data collected on the Baseline Information Form are data that EOCs already typically (though not systematically) collect and thus do not need to be tested.



  5. Individuals consulted on statistical aspects of design


This project is being conducted under contract to the Department of Education by MDRC. The data collection strategy and instruments were developed by the MDRC study team and Lindsay Page of the University of Pittsburgh. The individuals who were consulted on the statistical aspects of the design are:


Organization               Primary Contact            Phone Number
MDRC                       Alex Mayer                 212-340-4476
Insight Policy Research    Lashawn Richburg-Hayes     703-504-9480
MDRC                       Mike Weiss                 212-340-8651


Alex Mayer of MDRC will lead the estimation of the impacts for the Text Ed Study. Dan Cullinan of MDRC (212-340-7603) will lead outcome data collection and provide consultation on impact estimation.

References


Bloom, H. S., Zhu, P., Jacob, R., Raudenbush, S., Martinez, A., and Lin, F. (2008). Empirical issues in the design of group-randomized studies to measure the effects of interventions for children. MDRC Working Papers on Research Methodology. New York: MDRC.


Castleman, B. L. and Page, L. C. (2015). Summer nudging: Can personalized text messages and peer mentor outreach increase college going among low-income high school graduates? Journal of Economic Behavior and Organization, 115, 144-160.


Castleman, B. L. and Page, L. C. (2016a). Freshman year financial aid nudges: An experiment to increase financial aid renewal and sophomore year persistence. Journal of Human Resources, 51(2).


Castleman, B. L. and Page, L. C. (2016b). Parental influences on postsecondary decision-making: Evidence from a text messaging experiment. Available at SSRN: https://ssrn.com/abstract=2778080 or http://dx.doi.org/10.2139/ssrn.2778080


Dong, N., and Maynard, R. (2013). PowerUp!: A tool for calculating minimum detectable effect sizes and minimum required sample sizes for experimental and quasi-experimental design studies. Journal of Research on Educational Effectiveness, 6(1), 24-67.


Jacob, R., Zhu, P., and Bloom, H. S. (2009). New empirical evidence for the design of group randomized trials in education. MDRC Working Papers on Research Methodology. New York: MDRC.


Raudenbush, S. W., and Liu, X. (2000). Statistical power and optimal design for multisite randomized trials. Psychological Methods, 5(2), 199-213.


Raudenbush, S. W. (2015). Estimation of means and covariance components in multisite randomized trials. Learning About and from Variation in Program Impacts. New York: MDRC.


Spybrook, J., and Bloom, H. S. (under review). Determining minimum detectable cross-site mean effect sizes, minimum detectable cross-site variation in effect sizes and minimum detectable effect size differences for categories of sites for multi-site trials.


Weiss, Bloom, Verbitsky-Savitz, Gupta, Vigil, Blake, and Armstrong (2016). How much do the program effects vary across sites? Presented at the Society for Research on Educational Effectiveness spring conference.

1 This formula assumes that $n$ and $P$ are constant across sites and that $\sigma^2$ is approximately constant across sites and experimental conditions.

2 MDRC currently has an IES-funded research methods grant that focuses on filling the information void with respect to reasonable values of $\tau$ for designing multisite trials like this one. The assumption that $\tau^2 = 0.000625$ implies $\tau = 0.025$ (2.5 percentage points). This means that if the EOC-level distribution of average treatment effects is normal, then the average treatment effect at 95 percent of the EOCs will fall within a 9.8 percentage point range (±1.96 × 0.025 yields 0.098, or a 9.8 percentage point range).

3 Given the proposed estimation model, the ICC is negatively associated with the MDE because the EOC fixed effects explain all the cross-site variation in average outcome levels. This is in contrast to cluster-randomized trials, where the ICC is positively associated with the MDE. Therefore, this small ICC is a prudent assumption.

4 For a binary outcome the variance is p(1 − p), where p is the probability of success. Therefore, $\sigma^2$ (and the MDE) is maximized when p = .50, yielding $\sigma^2 = 0.25$. Unless p is fairly extreme (e.g., p < .3 or p > .7), the MDE remains stable.

5 This assumption minimizes the MDE; however, as long as .30 < $P$ < .70, the MDE remains stable.

6 Values are for a two-tailed significance level of 0.05, 80 percent power, $\tau^2 = 0.000625$, $ICC = 0.05$, $R^2 = 0$, $\sigma^2 = 0.25$, and $P = .50$.

7 Some students opt out of being tracked by the NSC. In order to obtain records for those students, the Informed Consent Form must explicitly state that a student’s SSN will be used to access his or her Clearinghouse records and state that the “student agrees to waive their rights under FERPA and allow access to academic records from all institutions they have attended.”


