LIEP OMB comments and responses


Language Instruction Educational Programs (LIEPs): Lessons from the Research and Profiles of Promising Programs


OMB: 1875-0259


 

ICR #: 201103-1875-001
Agency: ED/OPEPD
Title: Language Instruction Educational Programs (LIEPs): Lessons from the Research and Profiles of Promising Programs
Type: ICR New

OMB passback questions on Language Instruction Educational Programs (LIEPs): Lessons from the Research and Profiles of Promising Programs


         The selection criteria for the programs limit the case studies to “promising” programs.  But without looking at “non-promising” programs, how will they be able to differentiate between which characteristics make a program “promising” and which ones do not?  We recommend that some of the case studies be allocated to “non-promising” programs so there is some ability to assess which characteristics might lead to a program being “promising.”  Because of the small, unrepresentative sample, any inference from these differences will necessarily be tentative, but it seems like a wasted opportunity to not even try to assess what characteristics might be associated with “promising” rather than “non-promising” programs.  Perhaps “promising” programs have much more supportive principals; without case studies of the “non-promising” programs there is no way to learn that.  Reading Section A it appears that the universe of case study programs has already been limited to 37 “promising” programs.  My suggestion would be to re-open the case studies to at least some (maybe 6-10) “non-promising” programs, even though that would result in fewer case studies of “promising” programs.  These “non-promising” programs should still satisfy the criteria 2 and 3 described in B1.  There is little reason to do case studies of non-mature programs or non-Title III schools.


The project is using case studies to closely examine a variety of programs and provide a snapshot of different approaches. The project is not experimental or quasi-experimental in nature; there is no intent to infer or generalize findings, but rather to document how the selected sites design, select, implement, and evaluate their LIEPs. The underlying methodology is descriptive and qualitative rather than evaluative. The existing literature discusses the major program features that are essential to the success of LIEPs. What is less clear is how these features are implemented by districts and schools on the ground. This project is focused on adding clarity and detail about implementation rather than on comparing promising to non-promising programs. The project team, under the guidance of the expert panel, searched the literature and used findings from the extant literature to understand the major characteristics of promising LIEPs. In addition to what has been learned from the literature, the selected sites also show progress in meeting the goals of Title III (English language proficiency and academic content proficiency) based upon their AMAO and AYP data. Lastly, the State Title III Directors recommended these sites, based on their high level of expertise regarding the efficacy of programs and practices in their states and their knowledge of programs that are struggling to meet state standards. We worked closely with the ED Title III Program Office to secure nominations of those programs that State Title III Directors recognized as among the best in their states. All sites selected will be mature programs that receive Title III funding.


         Can you describe the methodology for the literature review?  How will you identify practices/programs/strategies that are the most promising or are effective?  Will you apply any standards of evidence of effectiveness, similar to what is used by the What Works Clearinghouse?


The review team used an initial and a final inclusion protocol. To gather literature, reviewers conducted a variety of keyword searches in major academic databases such as JSTOR, PsycInfo, EBSCO, and ProQuest. For initial inclusion, selection criteria focused on each article's authorship, publication date, and publication vehicle: articles needed to have been published in the last 20 years1 (1990 or later), either in peer-reviewed journals, as publications of major research organizations (e.g., the American Institutes for Research, AIR), as outputs of federally funded research centers (e.g., CREDE, CRESST, CRESPAR, CREATE), or as publications of ED's research arm, the Institute of Education Sciences (IES), and its associated Regional Educational Laboratories (RELs). Department personnel, project contractors, and some expert panelists also submitted additional texts for the initial review list. By following these protocols, reviewers generated an initial list of 162 texts as candidates for inclusion in the literature review. Four reviewers then read through these texts to determine which would be included in the final discussion. As they read, all reviewers used a database entry form to record information on a variety of topics, including the literature review category(ies) to which the article pertained, the approach(es) it referenced, any specific program models it referenced, the grade level(s) to which it applied, and a summary of key findings or assertions. Because this review included a variety of different types of literature, the exact conditions under which an article was ultimately included or excluded varied. Reviewers applied the literature-specific protocols to each article they reviewed and were required to provide their rationale for including or excluding each article from the final list. Ultimately, 144 documents were deemed appropriate for inclusion in the review.
It should be noted that 18 of these 144 documents were quasi-experimental studies and seven were experiments.


We relied on the existing literature and the input of our expert panelists to identify the practices, programs, and strategies deemed most promising or effective. This area of research is quite limited. As such, it was determined early on that the standards of effectiveness applied by the What Works Clearinghouse (WWC) would not be applicable to this study, given the paucity of existing research that meets WWC criteria. The literature on LIEPs and ELs is primarily qualitative, and the WWC standard would limit the literature returned to experimental or quasi-experimental studies. There simply are not enough such studies on this topic to sustain a comprehensive discussion of all the review topics that the reviewers and experts deemed important. This is not unique to the study of LIEPs; it is, however, particularly acute in this area. Additionally, the goal of the overall project is not to rank or compare different program models according to outcomes or efficacy, but rather, in the case of the literature review, to provide a summary of the available literature as it pertains to program theory, design, and implementation. The team believes that focusing solely on program efficacy using such standards would not accurately portray the field, since many models lack any experimental research evidence.


         Sites will be chosen based in part on whether they have a system in place for evaluating program success.  Furthermore, RQ3 asks how states and districts can evaluate the effectiveness of these programs.  What is meant by “evaluation” in both of these contexts?  Will any standards of rigor be applied to judge the quality of evaluation when selecting sites?  If so, what are they?


In both contexts, evaluation refers to policies and practices for assessing the fidelity of LIEP implementation as well as for measuring student-related progress and outcomes. We chose sites that had a system in place for evaluating program success, but we did not attempt to judge the quality of these systems when selecting sites. During the site visits we plan to ask district coordinators (questions 16-18), school principals (questions 15 and 16), State Title III coordinators (questions 4, 10, and 11), and teachers (question 9) about their evaluation systems.

         Can you clarify RQ1? What is meant by “research-based components of established LIEPs”?  Will this question get at which LIEPs or particular practices/strategies are effective based on the literature review?


This question addresses the first component of this project: the literature review. The project team searched the literature for the components of LIEPs that show promise for yielding academic and other positive outcomes for ELs. The resulting literature review documents the practices and strategies that support effectiveness.


         Since the sample covers only 10 states, is there any concern that the sample will not be representative enough, given that every state has different standards and assessments and perhaps needs regarding English Learners?  Will the Title III director from each state be solicited for a nomination or just a subset of states?


The current sample provides sufficient range to represent the various types of LIEPs (English-only, dual language, and newcomer programs); the geographic regions (Northeast, South, Midwest, West, and Northwest), including states with significant EL populations as well as states with emerging EL populations, as aligned with the strategic plan of the OELA Assistant Secretary; the urbanicities (rural, urban, suburban); various grade levels; and the ethnicities and languages of the ELs served. Title III Directors from each state were asked to nominate promising LIEPs throughout their states; Directors from nine states responded. Suggested sites in California were recommended by some of the project's expert panelists. Based on recent comments from the Assistant Secretary of the Office of English Language Acquisition (OELA), we are going to broaden the number of states we include, with a focus on states with large EL populations and states with emerging EL populations.

         Are you going to look into spending data or cost metrics such as cost per EL student at the school or district level?


We were not planning to collect these data, since school- or district-level cost data disaggregated by EL student are quite difficult to obtain. However, we could ask districts and schools whether they have such cost information readily available and, where it is, analyze it as part of this study.



1 Although there have been significant changes in the political and philosophical landscapes regarding EL education since the implementation of NCLB in 2002, reviewers determined that limiting this review to only NCLB-era articles would ultimately provide an incomplete representation of the available research.




File Type: application/msword
Author: rhonm
Last Modified By: katrina.ingalls
File Modified: 2011-06-22
File Created: 2011-06-22
