Assessing the Impact of Collaborative Strategic Reading on Fifth Graders’ Comprehension and Vocabulary Skills

OMB: 1850-0839


To: Ray Valdivieso and Amy Feldman

From: REL Southwest: Gay Lamey


Date: 10/5/07

Re: REL Southwest response to feedback on the Collaborative Strategic Reading OMB package

Please see REL Southwest’s responses to questions on the Collaborative Strategic Reading OMB package, shown in red font below.


REL Southwest will also revise the OMB documents in response to OMB’s feedback and resubmit them to IES, as both tracked-changes and clean versions, for review and posting.


  1. The study schedule assumed a start date in summer of 2007.  Does this mean that everything is pushed back one full year?


The full study will begin next year once we obtain OMB approval. We are going to conduct a pilot study this year with nine teachers.

  2. Is Exhibit 2 in Part B a description of the criteria that comprise the 3 "tiers"? What procedures will the REL use to "add to the subsample" tier 2 schools? Does the REL expect to have to dip into Tier 3 for its recruiting?


Yes, Exhibit 2 in Part B is a description of the 3 tiers (in order from left to right: Tier 1, Tier 2, Tier 3). Exhibit 2 will be modified to include column headings to make this clear.

If the subsample needs to be enlarged (because some potential sites do not join the study), a corresponding number of Tier 2 sites will be added to the recruiting sample, based on how closely they approach the Tier 1 criteria. This iterative process will continue as needed until our recruiting goals are met.

Based on the preliminary data from MDR, we do not anticipate that it will be necessary to use Tier 3 schools. However, we wanted to develop a full recruiting plan that allowed for all eventualities, hence our inclusion of Tier 3 in the plan.

  3. Why aren't state assessments part of this evaluation?


There are a number of reasons we decided not to include state assessments, most of them psychometric. In our judgment, state assessments do not sufficiently measure the constructs of interest in this evaluation. We are therefore proposing to use the GRADE, which focuses on reading comprehension, listening comprehension, and vocabulary, the variables most closely aligned with the reading skills the intervention should impact. The psychometric properties of the GRADE are stronger than those of the state assessments, and it will thus be more effective at documenting changes in reading performance, particularly among high achievers.


There are also logistical problems with using state assessments. We may operate in more than one state, and comparing scores on different state assessments would require strong (and most likely untenable) assumptions about the equivalence of such scores. We would also be subject to state timelines for releasing data, which may not match our study schedule.


  4. Why aren't the burdens associated with the student pretests and posttests included in the burden estimates? Is the GRADE the instrument being used for these pre- and posttests? Why did ED choose this assessment instrument?


The GRADE will be used to obtain pre- and posttest scores for the 5th grade students in this study, but study staff will handle all elements of test administration. Teachers will not be asked to help administer any portion of the measure, so there is no teacher burden associated with administration of the GRADE. Reasons for the selection of the GRADE are given in our response to question 3.



  5. What is the specific meaning of "data will be stored in accordance with the Privacy Act of 1974" and why is this cited when the data are being protected under IES's more stringent confidentiality statute?


Thank you for pointing this out to us. Leaving that language in Part B was an oversight that occurred over the course of developing our package. We have removed the reference to the Privacy Act of 1974 from Part B and will, of course, protect the data under the IES confidentiality statute as described in Part A.

  6. Why does the REL consider it analytically valid to group former ELLs with current ELLs "if there are too few ELL students in the sample recruited?"


The primary focus of this study is reading comprehension for all 5th graders, regardless of language status. We are, however, interested in some secondary, exploratory analyses, including comparisons of students with different levels of English proficiency. The composition of the sample actually recruited will determine which subgroup analyses can be performed. One possibility is to compare native speakers with a group consisting of former ELLs combined with current ELLs.


We are confident that this combined “former/current ELL” group is both valid and important to examine. We have polled experts on ELL research while doing related work on the What Works Clearinghouse review of interventions for elementary grade ELLs, and we have monitored national trends. Both indicate that language proficiency issues persist for students classified as ELLs within the past three years. Although the former/current ELL distinction is often made by elementary schools for planning purposes, it likely represents a false dichotomy. These proposed analyses for language status subgroups are only exploratory, but they should yield useful insights about differential effects of the CSR treatment.


  7. Please cite the IES confidentiality statute on all instruments rather than just promising confidentiality.


We will use the following confidentiality language on all instruments, letters, and brochures (study materials):


“Responses to this data collection will be used only for statistical purposes. The reports prepared for this study will summarize findings across the sample and will not associate responses with a specific district or individual. We will not provide information that identifies you or your district to anyone outside the study team, except as required by law.”


In Part A, Section 10, we cite the following confidentiality language:


“REL Southwest follows the confidentiality and data protection requirements of IES (the Education Sciences Reform Act of 2002, Title I, Part E, Section 183). REL Southwest will protect the confidentiality of all information collected for the study and will use it for research purposes only. No information that identifies any study participant will be released. Information from participating institutions and respondents will be presented at aggregate levels in reports. Information on respondents will be linked to their institution but not to any individually identifiable information. No individually identifiable information will be maintained by the study team. All institution-level identifiable information will be kept in secured locations, and identifiers will be destroyed as soon as they are no longer required. REL Southwest obtains signed NCEE Affidavits of Nondisclosure from all employees, subcontractors, and consultants who may have access to these data and submits them to our NCEE COR.”



  8. Question 10 on the Fall Teacher Survey does not comply with OMB's classification standards. Please change it to comply by eliminating the third category and replacing it with the following two: "Asian" and "Native Hawaiian or Other Pacific Islander."


Thank you for bringing this error to our attention. We will make these changes to the Fall Teacher Survey.


  9. Incentives: please clarify what the control and treatment schools will be receiving as incentives and whether this study is requesting an exemption to the incentives policy. Please also clarify whether teachers will be compensated additionally for any activities associated with this study that take place outside the school day. Please note that incentives are not meant to "compensate" teachers for their time.


Because the REL contract expires in March 2011, it will not be possible to provide control teachers with the CSR professional development at the end of the study. Therefore, in accordance with NCEE guidelines, only control teachers (who have been randomly assigned to that condition) will be paid $30 for filling out the teacher surveys. No other incentives will be provided.

  10. Please explain how ED arrived at the response rate estimates (e.g., 100%, 90%) and how ED will analyze/adjust for non-response bias.


The 100% response rate refers to the extant data request to participating school districts for student background characteristics. We expect all participating school districts to approve our data request and provide the requested data. Most of the data being requested (sex, race, free/reduced lunch status, ELL status, special education status, state assessment score) should have very little missing data because of NCLB reporting requirements. To the degree that there are missing data on these or other items in the requested datasets, we will use appropriate statistical methods as described below.


The 90% response rate refers to both the teacher survey and the school information sheet data collections. In contrast to a survey of the general population (where response rates are typically 50-60%), the REL Southwest field staff will maintain close contact, often face-to-face, with the small number of participating schools. As a result, commitment and motivation will be high, and follow-up with respondents will be more direct and straightforward. A response rate of 90% is therefore not an unreasonable expectation.


The school information sheet will be emailed to principals, with a follow-up phone call. The REL Southwest field staff will have multiple opportunities to meet with principals at the beginning of the study (during professional development and baseline student testing), which will also provide the option of a face-to-face interview to collect the information.


The Fall Teacher Survey will be administered to treatment teachers during the CSR training. If a treatment teacher is not present at the training, we will ask the teacher to fill out the survey during baseline student testing, when study staff are visiting the schools. Control group teachers will be administered the fall survey during an informational session about the study or during baseline student testing. If a teacher misses the face-to-face administration of the Fall Survey, we will mail the survey to the school address and follow up with a phone call/interview.


The Spring Teacher Survey will be administered by study staff visiting schools to conduct classroom observations. If a teacher is not present when a member of the study staff is at the school, the survey will be administered by mail, with follow-up phone calls/phone interviews as necessary.


For all data collected, unit and item non-response will be handled in accordance with the OMB Standards and Guidelines for Statistical Surveys (September 2006). In particular, per Guidelines 1.3.4 and 3.2.9, if the overall unit response rate is less than 80%, we will conduct a non-response bias analysis and apply appropriate non-response weighting if necessary. However, for the reasons enumerated above, we do not expect the unit response rate to fall below the OMB threshold.
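For illustration only, a weighting-class non-response adjustment of the kind contemplated above could be computed as in the sketch below. The adjustment cells ("district"), the toy data, and all variable names are hypothetical; actual cells would be built from frame variables available for both respondents and non-respondents.

```python
# Hypothetical sketch of a weighting-class non-response adjustment.
# "district" stands in for whatever adjustment cells would actually be
# defined from frame variables; the data are invented for illustration.
import pandas as pd

frame = pd.DataFrame({
    "teacher_id": range(10),
    "district":   ["A"] * 5 + ["B"] * 5,
    "responded":  [1, 1, 1, 0, 1, 1, 0, 0, 1, 1],
})

# Unit response rate within each adjustment cell.
cell_rate = frame.groupby("district")["responded"].transform("mean")

# Inflate each respondent's base weight (1.0 here) by the inverse of the
# cell response rate, so weighted respondent totals represent each cell.
frame["nr_weight"] = frame["responded"] / cell_rate

print(frame.loc[frame["responded"] == 1, ["teacher_id", "district", "nr_weight"]])
```

Given the response rates cited above (90-100%), any such weights would be close to 1, which is why we expect any adjustment to be modest.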

For item non-response, we will evaluate the missing data mechanism (e.g., missing at random [MAR], missing completely at random [MCAR], not missing at random [NMAR]) and use multiple imputation techniques as appropriate.
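As a sketch only, a chained-equations multiple imputation for item non-response could be set up as follows. The variable names and simulated data are hypothetical stand-ins for the study's actual covariates and outcomes, and the chained-equations approach shown is one of several acceptable techniques.

```python
# Hypothetical multiple-imputation sketch using statsmodels' MICE
# (multiple imputation by chained equations); data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICE, MICEData

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "posttest": rng.normal(100, 15, 200),
    "pretest":  rng.normal(100, 15, 200),
    "treat":    rng.integers(0, 2, 200).astype(float),
})
# Knock out some pretest values to mimic item non-response.
df.loc[rng.choice(200, size=30, replace=False), "pretest"] = np.nan

imp = MICEData(df)  # builds one chained-equations model per incomplete column
mice = MICE("posttest ~ pretest + treat", sm.OLS, imp)
results = mice.fit(n_burnin=10, n_imputations=10)  # pools estimates across 10 imputed datasets
print(results.summary())
```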



  11. Previous IES studies have demonstrated impacts that are significant but small. Are there plans to use the data to see which schools/teachers attained particularly good outcomes and which attained particularly poor outcomes, and then follow up with qualitative techniques to figure out what the high-performing schools were doing differently to attain the good outcomes and what the non-performing schools were doing to attain the poor outcomes?


Our main experimental impact analysis treats schools as random effects, making it possible to estimate and explore school-level differences in the outcome results (i.e., whether significant variance exists between schools). We will observe both treatment and control classrooms using the same instrument, which will provide descriptive data on instructional practices in particularly high- and low-performing classrooms and schools. These observational data, together with information from the teacher surveys (teacher experience, level of education, extra aids and resources in the classrooms, other professional development teachers received), will provide context for interpreting the impact analysis results and will help us explore reasons for particularly low- and high-performing schools and classrooms.
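For concreteness, the school-as-random-effect structure can be sketched as below. The data are simulated, the effect sizes are arbitrary, and the real model would include student- and teacher-level covariates.

```python
# Hypothetical sketch of the two-level impact model with a random
# intercept for school; data and effect sizes are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, per_school = 20, 30
school = np.repeat(np.arange(n_schools), per_school)
treat = np.repeat(rng.integers(0, 2, n_schools), per_school)  # school-level random assignment
school_effect = np.repeat(rng.normal(0, 3, n_schools), per_school)
posttest = 100 + 2.0 * treat + school_effect + rng.normal(0, 10, school.size)

df = pd.DataFrame({"school": school, "treat": treat, "posttest": posttest})

# The estimated group (school) variance speaks to whether significant
# between-school variation in outcomes exists.
fit = smf.mixedlm("posttest ~ treat", df, groups=df["school"]).fit()
print(fit.summary())
```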


In addition, we plan to conduct correlational (non-experimental) analyses to explore whether the level of implementation of the CSR program (fidelity) is related to student outcomes, among treatment group teachers only. This analysis is described in OMB Clearance Request Part B, p. 15.
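A minimal sketch of that fidelity analysis, assuming a classroom-level fidelity score from the observation instrument (all names and data below are hypothetical):

```python
# Hypothetical correlational sketch: classroom mean outcomes regressed
# on an observed fidelity-of-implementation score, treatment group only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
classrooms = pd.DataFrame({
    "fidelity":      rng.uniform(0, 1, 40),   # implementation score from observations
    "mean_posttest": rng.normal(100, 8, 40),  # classroom mean posttest score
})

fit = smf.ols("mean_posttest ~ fidelity", data=classrooms).fit()
print(fit.params, fit.pvalues)
```

Because fidelity is not randomly assigned among treatment teachers, any such relationship is descriptive rather than causal.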


