OMB-ED Q&A May 2009


Summer Reading Program Study


OMB: 1850-0864


OMB-ED Q&A on 200902-1850-001, Summer Reading Program Study

May 2009


Question 1. The monitoring postcards are confounded with the treatment of receiving the books. How does IES plan to address this? We would welcome some discussion about this issue.

Response: The intervention in this study consists of receiving eight books (matched to the student for content and difficulty) along with weekly monitoring postcards that serve to remind the student of the books. This reminder function is the primary purpose of the monitoring postcards.

The information on the postcards themselves will provide some information about fidelity of implementation for the treatment group, but will only be used descriptively and not for analyzing treatment vs. control impacts for the main research question.

Question 2. Pretesting. Part B notes that the student interest survey and the postcards will be pretested. Did this already occur? What were the results? We are unsure about whether 3rd graders necessarily know what a biography is, for example.

Response: The student interest survey and postcards were piloted with eight 3rd graders. The students were shown both the survey and the postcard and were asked to read and complete them. Students were able to follow the directions and complete both instruments with no problems. Students were then interviewed regarding the readability and comprehension of both instruments, and it was determined that the instruments are appropriate for 3rd graders.

Question 3. Final postcard. How does IES plan to use responses to the questions about newspapers, magazines, and children's books (4,5,6)?

Response: The final postcard is also intended only to provide descriptive information about the treatment group. These data will not be used for analyzing treatment vs. control impacts for the main research question.

Question 4. Monitoring postcard. How will IES use the responses to question 4 (how interesting was this book?)?

Response: Monitoring postcard data will be used only to provide descriptive information about the treatment group. These data will not be used for analyzing treatment vs. control impacts for the main research question.

Question 5. How did this survey come about? Was it in response to a needs assessment? Please provide some of the history for how the REL decided to produce this study.

Response: Results from REL Southwest’s large-scale quantitative needs assessment survey, conducted in 2007-2008, identified research related to reading as a priority among all five states and all constituents surveyed. Through its outreach and dissemination work, REL Southwest received requests from several districts in Texas expressing interest in evaluating and examining summer reading programs. Subsequent discussions with the REL Governing Board and other stakeholders led to the development of this project concept.


Question 6. Please explain what is expected for response rates.

Response: The expected response rate for postcards is 60%. The response rate for the outcome measure (SRI) is expected to be considerably higher (>80%) since the test is given in school.

Question 7. Please explain how IES plans to address non-response bias.

Response: For our main research question, analyzing treatment vs. control impacts, non-response bias is not likely to be a concern. Attrition is expected to affect treatment and control groups equally (on average) in this “intent-to-treat” analysis. We will be able to examine whether significant differential attrition or response bias occurred.
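The differential-attrition check mentioned above could, for example, take the form of a two-proportion z-test comparing outcome-test response rates between the treatment and control groups. The sketch below is illustrative only; the function name, sample sizes, and counts are hypothetical and are not from the study.

```python
# Illustrative sketch (not the study's actual analysis code) of checking for
# differential attrition: compare outcome-measure response rates between the
# treatment and control groups with a two-proportion z-test.
from math import sqrt, erf

def two_prop_ztest(resp_a, n_a, resp_b, n_b):
    """Two-sided z-test for a difference in response proportions."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    p_pool = (resp_a + resp_b) / (n_a + n_b)               # pooled response rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    normal_cdf = 0.5 * (1 + erf(abs(z) / sqrt(2)))          # Phi(|z|)
    p_value = 2 * (1 - normal_cdf)                          # two-sided tail
    return z, p_value

# Hypothetical counts: 420 of 500 treatment and 430 of 500 control students tested.
z, p = two_prop_ztest(420, 500, 430, 500)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A non-significant p-value here would be consistent with attrition affecting the two groups equally, as the response anticipates.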

For the postcards, non-response bias could exist. We will have postcards for some treatment students, whereas others will not return them, and the respondents may differ from the non-respondents in some way. If we wanted to draw valid inferences about the population of all treatment students, we would need to apply non-response weights. However, given that these data are being used only in a supplementary and purely descriptive manner, we think it is sufficient to simply describe those students who did return the postcards and explain the limitations of this approach.

Should we need to apply non-response weights, we would do so using auxiliary information available on all students (respondents and non-respondents). This would include school variables, as well as matching the student zip code to Census data on median income and percent urban. Logistic regression would then be used to model response propensity scores. We would then develop adjustment cell propensities and use these to create the non-response weights.
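The weighting plan described above (logistic regression on auxiliary covariates, adjustment cells on the estimated response propensity, inverse-response-rate weights) can be sketched as follows. This is a minimal illustration under assumed data: the covariates are simulated stand-ins for the school and Census variables mentioned, and all names are hypothetical rather than the study's actual variables.

```python
# Sketch of the non-response weighting approach: fit a logistic model of
# postcard response on auxiliary covariates, form quintile adjustment cells
# on the estimated propensity, and weight respondents by the inverse of the
# observed response rate in their cell. Data below are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Auxiliary variables available for respondents and non-respondents alike,
# e.g. standardized school covariates, zip-code median income, percent urban.
X = rng.normal(size=(n, 3))
true_logit = 0.4 + 0.8 * X[:, 0] - 0.5 * X[:, 1]
responded = rng.random(n) < 1 / (1 + np.exp(-true_logit))

def fit_logistic(X, y, iters=200, lr=0.1):
    """Gradient-ascent fit of a logistic regression with an intercept."""
    Xb = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xb @ beta))
        beta += lr * Xb.T @ (y - p) / len(y)  # average log-likelihood gradient
    return 1 / (1 + np.exp(-Xb @ beta))

propensity = fit_logistic(X, responded.astype(float))

# Quintile adjustment cells on the estimated response propensity.
cells = np.digitize(propensity, np.quantile(propensity, [0.2, 0.4, 0.6, 0.8]))
weights = np.zeros(n)
for c in range(5):
    in_cell = cells == c
    cell_rate = responded[in_cell].mean()         # observed response rate in cell
    weights[in_cell & responded] = 1 / cell_rate  # respondents carry cell weight

# Weighted respondents now represent the full sample: weights sum to n.
print(round(weights.sum()))
```

Grouping into adjustment cells, rather than weighting by each student's individual inverse propensity, is a common way to stabilize the weights when the propensity model is imperfect.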

Question 8. How reliable is the third graders' self-reporting likely to be? Does IES plan to have any other ways to verify what occurred over the summer?

Response: We do not have specific psychometric information about the postcards, such as reliability coefficients. The pilot testing did not reveal any problems with student reporting. Given the varied nature of the items and the fact that they are self-report measures for third graders, we expect a range of reliabilities from fairly high to relatively low. These data are supplementary descriptive information and will not be used in any way for analyzing treatment vs. control impacts for the main research question.

IES does not have other plans to verify what occurred over the summer.

Question 9. Part A, Page 4. Please include some citations for the statements about summer reading loss being particularly prominent among economically disadvantaged students.

Response: Here are the full references for the two studies that are cited in Part A, Page 4:

Alexander, K. L., Entwisle, D. R., and Olson, L. S. (2007). Lasting consequences of the summer learning gap. American Sociological Review, 72, 167-180.

Cooper, H., Nye, B., Charlton, K., Lindsay, J., and Greathouse, S. (1996). The effects of summer vacation on achievement test scores: a narrative and meta-analytic review. Review of Educational Research, 66, 227-268.

Here are three additional references related to this topic:

Fairchild, R., McLaughlin, B., and Brady, J. (2006). Making the most of summer: A handbook on effective summer programming and thematic learning. Baltimore, MD: Center for Summer Learning.

Kim, J.S. (2006). Effects of a voluntary summer reading intervention on reading achievement: Results from a randomized trial. Educational Evaluation and Policy Analysis, 28, 335-355.

Kim, J.S. (2007). The effects of a voluntary summer reading intervention on reading activities and reading achievement. Journal of Educational Psychology, 99, 505-515.

File Type: application/msword
File Title: OMB-ED Q&A on 200902-1850-001, Summer Reading Program Study
Author: Bridget Dooling
Last Modified By: Bridget Dooling
File Modified: 2009-05-21
File Created: 2009-05-21
