School Survey on Crime and Safety (SSOCS) 2016 and 2018

ED Response to OMB Passback

MEMORANDUM OMB # 1850-0761 v.7


DATE: August 5, 2015


TO: Shelly Martinez

Office of Information and Regulatory Affairs, Office of Management and Budget


FROM: Rachel Hansen

National Center for Education Statistics

THROUGH: Kashka Kubzdela

National Center for Education Statistics

SUBJECT: Response to OMB passback on School Survey on Crime and Safety (SSOCS) 2016 and 2018



1. We appreciate NCES's recognition of Q23 as perception-based, but the question's apparent premise that most schools are not succeeding at crime prevention, and therefore should identify one or more limitations to explain their lack of success, seems inherently problematic. Was it asked exactly this way in previous administrations, and was its placement prior to more objective questions the same? Did NCES consider a more neutral framing, such as "Based on your experience, please rate the importance of the following factors in reducing or preventing crime at your school"?


Q23 has been on the survey since the infancy of SSOCS, dating back to the 1999-2000 collection. The main concern is that rewording this item would break the trend line. Additionally, a change of this nature would require cognitive testing to ensure accurate and reliable responses. Similarly, the placement of this item has remained essentially the same throughout prior administrations. While new items have been added before this item, Q23 has always been asked before the Frequency of Crime and Violence at School and Number of Incidents sections. We agree that it is important to know which factors are rated as important in crime prevention; however, the purpose of this item is to investigate which factors are potential obstacles to preventing school crime, thus informing administrators, policymakers, and researchers of which factors need to be addressed.


2. Question 21, which is a new question on mental health services, seems to follow this model as well. Was this intentional?


Q21 was intentionally formatted in the same manner as Q23. Historically, Q23 has used this format with no known validity issues, so we believe this format is an adequate way to ask this perception question on limitations. Furthermore, Q21 was cognitively tested before it was added to the survey; any change to this item would require additional cognitive testing.


3. SS A, page 5 mentions overlap with the CRDC. Please say more about how the content in SSOCS and the CRDC has been rationalized to be as complementary as possible.


We have expanded the sentence in Part A to: "There is some overlap in topical areas between SSOCS and the CRDC, specifically the counts of incidents reported. Due to the reorganization of the SSOCS sponsoring agency (the Office of Safe and Drug-Free Schools) and funding issues, SSOCS has not been administered since 2010. In order to maintain an understanding of the incidence of crime and the prevalence of disciplinary actions in schools, as well as to meet the needs of Congress and researchers in the field, it was imperative for these data to continue to be collected. NCES requested that the Office for Civil Rights (OCR) add certain items to the CRDC that had previously been included in SSOCS, specifically the number of incidents that occur at schools, as well as disciplinary actions. Therefore, with the reintroduction of SSOCS for the 2015-16 school year, there is now some overlap with the CRDC. However, given that the two data collections have different respondents, it is uncertain whether the responses received on similar items will be comparable. NCES and OCR are interested in investigating the comparability of similar items as a check on reliability and validity. If items are found to be comparable, some could potentially be removed from either SSOCS or the CRDC in future data collections. However, at least one iteration of overlap is necessary in order to analyze whether the items are comparable."


4. SS A, page 10 says that the average burden is assumed to be 11.5 seconds per item. At least four of the new items require writing in counts, presumably from a records check. How well does the available evidence inform the burden estimate for such questions?


The only new item on the survey that requires writing in counts is Q27: Number of arrests. The average burden per item (11.5 seconds) was calculated from principals' responses to a question on the 2006 survey about how much time it took to complete the survey (45 minutes). The 2006 questionnaire contained at least 60 items that required writing in counts, so the completion times principals reported presumably included the time it took to check records; thus, the 11.5-seconds-per-item estimate does incorporate the time needed to consult records for some of the items. We then added approximately 7 minutes (11.5 seconds * 38 new subitems) to the estimated response burden to account for the new items, resulting in an average burden of 52 minutes.
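For reference, the burden arithmetic above works out as follows (rounding the new-subitem total to the nearest minute, as the 52-minute figure implies):

38 new subitems x 11.5 seconds per subitem = 437 seconds, or approximately 7 minutes
45 minutes (2006 estimate) + 7 minutes = 52 minutes average burden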
