
Early Childhood Longitudinal Study, Kindergarten Class of 2010-2011

Response to OMB Comments

OMB: 1850-0750




March 13, 2009


MEMORANDUM


To: Shelly Martinez, OMB


From: Gail Mulligan, ECLS-K:11 Project Officer


Subject: Responses to OMB questions on request for clearance for the Early Childhood

Longitudinal Study, Kindergarten Class of 2010-2011




Original Q2. Please clarify if NCES is requesting that the 60 day notice be waived when it submits the full scale package to OMB. If so, this should be included in A1 of the supporting statement.


Original Response Q2: Yes, NCES will be requesting that the 60-day notice be waived for the submission of the full-scale package to OMB. A statement to this effect has been added to A1.


Follow-up Comment Q2. Please prepare an updated SS with this and other (below) information.


Follow-up Response Q2: A sentence related to this issue has been added to page A-1. Changes to the supporting statement have been highlighted in yellow to make them easier to find. Thanks.



Original Q4. Please explain why the sampling plan refers to “APIs.” Does NCES believe that schools will still be collecting racial information on incoming kindergartners using the old race categories in 2010? Who is the group NCES really wants to oversample? Asians? Asians as well as the Native Hawaiians and Other Pacific Islanders? Does NCES plan to report results from these two groups together from ECLS-K: 11?


Original Response Q4: Ideally, all NCES studies would be able to generate sample sizes sufficient to support estimates for Asians and Pacific Islanders separately. In the ECLS-K, we oversampled Asians and Pacific Islanders together because, at that time, it was not common for schools to identify these groups separately. For consistency with the ECLS-K, we intend to oversample Asians and Native Hawaiians/Other Pacific Islanders as one group. Given our experiences in the ECLS-K, we believe we do not need to sample them separately to be able to report on them separately for the overall cohort in the kindergarten year. (In the first report ever produced using ECLS-K data, which was a descriptive look at the cohort, we reported estimates for them separately. See http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2000070).


To the extent possible, NCES will report estimates for Asians separately from Native Hawaiians and Other Pacific Islanders. However, for some finer-level analyses, and for analyses in later years when our sample sizes become somewhat smaller due to attrition, it may not be possible to report on them separately. We realistically may have sufficient sample to support separate estimates only for Asians (see NCES 2008088 for an example, http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2008088), while including Pacific Islanders in an "other races" category along with children in other, less common racial groups (e.g., American Indians/Alaska Natives). Though not ideal given current reporting requirements, our limited resources preclude us from oversampling Pacific Islanders so that we have enough in the sample to report on them separately in more detailed analyses or in later years. This approach also maintains consistency with the sampling approach used for the first ECLS-K study, which will help facilitate cross-cohort comparisons.


We do understand that schools are required to collect data on Asians and Native Hawaiians/Pacific Islanders separately by the 2009-10 academic year, so we may be able to obtain information from schools that identifies these children separately. However, given that we will be in schools in only the second year after this requirement is implemented, we are unsure how universal this reporting format will be, as well as how good the quality of the data will be. Also, it is our understanding that, for reporting purposes, states are allowed (by the US Department of Education) to consolidate race/ethnicity data into the following categories:

 

White, not Hispanic

Black, not Hispanic

Asian, not Hispanic

Pacific Islander, not Hispanic

AIAN, not Hispanic

More than one race, not Hispanic

Hispanic of any and all races


We will collect information for each student in the study to determine whether he or she is a member of the following racial and ethnic groups; these questions will be asked of the students' parents. Such data will also be collected for the parents themselves, as well as for the students' teachers and care providers. So, even if we do not obtain race data in these categories from schools at the time of sampling, we will collect the data needed to identify Asians separately from Native Hawaiians and Other Pacific Islanders.

 

1) Which race are you? (check all that apply):

White

Black

Asian

Pacific Islander

American Indian/Alaska Native

 

2) Are you Hispanic?

Yes

No


Follow-up Comment Q4. Substantively, we are fine with NCES’s plans. However, please change all references from “API” in the SS to the currently used categories in the OMB standard. Also, the question arrayed in this response does not comply with the OMB standard. We recommend the term “check one or more” over “check all that apply.” Further, the term Pacific Islander should be changed to “Native Hawaiian or Other Pacific Islander.”


Follow-up Response Q4: All references to APIs have been changed to "Asians, Native Hawaiians, and Other Pacific Islanders" or similar wording (see pages A-4, B-1 through B-8, B-28, B-30). Where appropriate, we have specified that these groups will be oversampled together as a single group (pages A-4, B-1, B-7). We have also noted in a footnote on B-5 that, for the purpose of PSU selection, these groups will be included together in the stratification scheme.

In the survey instruments submitted with the revised package, we will be sure to include revised race/ethnicity questions with the instruction to "check one or more," and we will use the term "Native Hawaiian or Other Pacific Islander" in all study instruments, documentation, and reports.



Original Q7. Please clarify the second sentence of A16. What specifically is being released from the field test? Does this release include microdata?


Original Response Q7: "Base year collection" refers to the full-scale, national data collection, not to the field test data collection. The data from the full-scale data collection will be made available to researchers as both public-use and restricted-use data files. NCES has not released field test data as part of either its public-use or restricted-use data products. Only the reports of the field test results (not the field test data themselves) will be broadly available to researchers.



Historically, the field test data have been collected for the sole purpose of finalizing the data collection instruments, particularly the assessment battery. Aside from the assessment data, demographic characteristics of the screened children (e.g., date of birth, sex, and race), geographic location, and other descriptive data about the school will be shared with Educational Testing Service (ETS, the subcontractor responsible for ECLS-K:11 assessment development) for its analysis and development of the final assessment battery. These data are shared with ETS only for this specific purpose.



The ECLS-K:11 field test also will collect data from vision and hearing screenings for NEI and NIDCD. Aside from the screening data, demographic characteristics of the screened children, geographic location, and other descriptive data about the school will be shared with NEI and NIDCD for their analysis. They plan to use the field test data to assess the operational feasibility of conducting the screenings in a large-scale collection, as well as to analyze the presence of vision and hearing difficulties in a larger sample than is typically available to them. These data are shared with NEI and NIDCD only for this specific purpose.



Weights are not calculated for these data because they come from a purposive sample. As a result, they would have limited utility for general analysis. Following the practice from the ECLS-K and ECLS-B, we do not plan to publicly release any data files from the ECLS-K:11 field test.



Follow-up Comment Q7. Please amend the SS with this clarification.


Follow-up Response Q7: The text in section A-16 has been revised to make this clarification.  



Original Q8. Is the national database of district cooperation requirements going to be an NCES-wide resource? Will NCEE also be able to use it?


Original Response Q8: Prior to going into the field for any national survey, Westat compiles a list of districts and their requirements using information that is publicly available on the sampled districts’ websites. This information is compiled in a spreadsheet to be used as a tool for district recruitment efforts. This spreadsheet can be shared with NCEE upon request. It is not, however, being prepared specifically as an NCES resource.


Follow-up Comment Q8. Our concern is that NCES should not have to pay a contractor to create this resource for each study if multiple studies can benefit from it. Please reassure us that the ECLS staff will make others at NCES and IES aware of this resource for use as warranted.


Follow-up Response Q8: We will obtain information about district requirements from Westat and make its availability known to others as warranted.



Original Q10. Please update A10 as well as all applicable references (e.g., in affidavits) to reflect current NCES applicable law and have Marilyn Seastrom review before resubmission to OMB.


Original Response Q10: Section A10 has been updated to refer to the most recent applicable laws. The affidavit has been replaced with the one currently available on the NCES website. All revisions reflected here were recommended by Marilyn Seastrom after her review of these materials.


As noted, some of the respondent materials in Appendix F are the materials used in the ECLS-K and will be updated and included in the revised OMB package. We will ensure that the revised materials also use the currently approved language related to privacy. For example, on page F-15, the text in the right-most column that begins "The confidentiality..." will be updated. Additionally, in the letter for parents (page F-11), which had been updated for this package, the sentence ALL INFORMATION COLLECTED DURING THIS STUDY IS CONFIDENTIAL will be revised so that it does not use the word confidential in any form. We will ask Marilyn Seastrom to review these materials before submitting them with the revised package.


Follow-up Comment Q10. Please update the SS with this new language.


Follow-up Response Q10: Text throughout section A10 has been updated, in particular so that it references the most recent applicable laws.



Original Q14. While we have not seen written results, we understand that HSLS recently completed an experiment which indicated that a $500 payment to schools made no difference in recruiting over no payment. Given this, please justify the continued payment of an “honorarium” to schools, let alone increasing it.


Original Response Q14: Our proposed honorarium to schools for the ECLS-K:11 is based on our experience in the ECLS-K. The ECLS-K, which employed an honorarium, had relatively high school participation rates (75 percent or above in every round). We expect that, given the current economic state of the country and increased demands on schools from the federal government, the honorarium would be a stronger incentive for elementary schools now than it was in the ECLS-K.


Without access to written information about the HSLS experiment, it is hard to know exactly how the ECLS-K:11 plans for presentation and use of an incentive compare to what was done in the HSLS experiment. Our understanding is that schools identified to receive an incentive, but whose districts refused participation, never had an opportunity to be informed of the incentive, which could have affected the results. We suspect that the HSLS experience is different from what we will encounter in the ECLS-K:11 since HSLS was recruiting high schools, whereas the ECLS-K:11 is recruiting elementary schools and kindergartens. The competing demands on schools' time and the relative impact of an incentive for participating in a voluntary study may not be analogous for high schools and elementary schools. For example, our experience working with elementary schools has shown us that the current focus on mandatory testing required by NCLB is a large concern for them; mandatory testing for NCLB places more burden on elementary schools than on high schools. The demands of required testing crowd out non-instructional time for voluntary studies like the ECLS-K:11, so schools are increasingly less likely to cooperate. The additional burden of a longitudinal survey makes securing cooperation in the base year even more problematic.


In addition, the perceived burden on the school may differ between the HSLS and the ECLS-K:11. HSLS assessments were administered over the internet, whereas, in the ECLS-K:11, a team of field staff will conduct one-on-one assessments with individual children. ECLS-K:11 field staff will be in the school collecting base year data (including one-on-one assessment data) for 4 days in both fall and spring. As a result, we suspect the time and resources asked of a school in the HSLS would be less than those asked of schools in the ECLS-K:11, and that without an incentive, the schools will be unwilling to participate.


NCES has strict requirements for response rates and has been struggling with the issue of falling response rates in its studies. For example, response rates for the NHES fell to unacceptable levels, which prompted a redesign of the survey. The concern about reaching acceptable response rates is particularly relevant for the ECLS-K:11's base year recruitment effort because if a sampled school is lost, the study loses access to an entire group of students (rather than one student). The successful use of incentives through the course of the ECLS-K led us to propose continuing their use in the ECLS-K:11. In the ECLS-K, we offered schools $200 for their participation. For the ECLS-K:11, we have increased that amount to $300 to account for inflation since the implementation of the ECLS-K. Given how crucial it is for the study to secure school cooperation, we feel it would be beneficial to use all methods available to us, including incentives, to try to do so.


Follow-up Comment Q14. We will not approve a 50% increase in school incentive unless NCES proposes an experiment or other specific evidence to demonstrate its necessity.


Follow-up Response Q14: We understand OMB's concern with offering a larger incentive without an experiment or other evidence indicating that it is necessary. We will offer schools a $200 incentive in ECLS-K:11, as we had done in ECLS-K. The supporting statement has been revised to refer to a $200 school incentive (see pages A-24, B-27).



Original Q15. Also, please clarify what is meant by “An average honorarium of $300 per school is recommended” in B.3.2. Did NCES give varying school incentive levels in ECLS-K? Is that what it is proposing to do for ECLS-K: 11?


Original Response Q15: In later rounds of the ECLS-K, the size of the honorarium varied depending on the number of participating children attending the school (the number of children could vary due to the presence or absence of movers, as well as study drop-outs). As a result, the honorarium was reported as an average in the package. In the base year of the ECLS-K:11, all the participating children will be in the originally sampled school, and the number of children sampled in each school will be roughly the same (i.e., there will be less variation in the number of participating children across schools in the kindergarten collection of the ECLS-K:11 than was the case in later rounds of the ECLS-K), so each school would receive an honorarium of $300.


Follow-up Comment Q15. Please clarify whether NCES is proposing a range of incentives for the full scale ECLS-K:11 and what the proposed maximum would be.


Follow-up Response Q15: In response to Follow-up Comment Q14, we will offer schools a $200 incentive in the ECLS-K:11 data collection, as we had done in ECLS-K. The text referring to an "average" honorarium has been removed from page A-24.



Original Q18. Please clarify the anticipated burden of both the vision and hearing screenings as there is a discrepancy between the burden table in A12 and the “Information Collection” document that was submitted in lieu of an actual instrument. What is the source of these burden estimates?


Original Response Q18: The anticipated burden is 30 minutes for the hearing screening and 15 minutes for the vision screening. The hours entered in the attachment for the vision and hearing screenings were in error; response totals were copied instead of the hours from the A12 table. A revised IC attachment is included with this memo. The revised attachment is now aligned with the hours that were reported in ROCIS.


Follow-up Comment Q18. Please prepare corrected attachments.


Follow-up Response Q18: The corrected attachments are being submitted along with the revised supporting statement. Upon approval, they will be uploaded to ROCIS.



Original Q19. What was the average burden on a kindergartner at any one data collection for ECLS-K and ECLS-B?


Original Response Q19: The average burden on a kindergartner at any one data collection for the ECLS-K was 45-60 minutes. In the ECLS-B, the average burden was 33 minutes.


Follow-up Comment Q19. Please confirm that this understanding is correct. For kindergartners, the ECLS-K had an average burden of 45-60 minutes for any one data collection. The ECLS-K:11 proposes an average burden of 1.75 hours for the field test (for English speakers), a 1.00-hour average burden for the full-scale fall administration, and an average burden of 1.5 hours for the spring administration.


If so, please provide a statement specifically indicating why a doubling of burden is (1) appropriate and ethical for this age group, ideally including any other federal studies that have comparable burden, and (2) of sufficiently high public utility compared to the ECLS-K to warrant this burden increase.


Follow-up Response Q19: Your understanding is correct. The average burden in the ECLS-K ranged from 45 to 60 minutes, with 60 minutes being the upper limit. The additional time in the ECLS-K:11 is for activities related to the vision and hearing screenings. The 1.75 burden hours in the field test are estimated for the assessment (1.00), vision screening (.25), and hearing screening (.50). The 1.00 burden hours for the fall full-scale collection are for the assessment only, and the 1.5 burden hours for the spring full-scale collection are for the assessment (1.00) and hearing screening (.50). As mentioned in response to another question, at this time we do not plan to conduct the vision screening in the kindergarten full-scale data collection, though we may request approval for its inclusion in future rounds, pending the outcome of the feasibility study in the field test.


We reviewed the timings of child assessments for large-scale studies conducted by organizations other than NCES that include a population of children close in age to the children who will be included in the ECLS-K:11. Specifically, we obtained information on the Pre-Elementary Education Longitudinal Study (PEELS), the Special Education Elementary Longitudinal Study (SEELS), the Head Start Impact Study (HSIS), and the Head Start Family and Child Experiences Survey (FACES). The timing information we could find in published materials and from study staff ranged from 40 minutes to 60 minutes. SEELS had the highest child burden at 60 minutes, though this burden included both direct child assessments and a short student questionnaire. The timings for FACES, PEELS, and HSIS were 40 minutes, 40 minutes, and 55 minutes, respectively, for the cognitive assessments.

The cognitive assessments in the ECLS-K:11 are expected to take about as long as the cognitive assessments in these other studies. The additional hearing and vision screenings will not be as demanding on the child and do not require the same degree of sustained attention as the cognitive assessment; some screening activities included in the field test do not require a response from the child at all. The tasks will offer a break from the assessment situation in that the child will be able to move around and participate in novel activities.


The timings we have provided in the burden statement, which are noted above, are estimates of the maximum amount of time we expect the assessments could take. The burden of the field test child assessments is fairly well established based on our experiences in the ECLS-K. However, the burden of the two screenings is not well established in a study of this size. During the course of the field test, we will be identifying ways to make the protocol more efficient, and thus less burdensome. We are sensitive to the issue of burden for children, and we intend to monitor how they behave during the screenings, specifically whether the screenings are too much for them after they complete the cognitive assessments. We have precedent from the ECLS-K field tests, in which we initiated a stopping rule when an assessment ran too long for a child; we will use the same procedures for the ECLS-K:11. Additionally, if the field test shows that the burden appears to be too great for most children to handle, we will not include the hearing screening in the full-scale collection.


Screening for vision and hearing has high public utility, which is why it is included in the ECLS-K:11. Approximately 15 percent of U.S. children aged six to nineteen have a measurable hearing loss in one or both ears (Niskar et al., 1998). Any degree of hearing loss can be educationally handicapping for children; even children with mild to moderate hearing losses can miss up to 50 percent of classroom discussions. Unmanaged hearing loss in children can affect their speech and language development, academic capabilities and educational development, and self-image and social/emotional development; impairments in hearing can contribute to deficits in speech and language acquisition, poor academic performance, and social and emotional difficulties (Cunningham et al., 2003). Otitis media is a leading cause of acquired hearing loss; other contributors include trauma to the nervous system, damaging noise levels, and medications. The American Academy of Audiology recommends that all children be screened for hearing loss at least once during the preschool years. It also recommends that hearing loss be ruled out whenever a child is being considered for special education services (American Academy of Audiology, 1997). Inclusion of a hearing screening in the ECLS-K:11 will provide researchers with a unique ability to look at associations between hearing loss and a host of educational experiences and outcomes in a large-scale, nationally representative study, to examine the emergence of hearing difficulties across time, and to see whether and how the timing of the emergence of hearing difficulties may be related to both environmental factors and educational experiences and outcomes.


Impairments in vision can also lead to learning and socio-emotional difficulties. About one in four school-aged children has a vision problem such as amblyopia (lazy eye), strabismus (crossed eye), or myopia (nearsightedness). Studies find that there are racial and ethnic differences in the prevalence and incidence of refractive disorders. For example, a study of 2,523 children in Birmingham, Alabama, found that 33.6 percent of Asian children and 36.9 percent of Hispanic children had astigmatism of 1D or more (Collaborative Longitudinal Evaluation of Ethnicity and Refractive Error Study Group, 2003).


As mentioned previously, the field test will assess the feasibility of broadening the use of vision screening equipment (the EVA) that has so far been used only in clinical settings. NEI has a keen interest in the results of this test because the EVA is much less expensive than other equipment more commonly used in broader settings; positive results from the field test could lead to this equipment being made available and used more broadly, especially in schools, which, in turn, would allow more school-age children to have their vision tested.


We would like to clarify the information we provided in our first response about the assessment time for the kindergarten rounds of the ECLS-B, which was 33 minutes. This was the timing only for the cognitive assessments. The gross and fine motor skill assessments and physical measurements (height, weight, mid-upper arm circumference (MUAC), and head circumference (for children born with very low birth weight only)) took approximately 10 minutes, for a total of 43 minutes. The burden in the preschool collection was actually higher, at about 1 hour and 5 minutes: the cognitive assessments were approximately 45 minutes, the direct assessment of socioemotional functioning was 10 minutes, and the motor skills and physical assessments were 10 minutes.



Original Q21. OMB is encouraging each principal statistical agency to contribute the results of cognitive (and subsequent field or other) testing for any newly or recently tested content to the interagency “Q Bank,” currently housed at NCHS. Please indicate what ECLS-K: 11 will be able to contribute and approximately when.


Original Response Q21: We expect that the questionnaires fielded in the base year of the ECLS-K:11 will have few new items. Most items will have been previously fielded in the ECLS-K, the ECLS-B, or other studies. As a result, we currently have no definitive plans for cognitive testing. If new items (i.e., those that have not been previously fielded) are included in the ECLS-K:11 questionnaires, we will test these items and submit the test results to the interagency "Q Bank" in 2010 if the selected items are included in the full-scale data collection.


Follow-up Comment Q21. We would like items to be submitted to the Q-bank even if NCES determines not to use them, as it is designed to be a learning tool (i.e., results of what “didn’t work” should be submitted alongside those that do).


Follow-up Response Q21: We're sorry that our original response was unclear. We will submit the results for any items that are tested, regardless of whether they are included in the full-scale collection. When we alluded to items that do not end up being included in the full-scale collection, we meant items that we considered during instrument planning and ultimately dropped without testing (either in the field test or in cognitive testing), not ones we tested and dropped because they didn't work.



Original Q28. Please clarify why there is apparent duplication across the teacher and administrator questionnaires (e.g., school lunch, half day versus full day kindergarten).


Original Response Q28: While there are similar items in the teacher and administrator questionnaires (e.g., school lunch, half-day versus full-day kindergarten), the population on which each respondent is reporting will be different. School administrators are asked to report on programs provided within the school, so their responses relate to the entire school (as in the case of the school lunch program) or to the programs available to the entire population of students in a particular grade (e.g., kindergartners). Teachers, on the other hand, are asked to describe the students in their own classrooms and the programs available to those students and in which those students participate. We did have discussions with the Technical Review Panel about whether it was necessary to obtain information on certain topics (e.g., availability of computers) at both the school and classroom levels. We will have further discussions about this over the next few months as we finalize the instruments.


Follow-up Comment Q28. To the extent that duplication remains for the full scale instruments, NCES should specifically justify those in its next submission.


Follow-up Response Q28: To clarify our previous response, there is no exact duplication between the full-scale instruments in their current form. Each respondent (e.g., teacher or school administrator) reports on a different population (e.g., classroom versus entire school). We do not plan to have duplicate items, asked about the same person or group, across any instrument in the full-scale data collection. However, if any duplication remains in the full-scale instruments, NCES will specifically justify those items in the next submission. For items that appear similar, we will also indicate to what person or group they refer.


Additional Questions for the Special Education Questionnaire (Appendix D)


Questionnaire A


Q12. We are interested in knowing whether/how much coursework the teachers have had in Response to Intervention or Early Intervening Services techniques, but aren't sure how prevalent it may be to offer coursework related to early intervening techniques. Can NCES (perhaps in consultation with colleagues at NCSER) look into this issue as a possibility for the full scale study?


Response: Yes, we will look into this issue as a possibility for the full scale study and consult with the NCSER staff member providing us with substantive support.


Q15. How is a general education vs. special education classroom defined? I suggest that a class be classified as a special education class if half or more of its students have IEPs, and as a general education class otherwise.


Response: In previous rounds of the ECLS-K, we did not include a definition of a general education classroom or a special education classroom. As this is a self-administered questionnaire, we have let the respondents define these for themselves, and we have had very little missing data (0.1 percent nonresponse for each of these items in the last round of data collection). In the interest of keeping the items similar for cross-cohort comparisons, we propose keeping these items as they were presented in the ECLS-K. Researchers can use this information in conjunction with other information we collect from the teacher about the number of children in the classroom who have a disability, have an IEP, receive special services, etc., to see how teacher reports of the type of classroom line up with the characteristics of their students.


Q16. How are the responses to this question used? I'm not immediately seeing the utility of this question.


Response: The identical question (at least parts a, b, and c) is in the Teacher Questionnaire. The item is repeated here for comparison. Parts d and e were added at the request of staff in the Office of Special Education Programs (OSEP) to capture some of the current concerns of special education teachers.


Questionnaire B


Q1. Please include students receiving services through IFSPs in this question.


Response: The special education questionnaires submitted in the revised package will incorporate this recommended change.


After Q5, please add a question about whether the child had an IFSP during the year prior to kindergarten.


Response: The special education questionnaires submitted in the revised package will incorporate this recommended change.


Q13. How are Rehabilitation Services defined? Does it include vocational services? If not, please include a category for employment services.


Response: The categories in question 13 are the authorized services under IDEA. “Rehabilitation services” are vocational- and employment-focused.


How are the responses to Q22 used?


Response: Q22 is a follow-up question to Q21 on assistive technology. Note that "computer hardware..." is one of the Q21 sub-items, which asks whether the technology is for the student's sole use or is shared. Q22 asks whether it is available to the student full time. These items can be used to assess access to various assistive technologies (specialized computers, in particular) for children with disabilities and how that access relates to various contextual factors and to children's outcomes.


Q26. Please add a Vocational/Employment category.


Response: The special education questionnaires submitted in the revised package will incorporate this recommended change.
