Case Studies of the Implementation of Kindergarten Entry Assessments

OMB: 1875-0273





August 22, 2014

U.S. Department of Education

PPSS TO 17: Case Studies of the Implementation and Use of Kindergarten Entry Assessments


Supporting Statement Part A: Justification

Third Draft OMB Package



Contract Number GS-10F-0554N; Order Number ED-PEP-11-O-0090

SRI Project P22275.431




Submitted to:


Erica Lee

Policy and Program Studies Service

Office of Planning, Evaluation and Policy Development

U.S. Department of Education

400 Maryland Avenue, SW

Washington, DC 20202



Prepared by:

Teresa McCaffrey, Kaily Yee, Shari Golan, Elizabeth Mercier, and Tracy Huang

SRI International




Contents

A. Justification

A.1. Circumstances Requiring the Collection of Data

A.2. Purposes and Use of Data

A.3. Use of Information Technology to Reduce Burden

A.4. Efforts to Identify and Avoid Duplication

A.5. Efforts to Minimize Burden on Small Businesses or Other Entities

A.6. Consequence if the Information Is Not Collected or Collected Less Frequently

A.7. Special Circumstances

A.8. Federal Register Comments and Persons Consulted Outside of Agency

A.9. Payment to Respondents

A.10. Assurances of Confidentiality

A.11. Justification for Questions of a Sensitive Nature

A.12. Estimate of Information Collection Burden

A.13. Estimate of Cost Burden to Respondents

A.14. Estimate of Annualized Costs

A.15. Change in Annual Reporting Burden

A.16. Plans for Tabulation and Publication of Results

A.17. Expiration Date Omission Approval

A.18. Exceptions

References


Appendices

A. Consent Form

B. Recruitment Materials

C. Document Review Protocol

D. State Version Interview Protocol

E. District Version Interview Protocol

F. Principal Interview Protocol

G. Kindergarten Teacher Interview Protocol

H. Test Administrator Interview Protocol

I. Preschool Director Interview Protocol

J. Professional Development Provider Interview Protocol




Exhibits

Exhibit 1. Evaluation Questions

Exhibit 2. National Experts in the Technical Work Group

Exhibit 3. Expected Time Burden by Participant

Exhibit 4. Estimated Cost of Participants’ Time





Supporting Statement, Part A
Paperwork Reduction Act Submission

A. Justification

A.1. Circumstances Requiring the Collection of Data

In the last decade, interest has increased in implementing Kindergarten Entry Assessments (KEAs) to better understand individual children’s strengths and needs, plan instruction, pinpoint areas for program improvement and staff development, and evaluate the effectiveness of PreK programs to inform program planning and improvement. To this end, through the Race to the Top – Early Learning Challenge (RTT-ELC) initiative, the federal government made funds available for states to propose a comprehensive approach to improving the quality of early childhood education programs, including the use of a KEA. All 20 states that have received RTT-ELC grants to date plan to implement a KEA statewide by the end of their grant. In addition to the RTT-ELC initiative, the Department’s competition for Enhanced Assessment Grants (EAG) funded three consortia of states to further develop existing KEAs, extending federal funding for KEA development and implementation to another nine states and the District of Columbia. Finally, some states that did not receive federal funding are also developing KEAs. However, while many states are working toward KEA adoption and implementation, currently “no state in the nation has a complete comprehensive assessment system” (Schilder & Carolan, 2014).

The types of KEAs and their stages of development vary across states. States that received RTT-ELC funding and chose to implement a statewide KEA are required to follow the federal guidelines set out in the RTT-ELC applications, including use of a KEA that covers all the Essential Domains of School Readiness, follows the guidelines of the National Research Council reports on assessment of young children, is aligned with the state’s Early Learning and Development Standards, and is valid and reliable for the intended purposes and target populations, including English learners and children with disabilities and other special needs. The RTT-ELC also mandates that the KEA be administered during the first few months of kindergarten, that the results be reported to statewide data systems, and that the KEA be funded in significant part with federal or state resources other than those available under the RTT-ELC grant. However, because the Department does not require a specific assessment tool, mode of administration, teacher training requirements, or ways of sharing data and results, KEA implementation is likely to vary greatly across states.

The next few years are a critical juncture in the selection and implementation of KEAs. States are making numerous decisions about the implementation of selected KEAs, including the selection of the assessment, choices about its administration, and decisions about how KEA data and results will be used (Golan, Petersen, & Spiker, 2008). States will also consider the primary purposes each has identified for its KEA, what type of KEA to adopt or develop, how to meet the needs of culturally and linguistically diverse students and students with disabilities, how to support the administration of the KEA and the use of the data it produces to inform instruction, how the KEA will become part of a comprehensive assessment system (Howard, 2011), and how to ensure alignment with the state’s intended purpose.

The purpose of the KEA implementation case studies is to document the processes, accomplishments, challenges, and solutions of four states implementing KEAs, and to share what state, district, and school personnel have learned with federal and state policymakers, and practitioners in the field. These findings will support the technical assistance efforts of the U.S. Department of Education (the Department) regarding the implementation of KEAs across the nation.

The study’s evaluation questions are organized according to four overarching questions regarding KEA adoption, implementation, use of results to inform practice and policy, and lessons learned (Exhibit 1). Sub-questions are intended to inform the development of data collection protocols and report outlines.

Exhibit 1. Evaluation Questions

I. How have KEAs been developed and adopted by four states that were early to adopt a comprehensive KEA?

1. How were KEAs developed or chosen (e.g., timeline, participants, content decisions)?

a. How were various stakeholders engaged in selecting a KEA?

b. For what key purposes was the KEA selected or developed?

c. What key criteria were used in the selection of the KEA (e.g., cost, burden, use with certain populations, connection to a PreK or K–3 assessment)? What were the key concerns of stakeholders?

d. Why did the state choose an existing KEA measure or decide to develop its own (or some components of its own)? What alternative measures were considered and why were they not selected?

2. What are the characteristics of the specific KEAs selected?

a. What skills and domains are assessed?

b. What evidence or types of information are collected (e.g., observation, direct assessments, parent report)?

c. Is use of the KEA mandatory or optional for districts? What factors led to that decision?

3. Was the KEA pilot- and field-tested? What has been learned about its validity and reliability for its intended purposes and populations? What implications are there for its use and further development?

4. How is the KEA funded (up-front and annually)? What is the overall and per-student cost to implement the KEA? How will funding be sustained?

II. How are KEAs being implemented by 12 school districts within four states that were early adopters?

5. What general information and training on the administration of the KEA are provided to teachers, administrators, and parents?

a. What types of training models are used?

b. How well prepared were people to administer the KEA? What additional support would have been helpful?

6. How are KEAs administered?

a. When are KEAs administered?

b. Who administers the KEAs, and what training and support did those individuals receive?

c. How much time do the KEAs require of district staff, school administrators, and teachers to implement?

d. Are families involved in providing information? If so, how?

e. How is technology used to support KEA administration and scoring?

f. What strategies are used to help ensure that children with disabilities are accurately assessed?

g. What strategies are used to help ensure that English learners are accurately assessed?

h. How does the KEA fit with existing district and PreK assessments?

i. What aspects of the KEA implementation are standardized across districts and schools, and what aspects involve local discretion?

7. How are KEA data shared with the state? (e.g., Are results added to a student longitudinal data system or another permanent student record?)

8. How do states and districts monitor and evaluate KEA implementation?

III. How do these four states and their districts and schools communicate and use KEA results to inform policy and practice?

9. Who receives the results of the KEAs (e.g., parents, PreK directors, kindergarten teachers, principals and early childhood administrators, the SEA)? How are results communicated to stakeholders?

a. What information do they receive about the results?

b. Who has access to results and in what form and on what schedule?

10. What supporting information or training do stakeholders receive to help them interpret or use the results?

11. How do various stakeholders use KEA results?

a. How aware of KEAs are PreK program directors (including PreK, Head Start, community-based PreK programs, child care)? Do KEAs inform PreK program improvement?

b. How do teachers use KEA results to inform their classroom practice?

c. How do principals use KEA results to inform decisions at the school level (e.g., classroom assignment for individual children, budget allocations, curriculum, or professional development)?

d. How do districts use KEA results to inform curriculum or professional development?

e. How do states review and use KEA results by student demographic characteristics, school district, or PreK program?

f. How do states use KEA results to inform PreK policy, curriculum, or other programmatic decisions?

12. Are there any policies regarding how KEA data should not be used (e.g., determine promotion or prevent kindergarten entry in any way)? If so, how are these policies communicated and enforced?

IV. What lessons did states, districts, and schools learn about KEA adoption, implementation, and uses?

13. What lessons can states learn about KEA adoption? For example, what processes were helpful, and what challenges needed to be overcome?

14. What lessons can states, districts, and schools learn about KEA implementation? For example, what has gone well and what has been challenging with implementation according to state, district, and school staff? What efforts have staff undertaken to overcome challenges?

15. What lessons can states, districts, and schools learn about using KEA results to inform policy and practice?



A.2. Purposes and Use of Data

The purpose of the case studies is to document the processes, accomplishments, challenges, and solutions of four states that are early adopters of KEAs and to share what they have learned with federal and state policymakers and the field. In consultation with the Departments of Education and Health and Human Services, the study team used the following criteria to narrow down the number of states for inclusion in the study:

  1. The state’s KEA covers all five domains of school readiness.

  2. The state is implementing its KEA in the beginning of kindergarten and not at the exit of PreK.

  3. The state is planning to be in full implementation of KEA by fall 2014.

  4. The state is not part of the North Carolina KEA-EAG consortium to avoid a conflict of interest. SRI International, the principal researcher on this Task Order, is helping North Carolina and its nine partner states to enhance their KEA as part of their KEA-EAG.

In addition to meeting these fundamental criteria, the Department and HHS further recommended that at least one of the selected states represent one of the following:

  1. A RTT-ELC grantee

  2. A non–RTT-ELC grantee state, if possible

  3. An EAG state

  4. A user of a commercially available KEA assessment tool

  5. A user of a KEA developed by the state

The number of states, districts, and schools follows the guidelines set by the Department in its original performance work statement.

The data collected during this study will support the Department’s technical assistance efforts and inform KEA efforts across the nation.


A.3. Use of Information Technology to Reduce Burden

The study team will use information technology to reduce burden on selected states, school districts, and schools in two ways. First, the study team will access state, district, and/or local school websites to gather information about the states, school districts, and schools (e.g., demographic data, publicly available information on KEA implementation). Second, the study team will use the extant information to refine and tailor interview protocols for each site so that the interviews focus only on issues not already covered in documents.

To schedule interviews, the study team will communicate with potential state, district, and school-level respondents primarily via email. The study team will digitally record phone and in-person interviews with the permission of the respondent. The recordings will help reduce errors in field notes by capturing respondents’ verbatim responses, thereby minimizing the study team members’ requests for clarifications after the site visit.

A.4. Efforts to Identify and Avoid Duplication

Research on KEA implementation is limited, and no existing federal collection effort gathers the level of detailed information that this study will collect. The study team will coordinate with the relevant offices within the Departments of Education and Health and Human Services to minimize burden on participants and avoid duplication of effort. For example, the Institute of Education Sciences (IES) is conducting a study of Tiered Quality Rating and Improvement Systems (TQRIS) in the nine RTT-ELC states awarded grants in 2011. As part of that study, IES is assessing the feasibility of conducting a correlational study of TQRIS ratings and child outcomes on KEAs. The IES study will interview nine state officials (from the states awarded grants in 2011) to get broad information on the KEA data available for a correlational analysis. While the SRI study team will likely collect data from some of the same individuals, the SRI study will probe issues about KEA implementation more deeply.

A.5. Efforts to Minimize Burden on Small Businesses or Other Entities

The study team anticipates no burden on small businesses because data collection will be limited to state and local education agencies and schools.

A.6. Consequence if the Information Is Not Collected or Collected Less Frequently

If the proposed information is not collected, the federal government will miss the opportunity to provide timely and practical information to local and state policymakers and school and district personnel on the implementation of KEAs, including what is working well and the challenges associated with this work and strategies for overcoming them. The federal government will also lose the opportunity to have information that may help it provide more targeted and effective technical assistance to states on KEA implementation.

A.7. Special Circumstances

No special circumstances apply to this data collection.

A.8. Federal Register Comments and Persons Consulted Outside of Agency

a. Federal Register Announcement. A 60-day notice to solicit public comments was published in the Federal Register on May 14, 2014 (Volume 79, Number 94). During the 60-day comment period, the Department received comments from four individuals or organizations. One comment was unrelated to the study, and a second commenter misunderstood the burden estimate in A.12 (the Department communicated with that commenter to clarify the burden). Two organizations provided substantive comments on the study design and protocol questions. Each organization’s comments are described below, along with the Department’s response.

Comments: The commenter indicated it supported the Department’s study of KEAs. The commenter suggested a number of selection criteria for the states and districts the Department selects for study. For example, the commenter suggested studying states that focus on a “whole child” approach in their KEA, aligning KEAs with other child assessments, providing professional development support around KEAs, and using KEA data to inform classroom instruction and other decisions.

Department response: The Department agrees that a focus on the whole child is vital, and toward that end the Department will study states with KEAs that cover all five domains of school readiness. However, the Department does not have information on how and whether states and districts are focusing on the other topics suggested by the commenter; rather, the purpose of the study is to find out more about these exact topics. The Department’s basic selection criteria include the following:

  1. The state’s KEA is comprehensive and covers all five domains of school readiness.

  2. The state is implementing its KEA in the beginning of kindergarten and not at the exit of prekindergarten (PreK).

  3. The state is planning to be in full implementation of KEA by fall 2014.

  4. The state is not part of the North Carolina KEA-EAG consortium to avoid a conflict of interest. SRI International, the principal researcher on this Task Order, is helping North Carolina and its nine partner states to enhance their KEA as part of their KEA-EAG.

Only five states met all four basic criteria, two of which are using the same assessment. For additional information about state and district selection, please see Supporting Statement, Part B.1.

Comments: The commenter indicated it supported the Department’s study of the KEAs. The commenter noted the importance of the Department’s study being sensitive to the policy contexts in different states around the development and implementation of KEAs and other assessments used in kindergarten. Additionally, the commenter recommended selecting states that had various experiences and selecting more than four states (even at the expense of visiting fewer districts and schools in each state). The commenter also offered a number of suggestions for specific questions that the study protocols should address.

Department response: The KEA study will provide information on policy contexts and the selection, implementation, and use of KEA assessments and data. The study will also gather information on other assessments states, districts, and schools use during kindergarten. The Technical Working Group discussed the number of states the study should include and the experts agreed that, given the resources, visiting four states is appropriate. Additionally, the Department used four basic criteria to decide which states to include (see above for criteria) and only five states met all four basic criteria, two of which are using the same assessment.

The Department appreciates the specific questions that the commenter suggested. The Department believes that most of the suggested questions are already included in the protocols. However, the Department added the following questions to the protocols to address the commenter’s recommendations:

State protocol

5.b.i. Were there any purposes for which your state originally wanted to use its kindergarten entry assessment, but [local KEA name] was not valid for that purpose?



2.h. How does [local KEA name] fit in with other assessments your state is using (e.g., Comprehensive Assessment System for early learning in the RTT-ELC states)?

2.i. How does [local KEA name] fit with other early learning initiatives or efforts your state has underway (e.g., QRIS, pre- or in-service PD, expansion of PreK or full-day kindergarten)?

28. How do state education and other state-level administrators (e.g., DHHS or Dept. of Early Learning) use [local KEA name] results to inform decisions about issues statewide or those concerning specific districts or preschool programs?

District protocol

30. What have been the biggest challenges with trying to use the [local KEA name] results?

(Use probes below.)

  • Lack of alignment with standards? If so, how? [NEW probe]


Principal protocol

28. What have been the biggest challenges with trying to use the [local KEA name] results?

(Use probes below.)

  • Lack of alignment with standards? If so, how? [NEW probe]


Kindergarten teacher protocol

1. Is your kindergarten class a full-day or a half-day program?

b. Consultation Outside the Agency. SRI received consultation through a technical work group (TWG) of five national experts with knowledge of assessment of young children, transition and school readiness, bilingual development and diverse cultures, and assessment of children with disabilities and other special needs, as well as through school administrators and kindergarten and PreK teachers (see Exhibit 2). The TWG will meet in person twice. At the first TWG meeting, the study team obtained input, comments, and feedback on (1) the protocols for state-, district-, and school-level interviews and document review; (2) the four recommended case study states; and (3) district and school selection criteria. Based on TWG input, SRI revised the study design to remove observations of a small number of professional development trainings in exchange for telephone interviews with 12 professional development providers, 3 per state. Additionally, SRI revised the study design to include 2 additional state-level respondents (for a total of 7 per state), as the TWG felt it was important to speak with more state-level respondents to better understand how the KEA was selected. Finally, the TWG recommended speaking with more preschool directors. As such, SRI revised the study design to increase the number of interviews with preschool directors from 24 to 72 and to no longer interview preschool teachers (changed from 24 interviews to 0). Compared with the 60-day Supporting Statement A, these revisions increase total public burden by 38 hours, or about 13 hours annually. The focus of the second meeting will be to obtain input, comments, and suggestions on the key findings in the final report.


Exhibit 2. National Experts in the Technical Work Group

Name | Professional Affiliation
Samuel J. Meisels | Director, Buffett Early Childhood Institute, University of Nebraska
Megan McClelland | Associate Professor, College of Public Health and Human Sciences, Oregon State University
Eugene Garcia | Professor Emeritus, Arizona State University
Mary McLean | Professor and Director of the Early Childhood Research Center, University of Wisconsin-Milwaukee
Jason Sachs | Director of Early Childhood, Boston Public Schools

A.9. Payment to Respondents

The study team will provide preschool directors a $25 gift certificate to incentivize their participation in the study. No payment to other respondents will be offered.

A.10. Assurances of Confidentiality

The study team will adhere to federal rules regarding the protection of human subjects in research. The study team has a duty to protect all information but particularly anything sensitive or potentially embarrassing to individuals. The following provisions will apply on this project:

  • Responses to this data collection will be summarized in an aggregate manner (e.g., across all schools in a district) or will be used to provide examples of KEA implementation in a manner that does not associate responses with a specific individual. The study team may refer to the generic title of an individual (e.g., “district KEA coordinator”), but no individual, school, or district will be named.

  • As part of the case study training, all members of the study team will be trained on data confidentiality. Specifically, the study team members will be trained on how to store data without individual names and how to discuss interview and other case study data only within a team context for analysis purposes.

  • As part of obtaining consent, study team members will inform each respondent that his or her participation in the project is voluntary, that respondents may cease participation at any time during the interview, and that their individual responses will be kept confidential to the extent possible, except as may be required by law. The study team will provide this information orally as well as in writing in the consent form. All respondents will be asked to sign the relevant consent form (see Appendix A for a copy of the interview participant consent form).

  • The voluntary nature of project participation, the confidentiality provisions, and consent forms are subject to and overseen by SRI’s Human Subjects Committee for human subjects research.

  • All electronic data will be stored on SRI’s secure server. Access to the server is password protected; passwords must meet strength requirements and be changed at regular intervals. Each user’s access is limited to what the network administrator authorizes.

  • Names and addresses will be dissociated from the data as they are entered into the study team’s database and will be used for data collection purposes only. A unique identification number will be assigned to each individual and site as data are collected; these numbers will appear in printouts that display the data and in analysis files, and will also be used for linking data (see the illustrative sketch following this list). No names, addresses, or other information that could connect an individual to responses will be used in interview write-ups or case study reports. The study team will not provide information that associates responses or findings with a subject or district to anyone outside the study team.

  • All electronic recordings of interviews, interview notes, and other project-related documents will be stored in secure areas that are accessible only to authorized staff members.

  • All interview notes, forms, and other hard-copy documents containing identifiable data will be shredded as soon as the need for this hard copy no longer exists.

  • All basic computer files will be backed up on secure servers to allow for file restoration in the event of unrecoverable loss of the original data. Backup files will be stored under secure conditions in an area separate from the location of the original data.
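
To illustrate the unique-identification-number provision above, the following is a minimal sketch of how identifying information can be held apart from analysis data. All names, fields, and functions in the sketch are hypothetical illustrations, not the study team’s actual database design:

```python
# Illustrative sketch only: identifying information is held in a separate,
# restricted key file; analysis files carry only the study-assigned ID.
# All names and fields below are hypothetical.
import itertools

_id_counter = itertools.count(1)
id_key_file = {}    # restricted access: study ID -> identifying info
analysis_file = []  # used for analysis: study IDs and responses only

def register_respondent(name: str, site: str) -> str:
    """Assign a unique study ID; the name/site link is stored only in the key file."""
    study_id = f"R{next(_id_counter):04d}"
    id_key_file[study_id] = {"name": name, "site": site}
    return study_id

def record_response(study_id: str, responses: dict) -> None:
    """Store interview responses under the study ID only; no names or addresses."""
    analysis_file.append({"id": study_id, **responses})

rid = register_respondent("Jane Doe", "District A")  # hypothetical respondent
record_response(rid, {"role": "kindergarten teacher", "uses_kea_data": True})
```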

Because the objective of the project is to provide information that is useful to other practitioners or policymakers striving to improve KEA implementation, it would be valuable to disclose the state names. Naming the states will be helpful to others who want to learn more about how the KEA has been implemented and explore whether they might adopt or adapt the strategies or practices to their local contexts. However, as noted above, no individual, school, or district will be named.

As the lead in the data collection for the study, SRI will adhere to the Multiple Projects Assurance with the Office of Protection from Research Risks (OPRR) maintained by SRI. SRI’s Assurance number is M-1088. SRI’s Human Subjects Committee is its official Institutional Review Board, charged with responsibility for the review and approval of all research involving human subjects. SRI clears all data collection protocols through the internal Human Subjects Committee as a safeguard to protect the rights of research subjects.

A.11. Justification for Questions of a Sensitive Nature

The data collection instruments do not include sensitive questions. However, given that the subpopulation of concern includes English learner students and students living at various levels of poverty, it is possible that information on students’ financial, family, and social needs may arise during data collection. In SRI’s experience, teachers and administrators are very careful not to disclose confidential information because they deal with student privacy concerns daily. The study team will remind them at the beginning of interviews not to provide sensitive information about themselves or others by name.

A.12. Estimate of Information Collection Burden

This request relates to two different data collection activities: (1) collection and review of documents (e.g., planning guides, state or local reports, technical or training manuals) and (2) interviews with staff at the state, district, and school levels.

The document review should impose a minimal burden on respondents. The document review will primarily involve collecting state and local documents from websites (e.g., KEA plans and requirements, overviews, reports, and training materials), from the Department (e.g., RTT-ELC proposals and progress reports), and from key informants from the four participating states, districts, and schools (e.g., general information, training materials, and sample reports about KEA results).

To learn about KEA implementation from various perspectives, the study team will interview seven state-level respondents in each of the four states by phone, one professional development (PD) provider linked to each case study district by phone, and three preschool program directors linked to each case study school by phone. The study team will conduct in-person interviews with three participants per district and with four school-level participants per school. Across all levels, the study team will conduct a total of 244 interviews (encompassing 24 schools in 12 districts in four states). Gathering different perspectives from all levels of participation is critical for the study team to understand, authenticate, and convey how different system actors experience the implementation of the KEAs. For instance, a state official could discuss how the state intended for kindergarten teachers to use KEA data to inform their classroom practice. However, the kindergarten teachers of that state may not be using the data in the intended way because of a lack of training or logistical problems associated with the reporting of results. If interviews were conducted only with the state officials, data collection would not adequately capture and triangulate information about the kindergarten teachers’ challenges.

In addition to the burden placed on participants who will be interviewed, there will be administrative burden on individuals (e.g., administrative assistants and secretaries at the state, district, and school levels) who assist the study team in collecting relevant documents for review and setting up interviews with participants. Exhibit 3 displays the projected time burden on the various participants.

Exhibit 3. Expected Time Burden by Participant

Participant | Number of Participants (one response per participant) | Time per Participant (minutes) | Total Time Burden (hours)

State-level participants
State early childhood specialist | 4 | 90 | 6
Other state-level respondents | 24 | 90 | 36
Administrative assistance in providing documents and scheduling interviews | 4 | 60 | 4

District-level respondents
Superintendent | 12 | 60 | 12
Early learning director | 12 | 60 | 12
Assessment director | 12 | 60 | 12
Professional development provider | 12 | 60 | 12
Administrative assistance in providing documents and scheduling interviews | 12 | 60 | 12

School-level respondents
Principal | 24 | 45 | 18
Kindergarten teachers | 48 | 45 | 36
PreK directors | 72 | 45 | 54
Other test administrators | 24 | 45 | 18
Administrative assistance in providing documents and scheduling interviews | 24 | 90 | 36

Total | 284 | | 268
Annualized basis | 95 | | 89 hours and 20 minutes
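
The arithmetic behind Exhibit 3 can be checked with a short script. The following is a minimal sketch, not part of the study’s instruments; it assumes the standard three-year OMB clearance period when annualizing:

```python
# Sketch: reproduce the Exhibit 3 burden arithmetic.
# Annualization assumes a standard three-year OMB clearance period.
rows = [
    # (participant group, number of participants, minutes per participant)
    ("State early childhood specialist", 4, 90),
    ("Other state-level respondents", 24, 90),
    ("State administrative assistance", 4, 60),
    ("Superintendent", 12, 60),
    ("Early learning director", 12, 60),
    ("Assessment director", 12, 60),
    ("Professional development provider", 12, 60),
    ("District administrative assistance", 12, 60),
    ("Principal", 24, 45),
    ("Kindergarten teachers", 48, 45),
    ("PreK directors", 72, 45),
    ("Other test administrators", 24, 45),
    ("School administrative assistance", 24, 90),
]

total_participants = sum(n for _, n, _ in rows)
total_hours = sum(n * minutes for _, n, minutes in rows) / 60

print(total_participants)             # 284
print(total_hours)                    # 268.0
print(round(total_participants / 3))  # 95 participants per year
print(round(total_hours / 3, 2))      # 89.33 hours = 89 hours and 20 minutes
```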

Although respondents incur no direct monetary costs for this activity, their time is valuable, as estimated in Exhibit 4. The salary used to calculate hourly rates for each group of respondents is based on Bureau of Labor Statistics data from April 2014.

Exhibit 4. Estimated Cost of Participants’ Time

Participant | Total Time Burden (hours) | Hourly Wage (estimated $)1 | Total Cost (estimated $)2

State-level respondents
State early childhood specialist | 6 | 45 | 270
Other state-level respondents (e.g., state superintendent, director of assessment, director of professional development, and state data systems director) | 36 | 45 | 1,620
Administrative assistance in providing documents and scheduling interviews | 4 | 16 | 64

District-level respondents
Superintendent | 12 | 45 | 540
Early learning director | 12 | 45 | 540
Assessment director | 12 | 45 | 540
Professional development provider | 12 | 30 | 360
Administrative assistance in providing documents and scheduling interviews | 12 | 16 | 192

School-level respondents
Principal | 18 | 45 | 810
Kindergarten teachers | 36 | 25 | 900
PreK directors | 54 | 34 | 1,836
Test administrators | 18 | 25 | 450
Administrative assistance in providing documents and scheduling interviews | 36 | 16 | 576

Total | 268 hours | | $8,698
Annualized costs | 89 hours and 20 minutes | | $2,899


1 All estimates are from the Bureau of Labor Statistics (BLS), data.bls.gov, retrieved on April 24, 2014. The hourly wages of state-level and district-level respondents and principals were all derived from BLS occupation code 11-9032 – “Education Administrators, Elementary and Secondary Schools.” The hourly wages for the professional development providers were derived from BLS occupation code 25-9031 – “Instructional Coordinators.” The hourly wages for the kindergarten teachers and school-based test administrators were derived from BLS occupation code 25-2010 – “Preschool and Kindergarten Teachers.” The hourly wages for PreK directors were derived from BLS occupation code 11-9031 – “Education Administrators, Preschool and Childcare Center/Program.” The hourly wages of individuals providing administrative assistance were all derived from BLS occupation code 43-6014 – “Secretaries and Administrative Assistants, Except Legal, Medical, and Executive.”

2 Costs are rounded to the nearest dollar.
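
The cost figures in Exhibit 4 follow directly from the burden hours and the estimated hourly wages. A minimal sketch of the same arithmetic (again assuming a three-year clearance period for the annualized figure):

```python
# Sketch: reproduce the Exhibit 4 cost arithmetic (hours x estimated hourly wage).
costs = [
    # (participant group, total burden hours, estimated hourly wage in dollars)
    ("State early childhood specialist", 6, 45),
    ("Other state-level respondents", 36, 45),
    ("State administrative assistance", 4, 16),
    ("Superintendent", 12, 45),
    ("Early learning director", 12, 45),
    ("Assessment director", 12, 45),
    ("Professional development provider", 12, 30),
    ("District administrative assistance", 12, 16),
    ("Principal", 18, 45),
    ("Kindergarten teachers", 36, 25),
    ("PreK directors", 54, 34),
    ("Test administrators", 18, 25),
    ("School administrative assistance", 36, 16),
]

total_cost = sum(hours * wage for _, hours, wage in costs)
print(total_cost)             # 8698
print(round(total_cost / 3))  # 2899, the annualized cost in dollars
```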

A.13. Estimate of Cost Burden to Respondents

There is no capital or start-up cost component to these data collection activities, nor is there a total operation, maintenance, or purchase cost associated with the study.

A.14. Estimate of Annualized Costs

The estimated annualized cost of the study to the federal government is $387,238. This estimate is based on the total contract cost of $871,286, amortized over a 27-month performance period. It includes costs already invoiced, plus budgeted future costs that will be charged to the government for the study redesign, sampling, data collection, analysis, and reporting.
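
The annualized figure follows from straight-line proration of the total contract cost over the 27-month performance period, as this brief sketch shows:

```python
# Sketch: annualize the total contract cost over the performance period
# (straight-line proration assumed).
total_contract_cost = 871_286  # dollars
performance_months = 27

annualized_cost = total_contract_cost * 12 / performance_months
print(round(annualized_cost))  # 387238
```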

A.15. Change in Annual Reporting Burden

This is a new study/data collection.

A.16. Plans for Tabulation and Publication of Results

Two types of reports will be generated from the study: a set of internal state-level reports and an external cross-state report.

The study team will produce four state-level summary reports for internal use by the Department for informing technical assistance, and by the study team to highlight potential key themes for the final report. The state-level reports will integrate data from the document reviews and interviews with state, district, and school staff.

The study team will also prepare a publicly available final cross-state report on lessons from the field that will integrate the results from all data collection activities, organized by the evaluation questions in Exhibit 1. The report will identify cross-site lessons, revealing how practices work on the ground, implementation successes and challenges, and contextual factors that indicate the applicability of the lessons learned to other states and contexts. An introductory section will briefly describe the participating states, with more demographic details at the state, district, and school levels supplied in an appendix.

Although other themes will likely emerge from the data analysis and reviews by the Department and TWG members, the final report is expected to contain the following sections:

Executive Summary: Summary of the study methods and findings, including a table summarizing key findings from each participating state.

Chapter 1. Background information: The background and need for the study, including the context and interest in KEA assessments, the research questions, and the research design and analysis methods; short description of each participating state’s demographic profile and KEA implementation status.

Chapter 2. Selection, development, and characteristics of KEAs: A description and analysis of cross-cutting themes and variations in selection and adoption of the KEA that considers how and why the KEA was selected, modified, or created; the goals of using the KEA; the engagement of stakeholders in the selection process; constructs and content measured and types of evidence collected; validity with diverse learners; piloting and field-testing of the KEA; and funding levels and strategies to implement and sustain the use of KEA.

Chapter 3. Administration of KEAs: An account of the differences and similarities in implementation of the KEAs across states. This includes training for administrators and teachers on KEA administration, timing of the assessments, who collects data, amount of time spent on KEA administration, involvement of families in providing data, use of technology for implementation, strategies for accurately assessing diverse learners, consistency of use of KEA across districts and states, process for submitting KEA data to the state, and monitoring of KEA implementation.

Chapter 4. Use of KEA data: Comparisons and contrasts in the ways results are shared with various stakeholders, training and supports for using KEA results, the use of data by various stakeholders, and safeguards to prevent misuse.

Chapter 5. Lessons learned and implications: Lessons learned from the case studies on state, district, and school efforts to implement KEAs, including factors that supported and hindered KEA implementation; implications for policymakers, the Department’s technical advisors, assessment developers, early childhood administrators and teachers, and researchers; and next steps in KEA implementation.

To be useful to busy district leaders and state policymakers, the report will be practical and user friendly. Key findings will be distilled, stated clearly and succinctly, and supported with sufficient information to give readers confidence in the findings without overwhelming them with too many details and examples. Practical information, such as the challenges specific cases faced, their attempts to overcome them, and how KEA data were used to inform policy and practice, will also provide practitioners with a range of ideas to consider for their own districts and schools. The report will be posted on the Department of Education website, and the Department and SRI will identify other possible channels for dissemination. The planned release date for the final report is April 8, 2016.

A.17. Expiration Date Omission Approval

Not applicable. All data collection instruments will include the OMB data control number and data collection expiration date.

A.18. Exceptions

Not applicable. No exceptions are requested.


References

Golan, S., Petersen, D., & Spiker, D. (2008). Kindergarten assessment process planning report. Menlo Park, CA: SRI International.

Howard, E. C. (2011). Moving forward with kindergarten readiness assessment efforts: A position paper of the Early Childhood Education State Collaborative on Assessment and Student Standards. Retrieved from http://www.ccsso.org/Resources/Publications/Moving_Forward_with_Kindergarten_Readiness_Assessment_Efforts.html

Schilder, D., & Carolan, M. (2014). State of the States policy snapshot: State early childhood assessment policies. New Brunswick, NJ: Center on Enhancing Early Learning Outcomes.





