
Evaluation of the Head Start Designation Renewal System

OMB: 0970-0443





Evaluation of the Head Start Designation Renewal System



OMB Information Collection Request

New Collection


Supporting Statement

Part A

August 2013

Submitted By:

Office of Planning, Research and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


7th Floor, West Aerospace Building

370 L’Enfant Promenade, SW

Washington, D.C. 20447


Project Officers:


Amy Madigan










Contents


JUSTIFICATION

Appendix A: Conceptual Model and Description of the Head Start Designation Renewal System

Appendix B: Independent Quality Instruments

Appendix B1: Classroom Observation Instruments

Appendix B2: Quality Management Assessment

Appendix B3: Health and Safety

Appendix B4: Secondary Data to Assess Financial Health

Appendix C: Quality Measures Follow Up Interview: Teachers

Appendix D: Quality Measures Follow Up Interview: Center Directors

Appendix E: Quality Measures Follow Up Interview: Program Directors

Appendix F: DRS Telephone Interview: Program Directors

Appendix G: DRS In-Depth Interview: Agency Directors

Appendix H: DRS In-Depth Interview: Program Directors

Appendix I: DRS In-Depth Interview: Policy Council/Governing Body

Appendix J: DRS In-Depth Interview: Program Managers

Appendix K: Competition In-Depth Interview: Agency and Program Directors

Appendix L: Competition In-Depth Interview: Policy Council/Governing Body

Appendix M: Competition In-Depth Interview: Program Managers

Appendix N: Competition Data Capture Sheet

Appendix O: Recruitment and Consent

Appendix O1: Grantee-Level Recruitment for On-Site Assessments (RQ1)

Appendix O2: Center-Level Recruitment for On-Site Assessments (RQ1)

Appendix O3: Teacher Consent Form for On-Site Assessments (RQ1)

Appendix O4: Telephone Interview Recruitment (RQ2)

Appendix O5: Recruitment for DRS In-Depth Interviews (RQ2)

Appendix O6: Recruitment for Competition In-Depth Interviews (RQ3)

Appendix O7: Evaluation of the Head Start DRS Frequently Asked Questions

Appendix P: 60-Day Federal Register Notice

Appendix Q: Certificate of Review from the Institutional Review Board (IRB) of the Urban Institute

Appendix R: Certificate of Review from the Institutional Review Board (IRB) of the University of North Carolina-Chapel Hill

Appendix S: Staff Confidentiality Pledges



A1. Necessity for the Data Collection


The Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (HHS) seeks approval to conduct an evaluation of the Head Start Designation Renewal System (DRS). The purpose of the evaluation is to understand whether the DRS is working as intended: as a valid, reliable, and transparent method for identifying high-quality grantees that can receive continuing five-year grants without competition (versus those that are not high quality and must compete for renewed funding), and as a system that encourages overall quality improvement. It also seeks to understand the circumstances in which the system works more or less well, and the contextual, demographic, and program factors and program actions associated with how well it is working.


Study Background


Since the program’s inception in 1965, Head Start grantees have typically been given grant awards with an indefinite project period. That is, after they competed for the initial award, grantees submitted non-competitive continuation applications for subsequent budget periods. Under the Improving Head Start for School Readiness Act of 2007, Head Start grants will be awarded for a five-year period. Furthermore, the Act required the U.S. Department of Health and Human Services to develop a system for “designation renewal” in order to identify grantees that are delivering high-quality, comprehensive services to the children and families in their communities and thus would be eligible for another five-year non-competitive grant. If they are not delivering these high quality services, grantees are denied automatic renewal of their grant and must apply for continued funding through an open competition process.


Building upon the new legislative mandate in Section 641 of the Head Start Act, ACF published a rule, effective December 2011, laying out the details of the DRS. The core of the DRS is a set of seven conditions designed to assess whether existing grantees are delivering high-quality, comprehensive services. The DRS incorporates some existing and some newer oversight mechanisms into a single system, with the goals of supporting program planning and quality improvement and establishing a process for introducing competition in places where Head Start grantees are underperforming. The specific conditions include:

  1. A deficiency (i.e., a systemic or substantial failure) in meeting program performance standards resulting from a triennial, follow-up, or unannounced monitoring review.

  2. Classroom Assessment Scoring System (CLASS) scores below a minimum threshold or in the lowest 10% in Emotional Support, Classroom Organization, or Instructional Support.

  3. Failure to establish, analyze, and utilize program-level school readiness goals.

  4. Revocation of state or local license to operate.

  5. Suspension by ACF.

  6. Debarment from receiving funding from another Federal or State agency or disqualification from participation in the Child and Adult Care Food Program (CACFP).

  7. Determination from an annual audit of being at risk for failing to continue functioning as a "going concern."


If a grantee meets any one of these conditions, it is designated for competition. To date, most designations for competition have been due to either the deficiency or CLASS conditions.


The intended purpose of the DRS is to ensure "that children and families get the highest quality services possible" (ACF, 2011). The system is conceptualized as promoting quality improvement in Head Start through two possible mechanisms—incentivizing all grantees to make quality improvements (and thereby avoid competition) and replacing lower quality services with higher quality services in communities where grantees are designated for competition.


The Office of Head Start (OHS) announced the first cohort of grantees designated to compete for continued funding in December 2011 (ACF press release, December 19, 2011). Those 132 grantees, representing Cohort 1 of the DRS, had met the DRS deficiency condition over a span of two and a half years (June 2009-November 2011). The full complement of DRS conditions was applied beginning in December 2011. The second DRS cohort included a total of 122 grantees that met one or more of the seven conditions between December 2011 and September 2012 and were thus designated for competition. DRS Cohort 3 will include grantees meeting one or more of the seven conditions between approximately October 2012 and September 2013, with grantees expected to be notified in spring 2014 whether they will be required to compete. DRS Cohort 4 will include grantees meeting one or more of the conditions between approximately October 2013 and September 2014. Grantees either are designated to compete for their next 5-year grant or receive a 5-year grant through a noncompetitive application. By 2016, all Head Start grants are expected to be operating on a five-year cycle, and determinations regarding the renewal of those grants at the end of the five years will be based on the DRS.


Given the large scale of the change introduced by the DRS, it is critical to examine its implementation and how the system is meeting its goals of transparency, validity, reliability, and overall program quality improvement. In particular, there is a need to examine the validity of the DRS conditions, and designation as a whole, by looking at how the new DRS conditions relate to other independent measures of program quality. It also is critical to understand the mechanisms by which the DRS might affect program quality by learning more about local grantee understanding of and responses to the provisions of the DRS and by examining the outcomes of grant competitions.


To address these needs for information, in the Fall of 2012, the Office of Planning, Research and Evaluation (OPRE) in ACF awarded a contract to the Urban Institute (with subcontractor Frank Porter Graham Child Development Institute at the University of North Carolina-Chapel Hill) to design and execute an evaluation of the DRS. The evaluation aims to improve understanding of this major policy shift in Head Start and inform future decision making related to the implementation of the DRS.


The study will employ a mixed-methods design that integrates and layers administrative data and secondary data sources, observational assessments, and interviews to develop a rich knowledge base. The study proposes to use classroom observations and organizational assessments, teacher and director interviews, and health and safety checklists to collect data on program quality in approximately 560 classrooms and 300 centers in 70 grantees in the spring of 2014 in order to test the validity of the DRS conditions and measures in assessing program quality and identifying higher- and lower-quality grantees. In addition, interviews (phone interviews and semi-structured on-site interviews) with Head Start program directors and other key respondents, conducted in the spring and fall of 2014, will be used to learn more about how the DRS is working and grantee actions and responses to the DRS and the competitive process. Finally, the study proposes to collect and analyze summary data about the organizations applying for competitive grants in 2014 in order to further understand the nature of the competition prompted by the DRS.


Legal or Administrative Requirements that Necessitate the Collection


This is a discretionary data collection that falls under the authority of 42 U.S.C. 9844, section 649 of the Head Start Act, as amended.


A2. Purpose of Survey and Data Collection Procedures


Overview of Purpose and Approach


The evaluation purpose is to understand whether the DRS works as intended: as a valid, reliable, and transparent method for identifying high-quality grantees eligible for non-competitive five-year grants (versus those that are not high quality and must compete for renewed funding), and as a system that encourages overall quality improvement. The evaluation also seeks to understand the circumstances in which the system works more or less well, and the contextual, demographic, and program factors and program actions associated with how well it is working. In this way, the evaluation aims to improve understanding of this major Head Start policy shift and to inform future decision-making about the implementation of the DRS.


The study will employ a mixed-methods design that integrates and layers administrative data and secondary data sources, observational assessments, and interviews to develop a rich knowledge base about how the DRS is working and the circumstances where it works more or less well.


Research Questions


The DRS is conceptualized as promoting quality improvement in Head Start through two possible mechanisms—incentivizing all grantees to make quality improvements (to avoid competition) and replacing lower quality services with higher quality services through the competitive process. As indicated in the conceptual framework (Appendix A), to do this effectively the system must ensure that the indicators used to identify grantees for competition are valid, as well as sensitive enough to differentiate between lower and higher quality grantees. It is also reliant on how grantees perceive the system and respond to it during the quality assessment process (before designation), as well as after designation and during the competition process. Thus, the following three research questions motivate this study:

  1. How effective is the DRS in identifying higher and lower quality Head Start grantees?

  2. How have Head Start grantees understood and responded to the provisions of the DRS in terms of their efforts to improve program operations and quality?

  3. What does competition look like and how do programs respond in communities where Head Start grantees are designated for competition?


Introduction to the Literature Informing the Evaluation Design


Findings from the Head Start Impact Study (HSIS) have shown that Head Start is effective in delivering the intended services to disadvantaged children and families and improving child outcomes after the completion of Head Start (ACF, 2010). However, the findings also indicate that classroom quality and adherence to Head Start program standards varied across participating programs (ACF, 2010). This study reemphasized concerns expressed by GAO in 2005 that some low-performing grantees continued to operate Head Start grants despite documented low performance because their grant awards were not competed. The DRS was required by Congress, largely in response to these concerns. Therefore, it is important to determine the extent to which the DRS is working as intended to identify higher- versus lower-performing grantees, to incentivize efforts to improve program quality, and to replace lower-performing grantees through competition.


Research Question 1 (RQ1) assesses the extent to which the DRS correctly identifies lower and higher quality grantees. That is, it examines the validity of the DRS and its various conditions. Trochim and Donnelly (2008) define validity as “the best available approximation to the truth of a given proposition, inference, or conclusion” (p. 20). Furthermore, Hatry and Newcomer (2010) indicate that measurement validity refers to the question, “Are you accurately measuring what you intend to measure?” (p.558). The issue of measurement/construct validity is at the heart of the evaluation of the DRS.


Research Question 2 (RQ2) examines how the DRS may induce actions that contribute to quality improvement among grantees trying to avoid designation for competition. Previous research suggests that early care and education program director responses to program standards vary in concert with a range of contextual factors and with how the requirements and related incentives for quality improvement are perceived (Rohacek et al., 2010). Thus, RQ2 is designed to assess grantee understanding of the DRS provisions and to explore the types of quality-improvement activities reported to take place as a result of the DRS. This is important in the DRS evaluation as it captures data on the activities undertaken by grantees to improve quality (intermediate outcomes of the DRS) and explores the connection of those choices to the DRS.


Research Question 3 (RQ3) examines how the competitive process associated with the DRS may facilitate quality improvement in Head Start. It examines the extent to which the DRS introduces competition and attracts applicants for grants in communities where competition occurs (i.e., the level of competition). It also looks at grantee and new awardee perceptions and experiences with the competitive process. The DRS introduces a form of competition that Kincaid (1991) refers to as mediated competition – competition that is initiated and decided through the institutions of government, rather than through the market. Previous research suggests that mediated competitions generate little competition, but facilitate the formation of collaborative community partnerships and/or increase access to additional resources (Hefetz & Warner, 2011; Warner & Hefetz, 2003). Thus, RQ3 examines the extent to which the DRS generates competition in communities and how agencies or organizations in those communities offer competitive proposals through collaborations, additional resources, or other quality improvement strategies.


Study Design


The study will employ a mixed-methods design that integrates and layers administrative data and secondary data sources, observational assessments, and interviews to develop a rich knowledge base to address the three research questions. The evaluation approach utilizes both quantitative and qualitative data collection strategies: quantitative data in the form of observational assessments (RQ1) and reports of the key characteristics of competitors in the grant competition (RQ3); and qualitative data in the form of telephone interviews and site visits (RQ2 and RQ3). The site visits conducted for RQ2 and RQ3 allow for data collection from multiple members of the same organization. This is important because Head Start has many layers of management, directors, and governance members who share decision-making and resource allocation responsibilities that could influence their reactions and actions in response to the DRS. Figure A-1 illustrates the samples and data collection methods for each part of the study as described below.


The first part of the evaluation involves selecting a sample of 70 grantees (Sample A), with half forecasted to be designated for competition and half not designated, to examine the validity of the DRS (RQ1). We will sample the 70 grantees from the 434 grantees with center-based Head Start classrooms that are to be monitored in Fall 2013-Spring 2014. We will randomly select an average of 8 classrooms within each grantee (fewer classrooms in smaller grantees and more classrooms in larger grantees) to participate in observations of classroom quality. The centers of the selected classrooms will be included in the evaluation to measure issues that might lead to deficiencies related to standards pertaining to health and safety, family engagement, and management. At the grantee level, we will examine plans for assessing child progress and financial integrity. Measures of quality improvement and technical assistance will be collected at the center and grantee level. These measures are described below and in Appendices B-E.
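To illustrate the two-stage selection described above, a minimal sketch is shown below. It assumes a hypothetical sampling frame with illustrative size and designation-likelihood fields; the actual selection procedure, stratification, and allocation rules are described in Supporting Statement B.

```python
import random

random.seed(2014)

# Hypothetical frame: the 434 center-based grantees monitored in Fall 2013-Spring 2014.
# The likelihood and size fields below are illustrative placeholders only.
frame = [{"grantee_id": i,
          "likely_designated": random.random() < 0.3,
          "n_classrooms": random.randint(4, 60)}
         for i in range(434)]

def select_grantees(frame, n_total=70):
    """Draw half of the sample from grantees forecast to be designated for
    competition and half from those not forecast to be designated."""
    likely = [g for g in frame if g["likely_designated"]]
    unlikely = [g for g in frame if not g["likely_designated"]]
    return random.sample(likely, n_total // 2) + random.sample(unlikely, n_total // 2)

def allocate_classrooms(grantees, total=560, minimum=4, maximum=12):
    """Allocate a roughly 560-classroom budget in proportion to grantee size
    (fewer classrooms in smaller grantees, more in larger ones), averaging
    about eight classrooms per grantee."""
    total_size = sum(g["n_classrooms"] for g in grantees)
    allocation = {}
    for g in grantees:
        share = round(total * g["n_classrooms"] / total_size)
        allocation[g["grantee_id"]] = max(minimum, min(maximum, share, g["n_classrooms"]))
    return allocation

sample_a = select_grantees(frame)
classrooms_per_grantee = allocate_classrooms(sample_a)
# Randomly select the allocated number of classrooms within each sampled grantee.
selected = {g["grantee_id"]: random.sample(range(g["n_classrooms"]),
                                           classrooms_per_grantee[g["grantee_id"]])
            for g in sample_a}
print(len(sample_a), sum(classrooms_per_grantee.values()))
```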


The second part of the evaluation involves conducting qualitative interviews in 35 of the 70 grantees and addresses RQ2 (Sample B). This part uses phone interviews with Head Start program directors to explore how grantees understand the DRS, perceive their relationship to the DRS, and perceive the likelihood of competition (Appendix F). It also uses in-depth interviews conducted on-site with 15 of the 35 grantees selected for the telephone interviews (Sample C). These in-depth interviews will be conducted with Head Start program directors, agency directors (if different from program directors), program managers (e.g., education coordinators, health coordinators), and policy council and governing board members to further explore understanding and perceptions of the DRS and actions to improve quality (Appendices G-J).


Finally, the third part of the evaluation collects information about the characteristics of competitors using the Competition Data Capture Sheet (CDCS; Appendix N) during the Spring 2014 grant competition for DRS Cohort 3 (Sample D). It also includes case studies, conducted in Spring 2015, with 9 awardees from this competition (Sample E; Appendices K-M). The 9 awardees include both incumbent and new awardees (as described in Appendix A). Interviews will be conducted with Head Start program directors, agency directors (if different from program directors), program managers (e.g., education coordinators, health coordinators), and policy council and governing board members to understand how organizations approached the decision to compete, and the facilitators and challenges they faced in the competition process.

Figure A-1: Illustration of Evaluation Samples and Overview of Data Collection Methods


Questions about Validity and Understanding of the DRS
(Research Questions 1 and 2)

Questions about Competition and Responses to Competition
(Research Question 3)

Sample A:

On-site quality measurement to test validity of DRS designation

(Spring 2014)


70 Grantees

(programs monitored in Fall 2013-Spring 2014)


  • Select half likely to be designated for competition and half not likely to be designated.

  • Stratify by geographic region, size and predicted likelihood of designation.


Sample D:

Capture sheet regarding characteristics of applicants

(Spring 2014)


Total Population of Applicants

(estimated 500, DRS Competition Cohort 3)








Sample E:

On-site interviews with awardees regarding process of applying for Head Start grant

(Spring 2015)


9 Awardees

(subset of Sample D)


  • Include 5 incumbent grantees and 4 new awardees.

  • Select sample purposively to maximize diversity of characteristics such as program size (funded enrollment), rural/urban status, auspice, region, reason for designation for competition, and previous relationship to Head Start.





Sample C:

On-site interviews with a range of stakeholders regarding DRS process and program improvements

(Fall 2014)


15 Grantees

(subset of Sample B)


  • Select half likely to be designated for competition and half not likely to be designated.

  • Select sample to ensure diversity in size, geography, organizational type, and reactions to the DRS as expressed in the DRS Telephone Interview: Program Directors

Sample B:

Telephone interviews with directors to learn about experiences with DRS

(Spring 2014)


35 Grantees

(subset of Sample A)


  • Select half likely to be designated for competition and half not likely to be designated.

  • Select sample to ensure diversity in size, geography, organizational type and structure.



Data collected for this evaluation will focus on center-based Head Start grantees subject to a monitoring review in 2013-2014 (some of which will be designated for competition as part of DRS Cohort 4), applicants for new grants in DRS Cohort 3, and awardees of the DRS Cohort 3 grant competition. The evaluation excludes grantees that offer only Migrant and Seasonal Head Start (MSHS), American Indian/Alaska Native Head Start (AIAN), or stand-alone Early Head Start (EHS), as well as interim grantees. There are several reasons for this focus. First, the DRS CLASS condition is used in center-based programs serving preschool-aged children only (i.e., not in EHS or home-based programs). Second, MSHS grantees are a small proportion of programs and face challenges that differ somewhat from those of the typical Head Start program, making findings about them not widely generalizable. Third, compared to other Head Start grantees, AIAN grantees are subject to different policies and processes related to the DRS and competition, as required by the Head Start Act. Finally, interim grantees are operating on a temporary basis and are not subject to the DRS.


Each primary data collection effort will be linked with administrative and secondary data to explore the circumstances in which the DRS works more or less well, including the program characteristics and community characteristics. Table A-1 illustrates which strategies and instruments will be used in answering which research questions. Additional details for the approach to each research question are provided below and in Supporting Statement B.


Table A-1: Instruments and Secondary Data Sources, Type of Administration, Frequency and Purpose

Instrument

How instrument is administered

When instrument is administered

Overall Goal of Instrument

RQ1: How effective is the DRS in identifying higher and lower quality Head Start grantees?

Classroom Assessment Scoring System: CLASS

(Appendix B1.1)

Observation of 560 classrooms in grantees monitored in 2013-2014

Spring 2014

This collection of the CLASS will provide a measure of CLASS independent of the DRS monitoring, ensuring that CLASS scores are available for the same grantees at the same time as the other measures are collected.

Early Childhood Environmental Rating Scale-Revised: ECERS-R

(Appendix B1.2)

Observation of 560 classrooms in grantees monitored in 2013-2014

Spring 2014

The ECERS-R will serve as an independent measure of CLASS Emotional Support with the 22-item Interactions and Space and Furnishings factors.

Early Childhood Environmental Rating Scale – Extension: ECERS-E

(Appendix B1.3)


Observation of 560 classrooms in grantees monitored in 2013-2014

Spring 2014

The ECERS-E will serve as an independent measure of CLASS Instructional Support.

Teacher Styles Rating Scale: Adapted TSRS

(Appendix B1.4)

Observation of 560 classrooms in grantees monitored in 2013-2014

Spring 2014

The Adapted TSRS Subscale of Classroom Structure and Management will serve as an independent measure of CLASS Classroom Management.

Health & Safety Checklist

(Appendix B2)

Observation of 560 classrooms and 300 centers in grantees monitored in 2013-2014

Spring 2014

This checklist will serve as an independent measure of the construct of child health and safety.

Program Administration Scale: PAS

(Appendix B2)

Observation and document review of 300 centers in grantees monitored in 2013-2014

Spring 2014

Several subscales of the PAS (see Table A-2) will measure family and community engagement, child development and education, classroom quality, financial integrity/vulnerability, and management, operations, and governance systems constructs.

Quality Measures Follow Up Interview: Teachers

(Appendix C)

Interviews with teachers in 560 classrooms observed for ECERS-R and ECERS-E

Spring 2014

This instrument provides additional information on items that could not be sufficiently observed during the ECERS-R and ECERS-E data collection.

Quality Measures Follow Up Interview: Center Directors

(Appendix D)


Interviews with 300 Center Directors where PAS and Health & Safety Checklist administered

Spring 2014

This instrument provides information on items that could not be sufficiently observed during the PAS and Health & Safety Checklist. It also collects information about the technical assistance and training efforts of the grantee and demographic data on the children, teachers, and director in the center.

Quality Measures Follow Up Interview: Program Directors

(Appendix E)

Interviews with 70 Program Directors where PAS and Health & Safety Checklist administered

Spring 2014

This instrument provides additional information on items that could not be sufficiently observed during the PAS data collection. It also collects information about the technical assistance and training efforts of the grantee.

Administrative Data from OHS: Program Information Report (PIR)

NA

NA

Data on grantee characteristics will be linked to the independent measures of quality and used to examine how designation status and the validity of a designation rating may vary by grantee characteristics.

Administrative Data from OHS: Historical Monitoring Data

NA

NA

These data will be used to compare the results of the independent measures of quality to the historical monitoring information.

Administrative Data from OHS: Designation Status Data

NA

NA

These data will be used to compare the results of the independent measures of quality to the designation status results.

Secondary Data: Census

NA

NA

Data on characteristics of grantees’ communities will be linked to the independent measures of quality and used to examine how designation status and the validity of a designation rating may vary by community characteristics.

Secondary Data: National Center for Charitable Statistics Form 990 IRS Nonprofit Data

NA

NA

These data will be used to provide an independent measure of financial vulnerability through a calculation of Tuckman & Chang ratios to contribute to validation of the financial integrity/vulnerability construct.

RQ2: How have Head Start grantees understood and responded to the provisions of the DRS in terms of their efforts to improve program operations and quality?

DRS Telephone Interview: Program Directors

(Appendix F)

Phone interview with 35 Program Directors in grantees monitored in 2013-2014

Spring 2014

The purpose of these phone interviews is to learn from Head Start grantees about their experiences with and responses to the DRS.

DRS In-Depth Interview: Agency Directors

(Appendix G)

On-site interview in 15 grantees monitored in 2013-2014

Fall 2014

These interviews will be conducted during site visits and will gather data on grantees’ perceptions of and actions taken as a result of the DRS. These interviews will delve more deeply into information gathered in the DRS Telephone Interviews (Appendix F) and will provide the perspectives of individuals at multiple levels (agency and program directors, managers, and governing bodies).

DRS In-Depth Interview: Program Directors

(Appendix H)

On-site interview in 15 grantees monitored in 2013-2014

Fall 2014

DRS In-Depth Interview: Policy Council/ Governing Body

(Appendix I)

On-site, small group interview in 15 grantees monitored in 2013-2014

Fall 2014

DRS In-Depth Interview: Program Managers

(Appendix J)

On-site, small group interview in 15 grantees monitored in 2013-2014

Fall 2014

Administrative Data from OHS: Program Information Report (PIR)

NA

NA

These data will be linked to the qualitative interview data to assess grantee characteristics in relation to themes emerging from the data analysis.

Secondary Data: Census

NA

NA

These data will be linked to the qualitative interview data to assess grantee community characteristics in relation to themes emerging from the data analysis.

RQ3: What does competition look like and how do programs respond in communities where Head Start grantees are designated for competition? 

Competition In-Depth Interview: Agency and Program Directors

(Appendix K)


On-site interview in 9 DRS Cohort 3 awardees

Spring 2015

The interviews conducted on these site visits will gather information about Head Start grantees’ (incumbents and new awardees) perceptions of and actions taken related to the competitive process associated with the DRS. This information will be gathered at multiple levels within the Head Start grantee, including from agency and program directors, program managers, and governing bodies.

Competition In-Depth Interview: Policy Council/ Governing Body

(Appendix L)


On-site, small group interview in 9 DRS Cohort 3 awardees

Spring 2015

Competition In-Depth Interview: Program Managers

(Appendix M)


On-site, small group interview in 9 DRS Cohort 3 awardees

Spring 2015

Competition Data Capture Sheet

(Appendix N)

Self-complete form accompanying Head Start grant applications (about 500 applicants)

Spring 2014

This form captures information about the organizations that respond to the competitive application process for DRS Cohort 3.

Secondary Data: Census

NA

NA

These data will be linked to the qualitative interview data to assess grantee community characteristics in relation to competition themes emerging from the data analysis.


RQ1: How effective is the DRS in identifying higher and lower quality Head Start grantees?


In order to assess the validity of the DRS in classifying grantees into higher and lower quality, the evaluation compares the conditions and measurement of the DRS conditions to independent measures of quality. For the purposes of this evaluation, the DRS conditions have been aligned with constructs that represent broad areas of Head Start quality reflective of how those areas are represented within Head Start monitoring (child health and safety, family and community engagement, child development and education, classroom quality, management operations and governance, and fiscal integrity and vulnerability). Independent measures of those quality constructs were then selected for use in the evaluation based on their ability to validate the DRS conditions in relation to the type of quality that the particular DRS conditions are designed to address. The crosswalk of these quality constructs, DRS conditions, and independent measures is presented in Table A-2. As the table indicates, some DRS conditions – like monitoring deficiency – cross over many quality constructs. The deployment of each of the independent measures in Table A-2 is described in Table A-1, and more information about the measures is provided in the Universe of Data Collection Instruments section.


Table A-2: Crosswalk of Quality Constructs, DRS Conditions, and Independent Measures

Quality Construct

DRS Condition

Independent Measures

Child Health & Safety

  • Access to health and dental care

  • Screening and referrals

  • Safe physical environments

  • Healthy practices and routines

  • Appropriate group sizes

  • Transportation and supervision

  • Nutrition, provision of meals


  • Monitoring Deficiency

  • Licensing Revocation



  • NAEYC Childcare Health and Safety Checklist

  • California Childcare Health Program Health and Safety Checklist

  • Observation Coversheet - recording of child-staff ratio and group size

Family & Community Engagement

  • Partnerships with families

  • Supporting family needs

  • Parent-child relationships

  • Parents as their child’s educators

  • Family literacy

  • Supporting parents in children’s transitions

  • Community partnerships



  • Monitoring Deficiency

  • School Readiness Goal Requirement Noncompliance



  • PAS Family Partnerships Subscale

Child Development & Education

  • Setting and using school readiness goals

  • Curriculum selection and implementation

  • Individualizing

  • Support for children with disabilities

  • Culturally, linguistically responsive

  • Teacher/staff qualifications



  • Monitoring Deficiency

  • School Readiness Goal Requirement Noncompliance


  • PAS Child Assessment Subscale

  • Observation Coversheet - report of use of curriculum




Classroom Quality

  • Emotional Support

  • Classroom Organization

  • Instructional Support


  • Monitoring Deficiency

  • Low CLASS Scores


  • CLASS

  • ECERS-R

  • ECERS-E

  • Adapted TSRS

  • PAS Staff Qualification Subscale

Management, Operations & Governance Systems

  • Program planning

  • Ongoing monitoring

  • Human Resources

  • Communication

  • ERSEA (Eligibility, Recruitment, Selection, Enrollment and Attendance)

  • Record keeping and reporting

  • Data driven decision making

  • Governing Board and Policy Council



  • Monitoring Deficiency

  • License Revocation

  • ACF Grant Suspension



PAS Subscales:

  • Program Planning & Evaluation

  • Center Operations

  • Human Resources Development

  • Marketing & Public Relations

  • Personnel Cost & Allocation

Financial Integrity / Vulnerability

  • Financial management systems

  • Accounting practices

  • Appropriate expenditures, costs and purchasing

  • Failure to maintain a going concern



  • Monitoring Deficiency

  • ACF Grant Suspension

  • Federal Funding Debarment or Disqualification

  • Audit Finding


  • PAS Fiscal Management Subscale

  • Tuckman & Chang (1991) Financial Ratios using IRS Form 990 data
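To make the financial vulnerability measure in the last row of Table A-2 concrete, the sketch below computes the four Tuckman and Chang (1991) indicators from simplified Form 990-style fields. The field names are hypothetical placeholders rather than the actual NCCS variable names, and the thresholds used to flag a grantee as vulnerable are not shown.

```python
def tuckman_chang_ratios(org):
    """Compute the four Tuckman & Chang (1991) financial vulnerability indicators
    from simplified Form 990-style fields (hypothetical names). Lower equity,
    higher revenue concentration, a lower administrative cost share, and a lower
    operating margin all signal greater financial vulnerability."""
    total_revenue = sum(org["revenue_by_source"].values())
    equity_ratio = org["net_assets"] / total_revenue
    # Herfindahl-style concentration: 1.0 means a single revenue source.
    revenue_concentration = sum((amount / total_revenue) ** 2
                                for amount in org["revenue_by_source"].values())
    admin_cost_ratio = org["management_expenses"] / org["total_expenses"]
    operating_margin = (total_revenue - org["total_expenses"]) / total_revenue
    return {"equity_ratio": equity_ratio,
            "revenue_concentration": revenue_concentration,
            "admin_cost_ratio": admin_cost_ratio,
            "operating_margin": operating_margin}

# Illustrative example only; values do not describe any actual grantee.
example = {"net_assets": 250_000,
           "revenue_by_source": {"head_start_grant": 1_800_000,
                                 "state_prek": 300_000,
                                 "donations": 100_000},
           "management_expenses": 180_000,
           "total_expenses": 2_150_000}
print(tuckman_chang_ratios(example))
```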


Our evaluation of the DRS focuses heavily on two of the seven conditions that can trigger competition for grantees because those two conditions placed grants in the competitive renewal process over 99% of the time. Although we include some measure of all seven conditions, Monitoring Deficiencies and Low CLASS Scores are the primary focus of our evaluation.


In 2012, 123 grantees were designated for competition in the DRS. Table A-3 below shows which conditions triggered designation for those grantees and indicates 99% of grantees designated for competition were designated based on monitoring deficiencies and/or CLASS scores. For this reason, the DRS evaluation primarily focuses on these two conditions.


Table A-3: 2012 Findings by DRS Criteria

DRS Criteria                            # of Grantees    %
Monitoring Deficiencies Only            74               61%
CLASS Only (classroom quality)          43               36%
Monitoring Deficiencies and CLASS       3                2.5%
License Revocation Only                 1                <1%

*2 migrant/seasonal grantees designated for competition are not included in the counts


Thus, the current proposed data collection will include independent assessment of all seven criteria that could lead to designation, but predominantly focuses on measuring constructs related to CLASS and monitoring deficiencies because those two criteria clearly dominate what triggers a program being designated for competition. The data collection includes several measures of classroom quality to focus on the CLASS criterion and measures of management systems, fiscal integrity, child health and safety, education and early learning, and family and community partnerships, to focus on the deficiencies criterion. (See Table A-2 for a depiction of how the various constructs are measured by OHS as part of the DRS and the corresponding measures proposed for this data collection in the “independent measures” column.)


In Spring 2014, prior to grantee knowledge of their DRS status, the evaluation will conduct on-site observational assessments and follow-up interviews at the grantee, center, and classroom levels in a sample of 70 grantees to assess the validity of the DRS measures. This data collection will directly follow the Fall 2013-Spring 2014 monitoring and CLASS assessments conducted by OHS, from which DRS designation status will be decided by OHS (i.e., within a month or two of the OHS monitoring visit). The proximity in time of the data collection is important to preserve the integrity of the validation process (i.e., assessments of validity are sensitive to gaps in time). Thus, the results of the independent measures of quality collected by the evaluation team may be compared to the results of the assessments conducted by OHS monitoring to examine the validity of OHS assessments and resulting determinations about which grantees will be designated to compete for renewed funding.


The appropriateness and psychometric soundness of the independent measures in Table A-2 for assessing quality in early childhood programs has been demonstrated in many studies. A summary of that information is provided here.


The Early Childhood Environment Rating Scale-Revised (ECERS-R) has proven to be a reliable measure of global classroom quality. Studies using factor analytic techniques have identified two main factors within the ECERS-R. The first factor, Interactions, relates to how teacher behavior supports children’s development; the second factor, Space and Furnishings, relates to the space and materials and how conducive they are to learning. The ECERS-R has demonstrated good inter-rater reliability at the indicator, item, and total scale level: percent agreement is 86.1% across all 470 indicators, and agreement within one point is 71% across items. Furthermore, agreement between two observers has a Pearson product-moment correlation of .921 and a Spearman correlation of .865. The ECERS-R has a total scale internal consistency of .92 and an intraclass correlation of .915 (Harms, Clifford, & Cryer, 2005). A study by Cassidy, Hestenes, Hedge, Hestenes, and Mims (2005) found that factor 1, Interactions, had an internal consistency of .81 and factor 2, Space and Furnishings, had an internal consistency of .87 (Clifford, Reszka, & Rossbach, 2010).
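As a point of reference, the agreement statistics cited above can be computed as in the following sketch, which uses hypothetical item scores from two observers rather than actual ECERS-R data.

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical item scores (1-7 scale) from two observers rating the same classroom.
observer_a = [5, 6, 4, 7, 5, 3, 6, 5, 4, 6, 5, 7, 4, 5, 6]
observer_b = [5, 5, 4, 7, 6, 3, 6, 4, 4, 6, 5, 6, 4, 5, 6]

n_items = len(observer_a)
exact_agreement = sum(a == b for a, b in zip(observer_a, observer_b)) / n_items
within_one_point = sum(abs(a - b) <= 1 for a, b in zip(observer_a, observer_b)) / n_items
pearson_r, _ = pearsonr(observer_a, observer_b)
spearman_rho, _ = spearmanr(observer_a, observer_b)

print(f"exact agreement: {exact_agreement:.2f}")
print(f"within-one-point agreement: {within_one_point:.2f}")
print(f"Pearson r: {pearson_r:.3f}, Spearman rho: {spearman_rho:.3f}")
```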


In practice, quality measures such as the ECERS-R, the Early Childhood Environment Rating Scale-Extension (ECERS-E), and the CLASS are often used together to obtain a comprehensive view of classroom quality. One example is a study conducted by Mashburn and colleagues (2008) in which researchers examined classroom quality and children’s academic development. Using a large and diverse sample of 2,439 children in 671 preschool classrooms in 11 states, researchers found that the ECERS-R was positively associated with children’s expressive language development. Teachers’ instructional interactions, as captured by the ECERS-R, predicted academic language skills. Using the CLASS, findings revealed that higher quality instructional interactions were positively associated with language and social skills. Higher quality emotional interactions were associated with children’s social competence and lower reports of problem behaviors (Mashburn et al., 2008). In another study, Denny, Hallam, and Homer (2012) examined classroom quality in a sample of 114 early education programs licensed by the Tennessee Department of Human Services. Researchers used a combination of measures to assess quality, specifically the ECERS-R, ECERS-E, and the CLASS. Denny and colleagues found that the three measures were highly correlated with one another: strong positive correlations indicated that when a classroom scored high on one measure it also typically scored high on another, with some exceptions. The sample of 114 classrooms yielded an average ECERS-R score of 4.41. The classrooms also scored high on the Emotional Support and Interactions subscale of the CLASS. Interestingly, these same classrooms scored low on the ECERS-E, indicating weakness in instruction and curriculum. Multiple measures are proposed for two reasons. First, we tried to identify independent quality measures that correspond as closely as possible to the three domain scores of the DRS classroom quality measure, the CLASS. Second, multiple measures are necessary when the underlying construct, such as classroom quality, has multiple dimensions that are not necessarily highly correlated (Denny et al., 2012).


A newer classroom quality instrument that has been used in several Head Start studies is the Adapted Teacher Style Rating Scale (Adapted TSRS). The Adapted TSRS is an instrument that aligns with the CLASS. Bierman et al. (2008) used the original TSRS as a complement to the CLASS in the Head Start REDI program because it focuses on the behavior of a specific teacher. TSRS scores were found to be in agreement with CLASS scores 93% of the time. An expanded version of the TSRS instrument incorporating additional subscales, which will be used for this study, was created for the Head Start CARES study (Raver et al., 2012).


The Program Administration Scale (PAS) is a less frequently used measure that describes quality at the center, rather than the classroom, level. Descriptive statistics have confirmed that the PAS has an acceptable distribution of scores; two samples were used to obtain reliability and validity data. Tests for internal consistency revealed a Cronbach’s alpha of .85 for sample one and .86 for sample two, indicating acceptable internal consistency among the items for both samples. The 10 subscales were correlated to measure distinctiveness; subscale intercorrelations ranged from .09 to .63 with a mean of .33 for sample one, and from .04 to .72 with a mean of .33 for sample two. Interrater reliability within 1 point was 90% for sample one and 94% for sample two. Finally, the PAS’s concurrent validity was measured by correlating the instrument with the Professional Growth Subscale of the Early Childhood Work Environment Survey (ECWES) and the Parents and Staff subscale of the ECERS-R. The PAS displayed moderate correlations with both measures, .53 with the ECERS-R and .52 with the ECWES (Talan & Bloom, 2011).
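For readers unfamiliar with the internal consistency statistic reported above, the following sketch shows the standard Cronbach's alpha calculation applied to hypothetical item-level ratings; it is illustrative only and does not use PAS data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of total score),
    where rows are respondents (here, centers) and columns are items."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 1-7 ratings for six centers on five items of one subscale.
ratings = [[5, 6, 5, 4, 6],
           [3, 4, 3, 3, 4],
           [6, 7, 6, 6, 7],
           [4, 4, 5, 4, 5],
           [2, 3, 2, 3, 3],
           [5, 5, 6, 5, 6]]
print(round(cronbach_alpha(ratings), 2))
```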


In a study conducted by Lower and Cassidy (2007), the relationship between child care program administration, organizational climate, and global quality was examined using 30 preschool centers in North Carolina. Lower and Cassidy found that program administration and organizational climate were both positively correlated with preschool classroom quality, and that the director’s level of education was related to higher quality administrative practices. Using the PAS, Lower and Cassidy found that non-profit centers had higher scores than for-profit centers; the total sample had an average score of 2.87, which indicated that quality of administration was less than minimal. Results from this study supported the need for further investigation of leadership and management quality in early care settings.


Strengths of This Method


There are several strengths of the proposed design for the validation study. First, all measures were selected because they provide the best independent measurement of the aspects of quality that are being measured by the DRS. Careful attention was paid to identifying aligned tools that measure the conditions that are most likely to place grantees in competition – deficiencies in health and safety performance standards and classroom quality. Accordingly, we focused on selecting instruments that measure these two areas as accurately as possible and in a manner closely aligned with the DRS measurement. The two health and safety questionnaires cover many of the monitoring health and safety standards. The three classroom quality measures should provide us with independent measures of instructional quality, emotional support, and classroom management – the three scales on the CLASS that are used in the DRS monitoring. The PAS should provide us with information about the other deficiencies that could result in designation.


Second, data are being collected at the level at which variation can occur. Classrooms can vary within and across centers, and the proposed plan to collect classroom quality measures from more than one classroom in larger centers as well as from multiple centers will reflect that variation. Health and safety standards, family engagement, professional development, and supervision occur at the center level and can vary across centers within grantees.


Third, the data collected will include information about quality improvement (QI) efforts at the grantee level, and therefore, can provide some information about the extent to which grantees that engage in different types of quality improvement and technical assistance efforts are more or less likely to be designated or have higher quality services – overall or especially within the area covered by that QI.

Limitations of This Method


There are limitations to this method as well. First, the monitoring visits that inform the DRS process collect information from many more classrooms and centers per grantee than will be possible with the evaluation. Therefore, the level of confidence in the assessment of quality by the DRS should be much higher than for the evaluation team because the DRS team will have considerably more information. The random selection of classrooms within grantees should not introduce bias within the evaluation relative to the DRS, but the confidence intervals will be larger for the evaluation results than for the DRS monitoring results. Nonetheless, they will be within a range technically acceptable to the field.
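To give a rough sense of this precision tradeoff, the calculation below compares the width of a simple 95 percent confidence interval for a grantee's mean CLASS domain score under two hypothetical numbers of observed classrooms; the standard deviation and sample sizes are illustrative and ignore clustering design effects.

```python
import math

def ci_half_width(sd, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a mean,
    ignoring finite-population corrections and clustering design effects."""
    return z * sd / math.sqrt(n)

sd = 0.8  # hypothetical between-classroom SD of a CLASS domain score
for n_classrooms in (8, 30):  # smaller evaluation sample vs. a larger monitoring sample
    half_width = ci_half_width(sd, n_classrooms)
    print(f"n = {n_classrooms:2d} classrooms: grantee mean +/- {half_width:.2f}")
```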


Second, there are not parallel measures for some of the major constructs. The CLASS Instructional Support is thought to be low in early childhood settings in general, and therefore might be the most likely CLASS domain score to place a grantee into competition. There are no other quality measures that are highly aligned with this domain of the CLASS measure, and we chose the ECERS-E because it appeared to be the closest. It, however, is moderately correlated with the CLASS Instructional Support. In other domains the measures had higher correlations (e.g., the ECERS Interaction factor and CLASS Emotional Support; the Adapted TSRS Management Scale and CLASS Classroom Management). Similarly, there were not strong measures of family involvement that were tightly aligned with the monitoring assessment used in DRS.


RQ2: How have Head Start grantees understood and responded to the provisions of the DRS in terms of their efforts to improve program operations and quality?


The study design calls for RQ2 to be addressed through two phases of qualitative data collection, both of which will occur before the grantees interviewed know their designation status. In the Spring of 2014, telephone interviews will be conducted with 35 of the 70 Head Start program directors whose grantees participated in the observational assessments. The purpose of these interviews will be to collect basic information on how directors understand the provisions of the DRS and how their grantee responded to the DRS. In Fall 2014, follow-up site visits will be conducted with 15 grantees that participated in the phone interviews to more deeply explore their perceptions and actions, and to speak with more individuals responsible for governance and management of the Head Start grantees.


The telephone interview will be guided by a protocol designed for the purpose of this study (see Appendix F). The same protocol will be used for all grantees and will cover topics such as: how directors interpret the DRS provisions, ways the DRS has affected the grantee, actions taken in response to the DRS, and training and technical assistance needs related to the DRS. The telephone interview is expected to last an average of 75 minutes per respondent. The phone interview will yield a high-level overview of how Head Start program directors understand the DRS and the ways grantees are responding. However, eliciting a comprehensive understanding of grantee responses to the DRS requires building rapport with respondents, which may be limited by the telephone mode of interviewing. Further, grantee responses to the DRS are affected by the understanding and actions taken by grantee leadership and staff beyond the program director.


Thus, a second phase of data collection to address RQ2 involves follow-up site visits to validate the findings from the phone survey, obtain additional details from directors, and observe responses to the DRS from a broader set of Head Start stakeholders.  In this phase we will conduct one- to two-day site visits to 15 grantees included in the observational assessments and telephone interviews. During those visits, we will conduct: 90-minute individual interviews with Head Start program directors (a follow-up to the phone interview); 60-minute individual interviews with agency directors (if different from the Head Start program directors); 90-minute small group interviews with members of the governing body and with members of the Policy Council; and 90-minute small group interviews with Head Start program supervisors and managers. These interviews will be guided by protocols designed for the purpose of this study (see Appendices G-J). The interviews will cover topics similar to those included in the director phone interview, but from the perspective of individuals with different types of responsibility for program implementation.  Qualitative analyses will focus on identifying themes in grantee perceptions and responses to the DRS to describe quality improvement efforts associated with the DRS. Qualitative data will be linked to administrative data sources, such as PIR and census data, to examine how themes vary by grantee and/or community characteristics.


RQ3: What does competition look like and how do programs respond in communities where Head Start grantees are designated for competition?


The study design calls for RQ3 to be addressed through two phases of data collection. One phase will occur in Spring 2014 to assess the level of competition created and the characteristics of agencies and organizations that submit applications in response to the Funding Opportunity Announcements in communities where grants are designated for competition in Cohort 3. The second phase will occur in Spring 2015 to assess the competition experiences of awardees – both incumbents and new awardees.


In Spring 2014, the evaluation will collect data through the CDCS on all agencies or organizations applying for Head Start grants through the competitive process. The CDCS was developed by the research team to reflect elements of the competitive process that the literature indicates may occur differently in different communities, such as collaborations, attraction of external resources, and competitor types as indicated by organizational characteristics. It also captures elements of the proposed service delivery to allow for comparison across competitor approaches. These data will be linked to administrative data, such as the PIR and Census data, to document the extent and nature of competition for the sites designated to compete and how that varies by community characteristics. Summary statistics will be generated for the competition as a whole and for competition by grant competed. The evaluation will also examine summary statistics along particular community dimensions, like urbanicity, that the literature indicates may affect competition, and along particular program dimensions, like service options, that may indicate whether the competitive process is garnering particular kinds of competitors.
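As an illustration of the kind of summary statistics described above, the sketch below tabulates hypothetical CDCS records by grant competed and by urbanicity; the field names and values are placeholders and do not reflect the actual capture sheet items.

```python
import pandas as pd

# Hypothetical CDCS records: one row per applicant organization (illustrative fields only).
cdcs = pd.DataFrame({
    "grant_competed": ["G1", "G1", "G2", "G2", "G2", "G3"],
    "applicant_type": ["incumbent", "new", "incumbent", "new", "new", "incumbent"],
    "urbanicity":     ["urban", "urban", "rural", "rural", "rural", "urban"],
})

# Level of competition: number of applicants per grant competed.
applicants_per_grant = cdcs.groupby("grant_competed").size()

# Summary statistics along a dimension the literature suggests matters (urbanicity).
applicant_types_by_urbanicity = cdcs.groupby("urbanicity")["applicant_type"].value_counts()

print(applicants_per_grant)
print(applicant_types_by_urbanicity)
```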


In Spring 2015, following the competitive awards to incumbent grantees or new awardees, nine grantees that engaged in the competitive process will be selected for on-site, in-depth interviews to understand their motivations to participate and the facilitative or challenging elements of the competitive process. On-site interviews will be conducted with program directors, agency directors, representatives of policy councils and governing bodies, and coordinators of program services. These data will be coded for themes based on what the literature indicates is likely to occur and on emerging themes. The nine sites will be treated as case studies. Thus, analyses will focus on profiling each awardee and the particular characteristics of that awardee, the awardee’s community, their service options and population served, and their experiences. The analysis will also develop cross-case comparisons to identify where differences occur and indications of why they occur.





Strengths and Limitations of the Methods for RQ2 and RQ3


Miles, Huberman, and Saldana (2013) point out that, “The main task (of qualitative research) is to describe the ways people in particular settings come to understand, account for, take action, and otherwise manage their day-to-day situations” (p. 9). Thus, the study design calls for RQ2 and RQ3 to be addressed primarily through qualitative data collection and analysis, which will allow for examination of how Head Start grantees understand the DRS, how they are motivated by the DRS, and the actions they have taken to improve quality in response to the DRS.


The proposed in-depth interviews will permit a rich exploration of the perceptions and experiences of Head Start grantee leadership and staff with regard to the DRS and the competitive process. The cross-case approach will allow us to explore differences across grantees and how those differences might be associated with program characteristics and context. The purposive sampling approach is designed to generate information on the broadest possible range of responses to the DRS, which is important for the goal of understanding how the DRS may be affecting the diverse group of Head Start grantees.


The tradeoff is that purposive (non-representative) sampling limits the capacity of the study to reach conclusions about the frequency of particular phenomena in the population of interest, and is not generalizable to the whole population in the way that randomly selected quantitative data can be. On the other hand, it does allow for exploratory analyses that may surface important information about how grantees are experiencing the DRS and the competitive process. In exploring RQ2, the interview data will provide greater understanding of a subset of the grantees investigated for RQ1, offering some additional information about the interplay between grantee efforts to improve and their designation for competition. Similarly, the qualitative portion of RQ3 will be paired with the summary statistics from the CDCS, which will enable comparison of the actual competition and perceived competition in particular communities, and the extent to which that mattered in perceived incentives.


While the methods used in this evaluation will not allow for conclusions of causality, they will build a preliminary understanding of the mechanisms of the DRS that precipitate efforts to improve quality – changing perceptions and motivations. Additionally, the information in this evaluation can serve as the foundation for future studies that could be designed to assess causal relationships between the DRS and changes in Head Start quality.

Universe of Data Collection Efforts


The instruments to be used for collecting data are as follows:


Instruments Listed in Respondent Burden Table


Instruments for Assessing How Well DRS Differentiates Higher and Lower Quality Grantees (RQ1)



  • Quality Measures Follow Up Interview: Teachers (Appendix C)

This instrument captures follow up items in two classroom observation instruments – the ECERS-R and the ECERS-E – that require interviewing the teacher to ask questions about any indicator that could not be scored during observations. It also captures characteristics of the observed classrooms and teachers through the classroom observation coversheet, including four questions for teachers about their education and race/ethnicity. Further information about the full set of classroom-level quality measures can be found in the “Classroom Quality Measurement Instruments” section below and in Appendix B1.


  • Quality Measures Follow Up Interview: Center Directors (Appendix D)

This instrument includes interview portions of the PAS and the Child Care Health and Safety Checklist (that could not be directly observed), as well as the Technical Assistance and Training Interview, Center Demographic Sheet, and Center Director Questionnaire:

    • Technical Assistance and Training Interview: This interview is designed to inform understanding of the types of professional development and technical assistance supports that grantees use to help in preparing for monitoring. It is conducted at both the grantee and center levels. The following document was referenced in designing this instrument in addition to the technical assistance and professional development experience of the researchers: Early Childhood Education Professional Development: Training and Technical Assistance Glossary (NAEYC and NACCRRA, 2011).

    • Center Demographic Sheet: This instrument captures information about each center where data are collected and is filled out by the data collector before the site visit. It does not have any respondent burden.

    • Center Director Questionnaire: This instrument assesses various program characteristics, including information about children, teachers and center directors themselves. It is administered at the center level in conjunction with the PAS.


Further information about the PAS and Child Care Health and Safety Checklist can be found in the “Other Program Quality Measurement Instruments” section below and in Appendices B2 and B3.


  • Quality Measures Follow Up Interview: Program Directors (Appendix E)

This instrument includes an interview portion of the PAS (items that could not be directly observed), as well as the full Technical Assistance and Training Interview (see description under Quality Measures Follow Up Interview: Center Directors). Further information about the PAS can be found in the “Other Program Quality Measurement Instruments” section below and in Appendix B2.


Instruments Assessing Understanding and Perceptions of the DRS and Efforts to Improve Quality (RQ2). All of these interview instruments were designed specifically for this study.


  • DRS Telephone Interview: Program Directors (Appendix F)

This interview, taking place soon after the quality-related site visits in Spring 2014, asks about perceptions and experiences with the Head Start Designation Renewal System. Further details are available in Appendix F.


  • DRS In-Depth Interview: Agency Directors (Appendix G)

This guide is for a semi-structured on-site interview to collect information from directors of Head Start grantees on their perceptions of the Designation Renewal System. Further details are available in Appendix G.


  • DRS In-Depth Interview: Program Directors (Appendix H)

This guide is for a semi-structured on-site interview to collect information from Head Start program directors on their perceptions of the Designation Renewal System. Further details are available in Appendix H.


  • DRS In-Depth Interview: Policy Council/Governing Body (Appendix I)

This guide is for a semi-structured on-site group interview to collect information from members of policy councils and governing bodies on their perceptions of the Designation Renewal System. Further details are available in Appendix I.


  • DRS In-Depth Interview: Program Managers (Appendix J)

This guide is for a semi-structured on-site group interview to collect information from education services managers and other services coordinators on their perceptions of the Designation Renewal System. Further details are available in Appendix J.


Instruments for Assessing the Nature of Competition and Program Responses to Competition (RQ3). All of these instruments were designed specifically for this study. The types of questions are drawn from the competition literature (e.g., Hefetz & Warner, 2011; Warner & Hefetz, 2003).


  • Competition In-Depth Interview: Agency and Program Directors (Appendix K)

This guide is for a semi-structured on-site interview to collect information from agency directors and Head Start program directors in organizations that competed for and won grants that had been designated for competition through the DRS. The purpose is to understand decisions to compete, competition experiences, and facilitative or challenging elements in the competitive process. Further details are available in Appendix K.


  • Competition In-Depth Interview: Policy Council/Governing Body (Appendix L)

This guide is for a semi-structured on-site interview to collect information from members of policy councils and governing bodies regarding their roles in participating in competition for grants designated for competition through the DRS and to understand facilitative or challenging elements in the competitive process. Further details are available in Appendix L.


  • Competition In-Depth Interview: Program Managers (Appendix M)

This guide is for a semi-structured on-site interview to collect information from education services managers and other services coordinators to understand their roles in the competition process, their motivations to participate, and facilitative or challenging elements in the competitive process. Further details are available in Appendix M.


  • Competition Data Capture Sheet (CDCS; Appendix N)

This instrument will collect succinct data on applicants in Spring 2014 for Head Start grants designated for competition through the DRS process in order to assess the level and type of competition for Head Start grants competed as a result of the DRS. Further details are available in Appendix N.


Observational Assessment Instruments with No Burden


Classroom Quality Measurement Instruments (RQ1)


  • Classroom Assessment Scoring System (CLASS)

The CLASS (Pianta, La Paro, & Hamre, 2008) is organized into three domains: Emotional Support, Classroom Organization, and Instructional Support. We propose to administer the CLASS in order to provide a measure of CLASS that is independent of DRS monitoring and to ensure that CLASS scores are available for the same classrooms at the same time the other measures are collected. Further details are available in Appendix B1.1.


  • Early Childhood Environment Rating Scale – Revised (ECERS-R)

The ECERS-R (Harms, Clifford & Cryer, 2005) is proposed to align with the CLASS domain of Emotional Support. In order to minimize burden on grantee staff and researchers while maintaining the integrity of the instrument’s validity and reliability, use will be limited to 22 items that align with two factors: Interactions, and Space and Furnishings. This will allow assessment of teaching and interactions as well as programs’ provisions for learning, and it is the abbreviated version used in the Family and Child Experiences Survey (FACES) 2009. Further details are available in Appendix B1.2.


  • Early Childhood Environment Rating Scale – Extension (ECERS-E)

The ECERS-E (Sylva, Siraj-Blatchford & Taggart, 2003) aligns with the CLASS Instructional Support domain and assesses literacy, mathematics, science and environment, and diversity. It is being used as originally designed. Further details are available in Appendix B1.3.


  • Teacher Style Rating Scale (Adapted TSRS)

The Adapted TSRS (Raver et al., 2012) is proposed to align with the CLASS Classroom Management domain and assesses classroom quality. This recently updated instrument consists of 45 items in 15 domains, but only the domains related to Classroom Structure and Management will be used. Further details are available in Appendix B1.4.


Other Program Quality Measurement Instruments (RQ1)


  • Program Administration Scale (PAS)

The PAS (Talan & Bloom, 2011) will be used to measure program quality across a variety of constructs corresponding to the conditions triggering a grantee to be designated for competition in the Head Start Designation Renewal System. It contains 25 items grouped into 10 subscales that measure leadership, management, and administrative practices of center based early childhood programs. It consists of a mix of interviews with a program administrator and review of administrative data. It is being used at both the center and grantee levels. Further details are available in Appendix B2.


  • Child Care Health and Safety Checklist (Appendix B3)

This measure is used to assess the procedures in place to ensure child health and safety in child care facilities. It combines elements of two previously existing checklists: one developed by the National Association for the Education of Young Children (Aronson, 2002) and another developed for use in California (California Childcare Health Program, 2005). It will be administered at the center level. The instrument and further details can be found in Appendix B3.


  • Tuckman & Chang Financial Vulnerability Ratios

IRS Form 990 data, obtained through the National Center for Charitable Statistics, will be used in the validation assessment to calculate the Tuckman and Chang (1991) financial ratios as an independent measure of financial stability for grantees that are nonprofit organizations. There are four ratios: Equity Ratio, Revenue Concentration, Administrative Cost Ratio, and Operating Margin. Further details are available in Appendix B4 and Supporting Statement B.


Secondary Data Sources


The research team will connect the primary data collected for the study to secondary data sources like Census and other community information to enrich the analysis. These data will be used to help understand the circumstances in which the DRS is working more or less well.


A thorough review of existing documents will serve as a starting point for identifying information already available and avoiding duplication of data collection efforts. We intend to draw data from the following documents as part of our evaluation:


  • PIR data available through OHS

  • Monitoring data available through the OHS

  • Designation status data available through the OHS

  • Other materials available through federal websites (e.g., information memoranda, policy clarifications, and other materials intended to inform grantees or potential applicants about the DRS)

  • Data on nonprofit Head Start organizations available from the Urban Institute’s National Center for Charitable Statistics (i.e., IRS Forms 990).


A3. Improved Information Technology to Reduce Burden


The Quality Measures Follow Up Interview: Teachers, Quality Measures Follow Up Interview: Center Directors, and Quality Measures Follow Up Interview: Program Directors will be computer-assisted interviews (CAI) administered through electronic tablet-based data collection tools. This will improve the efficiency and reduce the burden of the interview process by helping the data collector more quickly identify and target the follow up questions needed.


Whenever possible, information technology will be used in data collection efforts to reduce burden on study participants. To facilitate data collection, entry, and management, all data collection on measuring program quality will be computer assisted. Electronic tablet-based data collection tools will be used to record data during observations and transmit them to the study team’s database once collected, and CAI will be developed for administering the PAS with key grantee personnel and center directors.


With regard to collecting qualitative data through on-site interviews, each site visit interview will involve two members of the study team, with one asking questions and a second typing close to verbatim notes capturing key quotes and responses on a laptop. An audio recorder will be used with permission to later confirm direct quotes or other details from the sessions.


A4. Efforts to Identify Duplication


There is no other current or planned effort to collect data regarding the validity of the DRS or regarding local program efforts to improve quality in response to the DRS. The DRS is a new system and has not yet been evaluated.


A5. Involvement of Small Organizations


Information being requested or required has been held to the minimum required for the intended use. Most of the 70 organizations included in the study will be small organizations, including community-based organizations (Community Action Agencies), other non-profit organizations, school districts, government agencies, and for-profit organizations.


Burden will be minimized for respondents by restricting the length of interviews and classroom observations to the minimum required, by conducting interviews on-site or on the telephone at times that are convenient to the respondent, and by requiring no record-keeping or written responses by respondents.


A6. Consequences of Less Frequent Data Collection


This is a one-time data collection.



A7. Special Circumstances


There are no special circumstances for this data collection.


A8. Federal Register Notice and Consultation


Federal Register Notice and Comments


In accordance with the Paperwork Reduction Act of 1995 (Pub. L. 104-13) and Office of Management and Budget (OMB) regulations at 5 CFR Part 1320 (60 FR 44978, August 29, 1995), ACF published a notice in the Federal Register announcing the agency’s intention to request an OMB review of this information collection activity. This notice was published on June 11, 2013 (Volume 78, Number 112, page 35038) and provided a sixty-day period for public comment. A copy of this notice is attached as Appendix P. During the notice and comment period, 15 requests for copies of the information collection instruments and 26 comments were received.


Copies of the draft instruments were sent by email to each of the 15 requestors. Twenty-one comments related to the DRS itself rather than to the evaluation and thus did not have implications for the design of the current information collection request. However, the evaluation team considered how the study can address the issues or concerns raised in those comments (e.g., concerns about the validity of the CLASS condition were expressed in several comments, and this issue will be examined in the proposed study). One comment was from a local program expressing interest in participating in the study.


Three commenters provided substantive comments on the proposed information collection. Those comments included overall support for the study, support for the inclusion of qualitative methods, a specific comment on the wording of items in the qualitative interviews (which was addressed), and an appeal to include a focus on the following issues in the evaluation: validity of deficiency determinations, use and validity of the CLASS, burden of the DRS on grantees, transparency and communication, conduct of on-site monitoring reviews, unintended consequences of the DRS, a comprehensive picture of quality, whether program quality changes over time, and an appeal process. The evaluation team reviewed the comments and ensured that each of these areas of interest is being addressed in the proposed plan for the evaluation to the extent possible. For example, the study will examine the validity of a deficiency (RQ1), the use and validity of the CLASS in monitoring (RQ1), and issues related to burden, transparency, and unintended consequences of the DRS (RQ2 and RQ3). The study will capture a comprehensive picture of quality by using multiple measures of quality to examine the validity of the DRS (RQ1) and complementing those measures with qualitative interviews (RQ2). Issues related to the conduct of on-site monitoring reviews will not be examined in the DRS evaluation because OHS already has quality assurance procedures in place to examine the integrity and conduct of on-site monitoring reviews; to examine them within the scope of this project would be duplicative. Additionally, associations between the DRS and changes in program quality over time will not be examined in this study because they can be examined more efficiently and effectively in other Head Start research studies (e.g., FACES), and to do so in this study would be duplicative. Finally, it will not be possible to examine an appeals process in the current study because no appeal process currently exists and this is a descriptive study of the system as it is currently implemented. Examining the efficacy of adding an appeals process to the DRS could be a question for future research.


Finally, one commenter expressed general concern about the study, although many of the concerns reflected a misunderstanding of the goals of the project, the methodology, and the public comment period. These included concerns that the sample size for the qualitative component is too small to be representative, that interview questions are not aligned across respondents, that it is not useful to examine the competition process at all, and an overall concern that the design has not been finalized. The evaluation team considered these concerns and notes: (a) sample sizes for RQ2 and RQ3 are typical and adequate for the purpose of qualitative methods, which are not designed to be representative; (b) interview questions are tailored to the particular respondent and the perspective available from his/her position in the organization and thus are not designed to be identical across respondents; (c) competition is conceptualized as a key mechanism for change associated with the DRS and is thus included in the study; and (d) the design presented here is the proposed design and remains subject to review by OMB. Other comments included concerns that the study will not examine whether quality changes over time and that the estimated burden seems understated. Burden estimates are based on extensive experience by ACF and the evaluation team in conducting research projects of similar size and scope and using the same or similar measures. As noted in B.4, the research team will pilot the full data collection battery with several classrooms and centers to confirm timing and scheduling prior to official data collection. As noted above, associations between the DRS and changes in program quality over time will not be examined in this study because they can be examined more efficiently and effectively in other Head Start research studies (e.g., FACES), and to do so in this study would be duplicative.

 

Consultation with Experts Outside of the Study


The contractor consulted with independent experts to provide advice on the conceptualization and design of the study. The experts included economists, psychologists, and specialists in measurement, management, and program evaluation, with expertise in evaluating government initiatives, Quality Rating and Improvement Systems (QRIS), Head Start, or the role of competition in communities. The experts consulted include:


Greg Duncan, Distinguished Professor, School of Education, University of California at Irvine

Stephanie Jones, Marie and Max Kargman Associate Professor in Human Development and Urban Education Advancement, Graduate School of Education, Harvard University

Christine McWayne, Associate Professor of Child Development, Eliot-Pearson Department of Child Development, Tufts University

Kathryn Newcomer, Professor and Director, Trachtenberg School of Public Policy and Public Administration, George Washington University

Kathryn Tout, Co-Director, Early Childhood Research & Senior Research Scientist, Child Trends

Mildred Warner, Professor, Department of City and Regional Planning, Cornell University


A9. Incentives for Respondents


The study design for RQ1 relies on roughly equal participation of grantees designated for competition and grantees not designated for competition. The study team is concerned that lower-performing grantees will be less likely to participate in the study. The study team has observed from outreach efforts, as well as from comments received in response to the 60-day Federal Register notice, that some grantees have a fear and distrust of the DRS performance system that extends to the evaluation study. They are concerned that the evaluation team could observe them doing something inappropriate that could jeopardize their designation status (e.g., lead to a finding of a monitoring deficiency or a low CLASS score). This concern is likely to be greatest among lower-performing grantees or lower-performing classrooms/teachers within grantees, and the study team believes this could lead to differential nonresponse. For example, experiments on nonresponse bias by Groves and colleagues (2006) indicate that when the topic of a study generates negative thoughts or reminders of past failures in potential respondents, the likelihood of participation declines. Provision of monetary tokens of appreciation, however, induced participation despite those negative thoughts and reminders of past failures, thereby reducing nonresponse bias. Because the integrity of the design for RQ1 relies on roughly equal participation from lower- and higher-performing grantees, as well as lower- and higher-performing classrooms within grantees, reducing differential nonresponse is critical. Thus, we propose to offer grantees and classroom teachers participating in the study components associated with RQ1 gifts of appreciation for their time and participation.

Teachers whose classrooms are observed will be offered $25 gift cards. This rate is lower than what has been provided in previous similar studies, but we believe it will be sufficient. For example, in NCEDL’s Multi-State Study of Prekindergarten, funded by the Institute of Education Sciences, classroom teachers were observed using the CLASS and ECERS-R and were provided with $100 gift certificates in appreciation (Clifford, Bryant, Early, Burchinal, & Winton, 2003). Likewise, in the Evaluation of the North Carolina More at Four Program, classroom teachers were observed using the CLASS, ECERS-R, and ELLCO and were offered $50 for their participation (Peisner-Feinberg, 2011). Both the FACES and Baby FACES studies also provide gifts valued at approximately $25 in appreciation of classroom teachers’ participation.


Grantees that agree to participate in the DRS Evaluation will be offered a $50 gift card per sampled center, up to $500. For example, if a grantee has one sampled center, it will be offered $50. If a grantee has three sampled centers, it will be offered $150. If a grantee has 10 or more sampled centers, it will be offered $500. This amount is consistent with previous information collections approved by OMB. For example, in NCEDL’s Multi-State Study of Prekindergarten, each center was offered a $50 gift certificate for its participation (Clifford et al., 2003). In the QUINCE study, an ACF-funded evaluation of training models for child care providers, each center was offered $50-100 for participation (Bryant et al., 2009). Finally, in both the FACES and Baby FACES studies, grantees were provided with $500 for their participation.



A10. Privacy of Respondents


Information collected will be kept private to the extent permitted by law. Respondents will be informed of all planned uses of data, that their participation is voluntary, and that their information will be kept private to the extent permitted by law. This privacy language is included in the study recruitment materials (Appendix O), in scripts for the DRS Telephone Interview: Program Directors (Appendix F), and on all written informed consent forms.


As specified in the contract, the Contractor shall protect respondent privacy to the extent permitted by law and will comply with all Federal and Departmental regulations for private information. The Contractor has developed a data security compliance plan that ensures the protection of respondents’ personally identifiable information. Data security and participant privacy will be maintained through appropriate training for all research staff regarding research ethics. All research staff have completed or will complete training and obtain certification in human subjects protections and will submit a copy of their institution’s IRB-approved training completion certificate (Urban Institute “Protecting Human Subjects at the Urban Institute” or UNC “Responsibilities of Staff in Human Subjects Research”) to the Principal Investigators. In addition, all research staff will be required to sign their institution’s confidentiality pledge (see Appendix S for the Urban Institute’s Staff Confidentiality Pledge and the University of North Carolina’s Responsibilities of Staff in Human Subjects Research). In this way, research staff will agree to abide by the informed consent process and not to divulge, publish, or otherwise reveal to unauthorized persons any information obtained during the study.


Verbal informed consent will be requested from Head Start program directors at the grantee level during the initial study recruitment (Appendix O1) and from center directors during center recruitment (Appendix O2). Head Start program directors who participate in follow-up telephone interviews will be asked to provide verbal informed consent prior to participation in the interview (Appendix F). Written informed consent will be requested from Head Start teachers while on-site, prior to classroom observations or teacher interviews (Appendix O3). Prior to the start of each on-site semi-structured interview for the qualitative portions of the study, the researchers will assure respondents that the information provided will be kept private to the extent permitted by law and will request written informed consent (Appendices G-M, O5-O6).


All records will be kept private to the extent permitted by law, and data will be separated from individual identifiers of participants. Each participant will be assigned a unique identification number, with the master list linking names and ID numbers stored separately from the data. Only project staff will have access to this list. As soon as possible after data collection, all personal identifiers will be removed from all data files and the tracking files will be destroyed to further prevent the possibility of identification of individuals and/or disclosure of private data. A secure tracking system using participant IDs will be used to connect pre-test and post-test data. Access to all forms of study data (electronic and hardcopy) will be restricted to research staff only. Each organization will protect data by storing electronic data in a secure, password-protected database, and all hardcopies of data will be stored in locked filing cabinets in the Principal Investigators’ offices. Any oral or written reports drawing on the study data will contain no identifying information that would link individuals to specific locations or data.


This study is also under the purview of the Institutional Review Boards (IRBs) of both the Urban Institute and the University of North Carolina-Chapel Hill. See Appendices Q and R.


This study is also seeking a Certificate of Confidentiality through the National Institutes of Health to confer further protections to study participants. With this Certificate, the study team cannot be forced to disclose information that may identify participants, even by a court subpoena, in any federal, state, or local civil, criminal, administrative, legislative, or other proceedings.


A11. Sensitive Questions


There are no sensitive questions in this data collection.


A12. Estimation of Information Collection Burden


Table A-4 shows the estimated annual burden hours of the information collection. Estimates are based on the length of the questionnaires and interviews and on the contractor’s experience with similar data collection efforts. This is a two-year information collection request, and the annual burden hours are estimated to be 669.


Total Burden Requested Under this Information Collection


Table A-4: Estimated Burden in Annualized Hours and Costs

Instrument (and appendix) | Total Respondents | Annual Respondents | Responses per Respondent | Avg. Burden Hours per Response | Annual Burden Hours | Avg. Hourly Wage | Total Annual Cost
Quality Measures Follow Up Interview: Teachers (C) | 560 | 280 | 1 | 0.4 | 112 | $14.79 | $1,656
Quality Measures Follow Up Interview: Center Directors (D) | 300 | 150 | 1 | 1.85 | 278 | $24.55 | $6,825
Quality Measures Follow Up Interview: Program Directors (E) | 70 | 35 | 1 | 1.1 | 39 | $24.55 | $957
DRS Telephone Interview: Program Directors (F) | 35 | 18 | 1 | 1.25 | 23 | $24.55 | $565
DRS In-Depth Interview: Agency Directors (G) | 15 | 8 | 1 | 1 | 8 | $24.55 | $196
DRS In-Depth Interview: Program Directors (H) | 15 | 8 | 1 | 1.5 | 12 | $24.55 | $295
DRS In-Depth Interview: Policy Council/Governing Body (I) | 75 | 38 | 1 | 1.5 | 57 | $24.55 | $1,399
DRS In-Depth Interview: Program Managers (J) | 45 | 23 | 1 | 1.5 | 35 | $24.55 | $859
Competition In-Depth Interview: Agency and Program Directors (K) | 18 | 9 | 1 | 1.25 | 11 | $24.55 | $295
Competition In-Depth Interview: Policy Council/Governing Body (L) | 45 | 23 | 1 | 1.5 | 35 | $24.55 | $859
Competition In-Depth Interview: Program Managers (M) | 27 | 14 | 1 | 1.5 | 21 | $24.55 | $516
Competition Data Capture Sheet (N) | 500 | 250 | 1 | 0.15 | 38 | $24.55 | $933
Estimated Annual Burden Sub-total | | | | | 669 | | $15,355

Note: Estimates may not sum to totals due to rounding.


Total Annual Cost


The estimated total annualized cost burden to respondents is $15,355 based on the burden hours and estimated hourly wage rates for each data collection instrument, as shown in the three right-most columns of Table A-4. These estimates are based on:

  • A mean hourly wage of $24.55 for Head Start program directors, center directors, and services managers and coordinators, based on “Education Administrators, Preschool and Child Care Centers/Programs,” as reported in the May 2012 U.S. Department of Labor, Bureau of Labor Statistics, Occupational Employment and Wage Estimates (http://www.bls.gov/oes/current/oes119031.htm). Similar wages were assumed for members of the governing body and policy council; and

  • A mean hourly wage of $14.79 for Head Start teachers and other front-line staff, based on “Preschool Teachers, Except Special Education,” as reported in the May 2012 U.S. Department of Labor, Bureau of Labor Statistics, Occupational Employment and Wage Estimates (http://www.bls.gov/oes/current/oes252011.htm). Note that an analysis of the 2011 PIR data suggests annual Head Start teacher salaries are similar to those reported by the BLS (Schmit, 2012).
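To make the derivation of the annualized figures concrete, the sketch below recomputes two rows of Table A-4 (annual burden hours = annual respondents × responses per respondent × hours per response; annual cost = burden hours × hourly wage). This is a minimal illustration only; the row values are copied from Table A-4 and the variable names are ours.

```python
# Illustrative recomputation of two rows from Table A-4; values are copied from the
# table, and the variable names are for illustration only.
rows = [
    # (instrument, annual respondents, responses per respondent, hours per response, hourly wage)
    ("Quality Measures Follow Up Interview: Teachers (C)", 280, 1, 0.4, 14.79),
    ("Quality Measures Follow Up Interview: Center Directors (D)", 150, 1, 1.85, 24.55),
]

for name, annual_respondents, responses, hours_per_response, wage in rows:
    annual_burden_hours = round(annual_respondents * responses * hours_per_response)
    annual_cost = round(annual_burden_hours * wage)
    print(f"{name}: {annual_burden_hours} hours, ${annual_cost:,}")
```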


A13. Cost Burden to Respondents or Record Keepers


There are no additional costs to respondents or record keepers.


A14. Estimate of Cost to the Federal Government


The total cost for the data collection activities under this current request will be $3,059,411. Annual costs to the Federal government will be $1,529,706 for the proposed data collection under this OMB clearance number.


A15. Change in Burden


This is a new data collection.


A16. Plan and Time Schedule for Information Collection, Tabulation and Publication


Analysis Plan


Analysis Plan for RQ1


The primary analysis will compare grantees that are and are not designated for competition. We will use this approach because designation status is inherently binary (regardless of how many deficiencies are found or conditions are failed). In addition, the analysis plan must reflect whether the measures are collected at the grantee, center, or classroom level, using a random effects model that accounts for the nesting of classrooms in centers and centers in grantees. This could be done with a hierarchical linear model (i.e., a general linear mixed model), survey sampling methods, or OLS with corrections for clustering. Analyses will compare the extent to which designated grantees have lower quality on the six Head Start quality constructs identified for this study (Table A-2), considered together if measures are sufficiently correlated to allow forming quality composites, and examined individually. Analyses will also create new quality composites based on the literature and team expertise and compare the grantees by designation status.
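As a minimal sketch of one of the acceptable specifications named above, the example below fits a mixed model with grantee-level random intercepts and a variance component for centers within grantees using statsmodels; the analysis file and column names (quality_score, designated, grantee_id, center_id) are illustrative assumptions, not the study’s actual data structures.

```python
# Sketch: compare designated vs. non-designated grantees on a classroom-level quality
# score, with random intercepts for grantees and for centers nested within grantees.
# The file and column names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("classroom_quality.csv")  # hypothetical classroom-level analysis file

model = smf.mixedlm(
    "quality_score ~ designated",               # fixed effect: DRS designation status (0/1)
    data=df,
    groups=df["grantee_id"],                    # grantee-level random intercepts
    vc_formula={"center": "0 + C(center_id)"},  # variance component for centers within grantees
)
result = model.fit()
print(result.summary())  # the coefficient on 'designated' is the adjusted group difference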


  1. DRS classification. First, our analysis plan compares grantees by DRS status (designated vs. not designated) on each of the independent measures proposed for this data collection to test whether the DRS is successfully identifying lower quality programs to compete for their grants. In these analyses, the mean scores for grantees designated to compete will be compared to the mean scores for grantees not designated to compete on the classroom quality measures, the health and safety checklist, the PAS subscale scores, and the Tuckman & Chang ratios. We will use analytic methods that account for variability within and between grantees. We have reasonable power to detect meaningful differences between the two designation groups; see pp. 8-9 of Supporting Statement B for power analysis information.


We will also examine the correlations among the independent quality measures. Each of the measures will be aggregated at the grantee level by computing means across classrooms or centers. We will examine correlations, first within construct and then across constructs.


Second, we will conduct discriminant analyses using all of the program quality measures aggregated to the grantee level. We will determine the extent to which these measures collectively discriminate between grantees that are and are not designated for competition, to address the overarching question of whether the DRS is ensuring that grants for Head Start grantees viewed as low quality by each criterion are placed into open competition. Third, we will categorize our independent measures of program quality based on professional standards. We will then compute the conditional probabilities that grantees scoring lower on these measures were designated for competition and, in addition, the conditional probabilities that grantees scoring higher on these measures were not designated for competition.
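The sketch below illustrates the discriminant analysis and conditional-probability computations described above using scikit-learn and pandas; the grantee-level file and its column names are assumptions for illustration.

```python
# Sketch: (a) discriminant analysis of designation status from grantee-level quality
# measures, and (b) conditional probabilities of designation given low scores.
# The file and column names are illustrative; 'designated' is 1 if the grantee was
# designated to compete, 0 otherwise.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

grantees = pd.read_csv("grantee_quality.csv")  # hypothetical grantee-level analysis file
measures = ["class_mean", "ecers_r_mean", "ecers_e_mean", "tsrs_mean", "pas_mean", "health_safety"]

# (a) How well do the independent measures, taken together, discriminate designation status?
lda = LinearDiscriminantAnalysis()
lda.fit(grantees[measures], grantees["designated"])
print("Classification accuracy:", lda.score(grantees[measures], grantees["designated"]))

# (b) Conditional probability that a grantee in the bottom quintile on a measure was
# designated for competition, and that a higher-scoring grantee was not designated.
for m in measures:
    low = grantees[m] <= grantees[m].quantile(0.20)
    p_designated_given_low = grantees.loc[low, "designated"].mean()
    p_not_designated_given_high = 1 - grantees.loc[~low, "designated"].mean()
    print(f"{m}: P(designated | low) = {p_designated_given_low:.2f}, "
          f"P(not designated | higher) = {p_not_designated_given_high:.2f}")
```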


  2. CLASS consistency. An issue of great concern in the field that the evaluation team can address is the reliability of the DRS CLASS scores. This issue will be examined by comparing the grantee-level CLASS domain scores from DRS monitoring with the CLASS domain scores collected by the evaluation team. We will correlate the scores to assess the extent to which they agree. We will also examine the cross-tabulations and compute Cohen’s kappa to determine the extent to which grantees with a CLASS domain score in the “lower quality” range according to the DRS are also in the “lower quality” range according to the evaluation team. Kappas are proposed, rather than simple percent agreement, to account for agreement that might occur by chance.
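A minimal sketch of this agreement analysis appears below; the file name, column names, and the illustrative low-quality cut point are assumptions, and the kappa is Cohen’s kappa as computed by scikit-learn.

```python
# Sketch: agreement between CLASS domain scores from DRS monitoring and from the
# evaluation team at the grantee level. The file name, column names, and the
# "lower quality" cut point are illustrative.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

scores = pd.read_csv("class_scores.csv")  # hypothetical file containing both score sources

# Correlation of the continuous domain scores
print("Pearson r:", scores["drs_instructional"].corr(scores["eval_instructional"]))

# Flag each grantee as "lower quality" (1) or not (0) under each source, then
# cross-tabulate and compute Cohen's kappa to adjust for chance agreement.
cut = 2.5  # illustrative low-quality threshold for the Instructional Support domain
drs_low = (scores["drs_instructional"] < cut).astype(int)
eval_low = (scores["eval_instructional"] < cut).astype(int)

print(pd.crosstab(drs_low, eval_low))
print("Cohen's kappa:", cohen_kappa_score(drs_low, eval_low))
```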


  3. DRS classification by individual criteria and aligned quality measures. In addition to examining the DRS at the global level, we will examine each condition to determine its level of agreement with an independent measure of quality. Table A-2 shows the various constructs measured by the DRS and the independent measure of quality selected to assess each construct. The findings from the 2012 DRS show that 123 grantees were designated for competition (see Table A-3 on p. 15), and two of the seven criteria – monitoring deficiencies and CLASS scores – accounted for over 99% of the cases in which agencies were designated for competition. First, we will compute descriptive statistics by DRS status. Second, our analysis plan will examine the quality of measurement of the DRS within two specific conditions. These analyses will test the extent to which grantees rated as lower quality on a specific condition of the DRS are also rated as lower quality on independent assessments of quality. We will focus on two DRS criteria: deficiencies and low CLASS scores. Chi-square analyses will test agreement between the DRS condition and the related measure in the evaluation battery. Table A-5 shows how we will match each DRS condition with our independent measurement. We will examine the extent to which a lower quality rating according to the DRS condition agrees with a lower quality rating according to our measurement. We will use standard definitions of low quality when they exist (e.g., 3 or lower on the ECERS or 2 and below on a PAS subscale) and the bottom quintile for any scale that does not have a priori cut points defining low quality. A power analysis was conducted (see Supporting Statement B, p. 9) and shows we have reasonable power to detect agreement.
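As a sketch of the chi-square agreement test for one of the pairings summarized in Table A-5 below (low CLASS Emotional Support from DRS monitoring vs. low ECERS Interactions from the evaluation battery), the example uses scipy; column names are illustrative, and the ECERS cut point of 3 follows the standard definition noted above.

```python
# Sketch: chi-square test of agreement between one DRS condition (low CLASS Emotional
# Support) and its paired independent measure (low ECERS Interactions), following the
# pairings in Table A-5. Column names are illustrative.
import pandas as pd
from scipy.stats import chi2_contingency

grantees = pd.read_csv("grantee_quality.csv")  # hypothetical grantee-level analysis file

# Low-quality indicator for the independent measure, using the a priori cut point of 3 or lower
grantees["low_ecers_interactions"] = (grantees["ecers_interactions"] <= 3).astype(int)

# 'drs_low_class_emotional' is assumed to be a 0/1 flag recorded from the DRS condition
table = pd.crosstab(grantees["drs_low_class_emotional"], grantees["low_ecers_interactions"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```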


Table A-5: DRS Condition and Independent Measures of Quality

Construct | DRS Measurement | Independent Measure of Quality
Emotional Support | Low CLASS Emotional Support Score | Low ECERS Interactions Scale
Instructional Support | Low CLASS Instructional Support Score | Low ECERS-E Total Score
Classroom Management | Low CLASS Classroom Management Score | Low TSRS Management Scale
Monitoring Deficiency | Deficiency in any area | Substantial problems on health and safety checklist; low PAS score on any subscale; low Tuckman & Chang ratio




  4. Calculating the Financial Vulnerability Ratios. The Tuckman and Chang (1991) financial vulnerability ratios will be computed for each organization in the sampled group for which data are available (nonprofit organizations). The ratios are calculated as follows:


Table A-6 (Tuckman & Chang Computation and Variables) lists the four ratios – Equity Ratio, Revenue Concentration, Administrative Cost Ratio, and Operating Margin – together with their computations and the Form 990 variables used to calculate them; see Appendix B4 for details.


These ratios will be compared across organizations designated to compete and organizations not designated to compete to determine whether the ratio scores differ significantly. The ratios can be computed only for nonprofit organizations, which means that this measure cannot be used to assess the approximately 30% of grantees that are administered through public school systems, government agencies, for-profit organizations, or tribal organizations (OHS, 2012). Due to their independence, however, nonprofit organizations are the ones most likely to experience financial insecurity.
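A minimal sketch of the ratio computations appears below, assuming the standard Tuckman and Chang (1991) definitions (net assets over total revenue; a Herfindahl-style index of revenue concentration; administrative expenses over total revenue; and net revenue over total revenue). The Form 990 field names used here are illustrative placeholders, not the actual NCCS variable names documented in Appendix B4.

```python
# Sketch: Tuckman & Chang (1991) financial vulnerability ratios for nonprofit grantees,
# using the standard definitions; the Form 990 field names below are illustrative
# placeholders, not the actual NCCS variable names.
import pandas as pd

f990 = pd.read_csv("form990.csv")  # hypothetical extract of Form 990 data from NCCS

# Equity ratio: net assets relative to total revenue
f990["equity_ratio"] = f990["net_assets"] / f990["total_revenue"]

# Revenue concentration: Herfindahl-style sum of squared revenue shares across sources
revenue_sources = ["contributions", "program_revenue", "investment_income", "other_revenue"]
shares = f990[revenue_sources].div(f990[revenue_sources].sum(axis=1), axis=0)
f990["revenue_concentration"] = (shares ** 2).sum(axis=1)

# Administrative cost ratio: management and general expenses relative to total revenue
f990["admin_cost_ratio"] = f990["management_general_expenses"] / f990["total_revenue"]

# Operating margin: surplus (revenue minus expenses) relative to total revenue
f990["operating_margin"] = (f990["total_revenue"] - f990["total_expenses"]) / f990["total_revenue"]

# Compare mean ratios for grantees designated vs. not designated to compete
ratios = ["equity_ratio", "revenue_concentration", "admin_cost_ratio", "operating_margin"]
print(f990.groupby("designated")[ratios].mean())
```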







Analysis Plan for RQ2


The analysis plan for RQ2 is designed to yield rigorous, objective findings that describe the range of ways that Head Start grantees understand and respond to the DRS. In addition, the analysis plan calls for some comparative analysis across themes to build hypotheses about the relationships between various types of DRS responses, views of the DRS, program characteristics, and contextual factors for the purpose of understanding whether, where, and how the DRS may be affecting Head Start program quality.


Following the telephone interviews (Spring 2014) and site visits (Fall 2014), interviewers will clean their notes to create targeted transcripts, meet as a team to debrief on experiences, and decide on a coding scheme that will be used to code the qualitative data. The coding scheme will include categories that systematically capture different types of grantee understanding of the DRS, actions taken in response to the DRS, and related training and technical assistance needs. Specific themes within these categories will be identified after the data are collected and preliminary within-case analysis is conducted. A small group of qualitative analysts will code each interview transcript and subsequently analyze the coded themes to identify patterns across grantees and differences in patterns based on program and contextual characteristics. Data on contextual characteristics for each case will be drawn both from interviews and from secondary data sources such as the PIR and census or other data about the community being served.


Coding and analysis will be done with the assistance of NVivo (QSR International), a software package designed to assist in managing, structuring, and analyzing qualitative data such as interview text through functions that support the classification, sorting, and comparing of text units. Analysts will code each interview independently following the predefined coding scheme. In addition to comparing coding and findings across sites, coding will be compared within each site to look for patterns in responses within a given grantee and to determine whether there were consistencies or inconsistencies in the information provided or differences in perspectives. Analysts will also compare sites according to their eventual designation status to explore whether outcomes on the DRS appear to be related to grantee views of, and responses to, the DRS.


Analysis Plan for RQ3


The CDCS data will be analyzed using descriptive statistics for each grant competition, as well as for the competition as a whole, including the types, auspices, and ages of organizations competing; previous affiliation status with Head Start; the headquarters location of the organization as compared to the location of the community; the presence of delegates, partnerships, and matching dollars; and the types of service delivery options. These analyses will enable us to learn more about the characteristics of the organizations competing and the characteristics of the organizations winning competitions, including the types of services they propose to offer and the numbers of children they propose to serve.
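A brief sketch of these descriptive summaries is shown below; the CDCS analysis file and its column names are illustrative assumptions.

```python
# Sketch: descriptive summaries of the Competition Data Capture Sheet (CDCS),
# one row per applicant. The file and column names are illustrative.
import pandas as pd

cdcs = pd.read_csv("cdcs.csv")  # hypothetical CDCS analysis file

# Number and composition of applicants in each grant competition
per_competition = cdcs.groupby("competition_id").agg(
    n_applicants=("applicant_id", "count"),
    n_incumbents=("is_incumbent", "sum"),
    n_nonprofits=("is_nonprofit", "sum"),
)
print(per_competition.describe())  # distribution of competition size and composition

# Characteristics of winning applicants across the competition as a whole
winners = cdcs[cdcs["won_award"] == 1]
print(winners["organization_type"].value_counts())
print(winners["children_proposed"].describe())
```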

As in RQ2, coding and analysis of the competition interviews will be done with the assistance of NVivo. Analysts will code each interview independently following a predefined coding scheme focused on topics indicated as important in the literature, such as the development or loss of partnerships and the development or loss of resources. In addition to comparing coding and findings across sites, coding will be compared within each site to look for patterns in responses within a given grantee and to determine whether there were consistencies or inconsistencies in the information provided or differences in perspectives.


The within-cohort data will also be linked and compared. For example, Cohort 3 will have retrospective interview data and actual data about the competition. Linking the two will help to explore the extent to which perceptions about the competitive process match the actual competition. Understanding where there are differences between perceptions and reality is important because perceptions tend to drive actions. Identifying where differences occur and whether patterns of differences exist will inform recommendations about how to align perceptions with reality, which could be important in incentivizing changes in quality.


Finally, the primary data collected will be linked to secondary data sources that describe the community, like census data. This will provide some context which may offer additional insights about the levels, types, and perceptions of competition.


Time Schedule and Publication


Table A-7: Time Schedule and Publication

Spring 2014 – DATA COLLECTION
  Instruments with Burden: Quality Measures Follow Up Interview: Teachers; Quality Measures Follow Up Interview: Center Directors; Quality Measures Follow Up Interview: Program Directors; DRS Telephone Interview: Program Directors; Competition Data Capture Sheet
  Instruments without Burden: CLASS, ECERS-R, ECERS-E, Adapted TSRS, PAS, and Health & Safety Checklist

Summer/Fall 2014 – DATA ANALYSIS
  Instruments with Burden: Quality Measures Follow Up Interview: Teachers; Quality Measures Follow Up Interview: Center Directors; Quality Measures Follow Up Interview: Program Directors; DRS Telephone Interview: Program Directors; Competition Data Capture Sheet
  Instruments without Burden: CLASS, ECERS-R, ECERS-E, Adapted TSRS, PAS, and Health & Safety Checklist
  Secondary Data: IRS Form 990 Data; Census Data
  Administrative Data: OHS PIR and Monitoring Data

Fall 2014 – DATA COLLECTION
  Instruments with Burden: DRS In-Depth Interview: Agency Directors; DRS In-Depth Interview: Program Directors; DRS In-Depth Interview: Policy Council/Governing Body; DRS In-Depth Interview: Program Managers

Winter/Spring 2015 – DATA ANALYSIS
  Instruments with Burden: Quality Measures Follow Up Interview: Teachers; Quality Measures Follow Up Interview: Center Directors; Quality Measures Follow Up Interview: Program Directors; DRS Telephone Interview: Program Directors; DRS In-Depth Interview: Agency Directors; DRS In-Depth Interview: Program Directors; DRS In-Depth Interview: Policy Council/Governing Body; DRS In-Depth Interview: Program Managers
  Instruments without Burden: CLASS, ECERS-R, ECERS-E, Adapted TSRS, PAS, and Health & Safety Checklist
  Secondary Data: IRS Form 990 Data; Census Data
  Administrative Data: OHS PIR Data, Monitoring Data, DRS Designation Data

Spring 2015 – DATA COLLECTION
  Instruments with Burden: Competition In-Depth Interview: Agency and Program Directors; Competition In-Depth Interview: Policy Council/Governing Body; Competition In-Depth Interview: Program Managers

Summer 2015 – DATA ANALYSIS
  Instruments with Burden: Competition In-Depth Interview: Agency and Program Directors; Competition In-Depth Interview: Policy Council/Governing Body; Competition In-Depth Interview: Program Managers; Competition Data Capture Sheet
  Secondary Data: Census Data
  Administrative Data: OHS PIR Data, Monitoring Data, DRS Designation Data

Fall 2015 – FINAL REPORT


A17. Reasons Not to Display OMB Expiration Date


All instruments will display the expiration date for OMB approval.


A18. Exceptions to Certification for Paperwork Reduction Act Submissions


No exceptions are necessary for this information collection.

References


Administration for Children and Families (2010). Head Start Impact Study: Final Report. Washington, DC: U.S. Department of Health and Human Services. Retrieved from: http://www.acf.hhs.gov/sites/default/files/opre/executive_summary_final.pdf


Administration for Children and Families. (2011). Report to Congress on the Final Head Start Program Designation Renewal System. Washington, DC: U.S. Department of Health and Human Services. Retrieved from http://eclkc.ohs.acf.hhs.gov/hslc/mr/rc/Head_Start_Designation_Renewal_System_Final_Rule.pdf


Aronson, S. S.  [ed]. (2002). Healthy young children: A manual for programs (4th ed). Washington, DC: National Association for the Education of Young Children.

Bierman, B. L., Domitrovich, C. E., Nix, R. L., Gest, S. D., Welsh, J. A., Greenberg, M. T., Blair, C., Nelson, K., & Gill, S. (2008). Promoting academic and social-emotional school readiness: The Head Start REDI Program. Child Development, 79, 1802-1817.

Bryant, D. M., Wesley, P. W., Burchinal, M., Sideris, J., Taylor, K., Fenson, C., & Iruka, I. U. (2009). The QUINCE-PFI study: An evaluation of a promising model for child care provider training: Final report. FPG Child Development Institute, Chapel Hill, NC.


California Childcare Health Program. (2005). CCHP Health and Safety Checklist – Revised. San Francisco, CA: University of California, San Francisco School of Nursing.


Cassidy, D. J., Hestenes, L. L., Hegde, A., Hestenes, S., & Mims, S. (2005). Measurement of quality in preschool child care classrooms: An exploratory and confirmatory factor analysis of the Early Childhood Environment Rating Scale-Revised. Early Childhood Research Quarterly, 20, 345-360.


Clifford, R., Bryant, D.M., Early, D.M., Burchinal, M.R., & Winton, P.J. (2003). National Center for Early Development and Learning. FPG Child Development Institute, Chapel Hill, NC.


Clifford, R. M., Reska, S., & Rossbach, H. (2010). Reliability and validity of the early childhood environment rating scale. Unpublished manuscript.


Denny, J. H., Hallam, R., & Homer, K. (2012). A multi-instrument examination of preschool classroom quality and the relationship between program, classroom, and teacher characteristics. Early Education and Development, 23, 678-696.


Groves, R.M., Presser, S., and Dipko, S. (2004). The role of topic interest in survey participation decisions. Public Opinion Quarterly, vol. 68(1): 2-31.


Groves, R.M., Couper, M.P., Presser, S. Singer, E., Tourangeau, G.P.A., and Nelson, L. (2006). Experiments in producing nonresponse bias. Public Opinion Quarterly, vol. 70(5): 720-736.


Harms, T., Clifford, R., & Cryer, D. (2005). The Early Childhood Environment Rating Scale (revised edition). New York: Teachers College Press.


Hatry, H. & Newcomer, K.E. (2010). In Wholey, J. S., Hatry, H., Newcomer, K.E. (Eds). Handbook of practical program evaluation, third edition (pp. 557-580). San Francisco, CA: John Wiley & Sons.


Hefetz, A. & Warner, M. (2011). “Contracting or Public Delivery? The Importance of Service, Market and Management Characteristics.” Journal of Public Administration Research and Theory, 22(2), 289-317.


Kincaid, J. (1991). “The competitive challenge to cooperative federalism: a theory of federal democracy.” In Kenyon, D. and Kincaid, J. (Eds.), Competition among State and Local Governments (pp. 87-114). Washington, D.C., Urban Institute Press.


Lower, J. K., & Cassidy, D. J. (2007). Child care work environments: The relationship with learning environments. Journal of Research in Childhood Education, 22(2), 189-204.


Mashburn, A. J., Pianta, R. C., Hamre, B. K., Downer, J. T., Barbarin, O. A., Bryant, D., Burchinal, M., & Early, D. (2008). Measures of classroom quality in prekindergarten and children’s development of academic, language, and social skills. Child Development, 79(3), 732-749.


Miles, M. B., Huberman, A. M., & Saldana, J. (2013). Qualitative Data Analysis: A Methods Sourcebook, Third Edition. Thousand Oaks, CA: Sage Publications.


NAEYC and NACCRRA. (2011). Early Childhood Education Professional Development: Training and Technical Assistance Glossary.


Office of Head Start. (2012). Program Information Reports [Data file]. Washington, DC: Administration for Children and Families, U.S. Department of Health and Human Services. Retrieved from http://eclkc.ohs.acf.hhs.gov/hslc/mr/pir.


Peisner-Feinberg, E. (2011). Evaluation of the North Carolina More at Four Program. FPG Child Development Institute, Chapel Hill, NC.


Pianta, R. C., La Paro, K., Hamre, B. K. (2008) Classroom Assessment Scoring System (CLASS). Baltimore, MD: Paul H. Brookes Publishing Co.


Raver, C. C., Domitrovich, C., Greenberg, M., Morris, P. A., & Mattera, S. (2012). Adapted Teaching Style Rating Scale. New York, NY: MDRC.


Rohacek, M. et al. (2010). Understanding Quality in Context: Child Care Centers, Communities, Markets, and Public Policy. Washington, DC: The Urban Institute. Retrieved from http://www.urban.org/publications/412191.html.


Schmit, S. (2012). Head Start Participants, Programs, Families and Staff in 2011. Washington, DC: Center for Law and Social Policy. Retrieved from: http://www.clasp.org/admin/site/publications/files/HSpreschool-PIR-2011-Fact-Sheet.pdf


Sylva, K., Siraj-Blatchford, I., & Taggart, B. (2003). Assessing quality in the early years: Early Childhood Environment Rating Scale-Extension (ECERS-E): Four curricular subscales. Stoke-on Trent, UK: Trentham Books.


Talan, T. N., & Bloom, P. J. (2011). Program Administration Scale, Second Edition. New York NY: Teachers College Press.


Trochim, W. M. K. & Donnelly, J. P. (2008). The research methods knowledge base third edition. Mason, OH: Cengage Learning, Atomic Dog.


Tuckman, H., & Chang, C. (1991). A methodology for measuring the financial vulnerability of charitable nonprofit organizations. Nonprofit and Voluntary Sector Quarterly, 20(4), 445-460.


Warner, M., & Hefetz, A. (2003). “Rural-urban differences in privatization: limits to the competitive state.” Environment and Planning, 21(5), 703-718.

