Background on Collection of Information
The purpose of the site visits will be to gather data about the quality of each site’s program practices from multiple sources, including two copyrighted, valid, and reliable observation tools and an interview with the site coordinator. Two site visitors, trained in the interview protocol and observation instruments, will visit each site for 2 days.
Procedures for On-Site Observation Instruments
The two observation tools will be administered by trained observers. These tools are the Early Childhood Environment Rating Scale–Revised (ECERS–R) and the Classroom Assessment Scoring System Pre-K (CLASS). The ECERS–R is a reliable and valid tool that can be used to assess the quality dimensions of prekindergarten and kindergarten classrooms, ranging from facility space and materials to programming and interpersonal features. It has the added benefit of assessing how provisions are made for children with disabilities, as well as how the materials and staff promote the acceptance of diversity. The CLASS was chosen to supplement the ECERS–R and provide further detail on the interpersonal interactions among staff and students and the quality of instruction provided to students. The CLASS has an explicit focus on instruction and the intent of instruction, which is not adequately captured using the ECERS–R. There are versions for both preschool and kindergarten classrooms, with a common metric across both versions so that scores can be compared. It is widely used to assess process measures in the classroom environment.
At least two trained observers will spend approximately 2 days at each site to administer the ECERS–R and the CLASS. On Day 1, each observer will use the ECERS–R; on Day 2, each will use the CLASS. Observers will work in pairs to facilitate monitoring of inter-rater reliability and scoring accuracy. All observers are certified in the use of both instruments and must meet a minimum level of inter-rater reliability (≥ .85).
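To illustrate how paired observation supports reliability monitoring, the sketch below compares two observers’ item-level scores against the .85 threshold noted above. The sample scores and the within-one-point agreement rule are illustrative assumptions; the study specifies only the minimum reliability level, not a particular computation.

# Illustrative sketch (Python), not the study's prescribed procedure:
# compare paired observers' item scores and check the result against
# the .85 inter-rater reliability threshold. Scores are hypothetical.

def within_one_agreement(rater_a, rater_b):
    """Proportion of items on which two raters score within 1 point of each other."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Raters must score the same set of items.")
    agreements = sum(abs(a - b) <= 1 for a, b in zip(rater_a, rater_b))
    return agreements / len(rater_a)

# Hypothetical item-level scores (1-7 scale) from the two paired observers.
observer_1 = [5, 4, 6, 3, 5, 7, 4, 5, 6, 5]
observer_2 = [5, 5, 6, 3, 4, 7, 4, 6, 6, 5]

reliability = within_one_agreement(observer_1, observer_2)
print(f"Inter-rater agreement: {reliability:.2f}")
print("Meets .85 threshold:", reliability >= 0.85)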
Activities for prekindergarten and kindergarten children may occur in more than one classroom at a site. For the purpose of the analysis, the observers will gather data in up to two classrooms; when more than two classrooms serve children in the targeted age group, two classrooms will be randomly selected from the total number available for observation. Each observer has graduate training in education, as well as experience with school systems and with qualitative data collection.
In addition to administering the ECERS–R and CLASS site observation instruments, the site visitors will interview the site coordinator using a semi-structured interview protocol of 10 questions. The purpose of this interview is to gather more in-depth information about program practices that can further explain and verify information provided by the site observation instruments.
Analyses (Observations and Interviews)
The analysis of the case studies will assess variation in the structure and content of offerings among centers serving young children and will help identify promising practices in afterschool programming. Drawing on the observation and interview data collected from the 40 sites, this information will be integrated into the body of the Implementation Report, which is organized around the key study questions. We describe our approach to each data source in the sections below.
Analysis of observation data. The ECERS–R consists of 43 items organized into 7 subscales: Space and Furnishings, Personal Care Routines, Language-Reasoning, Activities, Interaction, Program Structure, and Parents and Staff. The items and subscales are designed to assess classroom quality. The ECERS–R (short form) uses 10 items, randomly selected from the 43 at each administration, to assess the subscales. Items are scored on a 7-point scale anchored at 1 = inadequate, 3 = minimal, 5 = good, and 7 = excellent. Each subscale receives a score ranging from 1 to 7, based on the observations of the items within that subscale, and an average is then computed for each subscale scored.
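The short-form scoring logic described above can be sketched as follows; the item-to-subscale assignment used here is a hypothetical placeholder (the actual ECERS–R item map is defined by the instrument’s authors), and the observed scores are illustrative.

import random
from collections import defaultdict

# Hypothetical item-to-subscale map: the real ECERS-R assigns its 43 items
# to the 7 subscales named above; this placeholder simply spreads them evenly.
SUBSCALES = ["Space and Furnishings", "Personal Care Routines",
             "Language-Reasoning", "Activities", "Interaction",
             "Program Structure", "Parents and Staff"]
ITEM_TO_SUBSCALE = {item: SUBSCALES[item % 7] for item in range(1, 44)}

def score_short_form(observed_scores, n_items=10, seed=None):
    """Randomly sample items, then average the 1-7 scores within each subscale."""
    rng = random.Random(seed)
    sampled = rng.sample(sorted(observed_scores), n_items)
    by_subscale = defaultdict(list)
    for item in sampled:
        by_subscale[ITEM_TO_SUBSCALE[item]].append(observed_scores[item])
    return {sub: sum(vals) / len(vals) for sub, vals in by_subscale.items()}

# Hypothetical observation in which every item was scored "good" (5).
scores = {item: 5 for item in range(1, 44)}
print(score_short_form(scores, seed=1))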
Reliability for the ECERS–R, based on inter-correlations, ranges from 0.71 to 0.88 for the subscales and is 0.92 overall. Inter-rater agreement is monitored on a regular basis to keep reliability high. A panel of experts in early childhood education contributed to establishing the content validity of the items and subscales. Predictive validity has been assessed through correlations of ECERS–R scores with various measures of achievement or aptitude: the numeracy items correlate with the Woodcock-Johnson–R Applied Problems subtest; the language and literacy items correlate highly with the PPVT-III, the PPVT-R, and the Oral Expression Scale; and social outcomes, such as measures of independence, concentration, cooperation, and conformity, correlate with ECERS–R items measuring social interaction and support. ECERS–R scores also correlate highly with measures derived from the Teacher-Child Rating Scale.
The subscale scores will be used as a marker of classroom quality: programs with average subscale scores approaching 5-7 will be classified as higher quality, and those scoring below 3 will be classified as lower quality.
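A minimal sketch of this classification rule follows, assuming the cut points apply to a classroom’s average subscale score; the label for the middle band (scores from 3 up to 5) is an assumption, since the study text names only the higher- and lower-quality bands.

def classify_quality(average_subscale_score):
    """Map an average ECERS-R subscale score (1-7) to a quality band.
    The 'mid-range' label is an assumption; only the higher- and
    lower-quality bands are defined in the study text."""
    if average_subscale_score >= 5:
        return "higher quality"
    if average_subscale_score < 3:
        return "lower quality"
    return "mid-range (not classified in the study text)"

print(classify_quality(5.4))  # higher quality
print(classify_quality(2.6))  # lower quality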
The CLASS assesses three empirically derived domains: Emotional Support, Classroom Organization, and Instructional Support. Like the ECERS–R, the CLASS items are measured on a 1-7 scale, with 1 indicating a minimally effective characteristic and 7 indicating a highly effective characteristic. As with the ECERS–R, the item observations are scored, and a subscale score is computed for each domain and subdomain. The subdomains assessed for each of the 3 domains are Positive Climate, Negative Climate, Teacher Sensitivity, Regard for Student Perspective (Emotional Support); Behavior Management, Productivity, Instructional Learning Formats (Classroom Organization); and Concept Development, Quality of Feedback, and Language Modeling (Instructional Support). An overall total (average) is computed. Scores for each of the subdomains also range from 1 to 7, and are used similarly to the ECERS–R to classify overall quality within each area measured.
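To make the domain and dimension structure concrete, the sketch below encodes the CLASS domains and dimensions listed above and averages hypothetical 1-7 dimension scores into domain scores. The simple averaging shown here, including the handling of Negative Climate, is an illustrative simplification; actual CLASS scoring follows the publisher’s manual.

# CLASS domains and their dimensions, as listed above.
CLASS_DOMAINS = {
    "Emotional Support": ["Positive Climate", "Negative Climate",
                          "Teacher Sensitivity", "Regard for Student Perspective"],
    "Classroom Organization": ["Behavior Management", "Productivity",
                               "Instructional Learning Formats"],
    "Instructional Support": ["Concept Development", "Quality of Feedback",
                              "Language Modeling"],
}

def domain_scores(dimension_scores):
    """Average 1-7 dimension scores into one score per domain, plus an overall mean."""
    domains = {
        domain: sum(dimension_scores[dim] for dim in dims) / len(dims)
        for domain, dims in CLASS_DOMAINS.items()
    }
    domains["Overall"] = sum(domains.values()) / len(CLASS_DOMAINS)
    return domains

# Hypothetical dimension scores for one observed classroom.
example = {"Positive Climate": 6, "Negative Climate": 2, "Teacher Sensitivity": 5,
           "Regard for Student Perspective": 5, "Behavior Management": 6,
           "Productivity": 6, "Instructional Learning Formats": 5,
           "Concept Development": 3, "Quality of Feedback": 4, "Language Modeling": 4}
print(domain_scores(example))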
As with the ECERS–R, inter-rater reliability is monitored for the CLASS observers. Alpha reliabilities from a recent CLASS administration ranged from .52 to .96, with an overall alpha of .81; the lowest alphas were for the Instructional Learning Formats, Regard for Student Perspective, and Productivity dimensions. Inter-rater reliability is regarded as acceptable to high for the CLASS.
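For reference, the coefficient alpha values reported above are conventionally computed as in the sketch below; the item-score matrix is hypothetical and the formula is the standard Cronbach’s alpha, not output from the study’s own data.

from statistics import pvariance

def cronbach_alpha(item_scores):
    """Standard Cronbach's alpha: item_scores is a list of items, each a list
    of scores across observation cycles."""
    k = len(item_scores)
    sum_item_variances = sum(pvariance(item) for item in item_scores)
    totals = [sum(obs) for obs in zip(*item_scores)]
    return (k / (k - 1)) * (1 - sum_item_variances / pvariance(totals))

# Hypothetical 1-7 ratings: three items scored across five observation cycles.
items = [
    [5, 6, 4, 5, 6],
    [5, 5, 4, 6, 6],
    [4, 6, 4, 5, 7],
]
print(round(cronbach_alpha(items), 2))  # about 0.86 for these made-up ratings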
Summary tables of these data will be created by site and used to analyze hypotheses relating to the organization of instruction. On the basis of these summary tables, we will identify specific aspects of instruction that vary and aspects that are similar across sites.
Analysis of interview data. A total of 40 site coordinator interviews will be conducted on-site by the site observers. Interviews will be audio recorded and transcribed by the research team. The interview questions will yield important qualitative information about the questions of interest and will assist in interpreting the site observations. The questions and probes were specifically designed to address the primary research questions as well as the relevant areas of program quality, policies and practices, communication and collaboration, and staff development.
Analysis of the qualitative data will use a coding schema that first establishes the 10 interview questions as the broad categories under which question-specific data will be subcategorized. Each interview will be analyzed by two coders, who will use an open-coding approach to determine subcategories for each question and will then debrief to compare and reconcile their coding results. We will aggregate the coding results across the individual interviews to construct summary tables that reveal site-level patterns in the data, and we will analyze frequently occurring codes to further refine our analyses of these patterns. While analysis will initially be done at the site level, we will also look across centers for recurrent patterns and emerging themes. These data, along with the observation data, will inform the final Implementation Report as well as the final Implementation Guide.
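As an illustration of how coded interview data can be aggregated into the summary tables described above, the sketch below tallies code frequencies by question across sites; the code labels and data structure are hypothetical.

from collections import Counter

# Hypothetical coded interview data: for each site, the open codes assigned
# under each interview question (the broad category).
coded_responses = {
    "Site A": {"Q1": ["parent outreach", "snack program"], "Q2": ["staff turnover"]},
    "Site B": {"Q1": ["parent outreach"], "Q2": ["staff turnover", "training gaps"]},
}

def code_frequencies(coded):
    """Tally how often each code appears under each question across sites."""
    counts = {}
    for site_codes in coded.values():
        for question, codes in site_codes.items():
            counts.setdefault(question, Counter()).update(codes)
    return counts

for question, counter in code_frequencies(coded_responses).items():
    print(question, dict(counter))
# Q1 {'parent outreach': 2, 'snack program': 1}
# Q2 {'staff turnover': 2, 'training gaps': 1}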
Data from the Profile and Performance Information Collection System (PPICS) were used to generate the universe of 21st CCLC sites. Beginning with all sites in the universe (n = 8,900), the dataset was screened to include only those sites that were (1) active and in good standing; (2) serving prekindergarten and/or kindergarten children; and (3) not in the final year of their 21st CCLC grants. A total of 2,182 sites met these criteria. Estimates based on the funding now available suggest that 40 site visits are feasible, given the project costs and timeline. These sites will be chosen randomly from the 2,182 sites meeting our inclusion criteria; thus, each selected program will represent, on average, approximately 55 programs (2,182 ÷ 40 = 54.55).
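A sketch of this screening step is shown below, assuming a tabular PPICS extract; the column names (status, serves_prek_or_k, final_grant_year) are illustrative placeholders rather than actual PPICS field names.

import pandas as pd

# Hypothetical PPICS extract; column names are illustrative, not actual PPICS fields.
sites = pd.DataFrame({
    "site_id": [1, 2, 3, 4],
    "status": ["active", "active", "inactive", "active"],
    "serves_prek_or_k": [True, True, True, False],
    "final_grant_year": [False, True, False, False],
})

# Keep sites that are active, serve pre-K/K children, and are not in the
# final year of their 21st CCLC grant.
eligible = sites[
    (sites["status"] == "active")
    & sites["serves_prek_or_k"]
    & ~sites["final_grant_year"]
]
print(len(eligible), "eligible sites")  # 1 site in this toy extract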
B.2.1 Statistical Methodology for Stratification and Sample Selection
Sites will be randomly selected from the pool of 2,182 sites identified through the Profile and Performance Information Collection System (PPICS). Random sampling is necessary because no data are available to identify higher quality programs or programs that appear to be producing a particular set of child outcomes. Because this is a qualitative study, there are no set rules for determining sample size other than the need to collect as much useful data as can be gathered within the logistical constraints of budget, time, and accessibility. However, the study does have a sampling plan that takes these factors into consideration, as well as the possibility, common in qualitative studies, of incidental and unanticipated circumstances that cause attrition and inadequate or poor responses. For this study, 40 sites is the most feasible sample size and will offer the ability to uncover salient patterns of responses that address the key research questions.
Our stratification process involves five site characteristics:
Region of the Country (Midwest, Northeast, South, West)
Locale (Urban, Suburban/Large town, Rural/Small town)
Center Size (smaller—serving 10-100 students; larger—serving more than 100 students)
Years of Operation (less than 4 years, 4 or more years)
Type of Center (school based, not school based)
There are 96 possible combinations of these characteristics (4 × 3 × 2 × 2 × 2 = 96); however, 10 of the possible combinations are empty, leaving 86 populated cells. Observed cell counts within the population of programs range from 1 (e.g., Midwest/rural/smaller/4 or more years/not school-based) to 131 (West/suburban/larger/4 or more years/school-based), with a median of 14. To allow more proportional selection, the following procedure will be used:
Cells with counts less than 100 are pooled with similar cells until the stratum size is 100 or greater.
Cells are combined across Type of Center, initially. Should the combined count still not reach 100, cells are combined across Years of Operation. If necessary, they are combined across Center Size.
Sites from within each stratum will be randomly selected at a ratio of 1:55.
Using this approach produces 12 strata, as shown in the table below.
Combinations of Possible Site Characteristics in Sample Selection
Cell number | Stratum | Region | Locale | Center size | Years operating | Center type | Count | Combined cell count | Number to be selected
1 | 1 | Midwest | Urban | Larger | 4 or more | Not school based | 15 | 189 | 3
2 | | Midwest | Urban | Larger | 4 or more | School based | 121 | |
3 | | Midwest | Urban | Larger | Less than 4 | Not school based | 4 | |
4 | | Midwest | Urban | Larger | Less than 4 | School based | 49 | |
5 | 2 | Midwest | Urban | Smaller | 4 or more | Not school based | 19 | 154 | 3
6 | | Midwest | Urban | Smaller | 4 or more | School based | 84 | |
7 | | Midwest | Urban | Smaller | Less than 4 | Not school based | 8 | |
8 | | Midwest | Urban | Smaller | Less than 4 | School based | 43 | |
9 | 3 | Midwest | Suburban/Large Town | Larger | 4 or more | Not school based | 3 | 122 | 2
10 | | Midwest | Suburban/Large Town | Larger | 4 or more | School based | 37 | |
11 | | Midwest | Suburban/Large Town | Larger | Less than 4 | School based | 15 | |
12 | | Midwest | Suburban/Large Town | Smaller | 4 or more | Not school based | 11 | |
13 | | Midwest | Suburban/Large Town | Smaller | 4 or more | School based | 34 | |
14 | | Midwest | Suburban/Large Town | Smaller | Less than 4 | Not school based | 2 | |
15 | | Midwest | Suburban/Large Town | Smaller | Less than 4 | School based | 20 | |
16 | 4 | Midwest | Rural/Small Town | Larger | 4 or more | Not school based | 2 | 145 | 3
17 | | Midwest | Rural/Small Town | Larger | 4 or more | School based | 36 | |
18 | | Midwest | Rural/Small Town | Larger | Less than 4 | Not school based | 2 | |
19 | | Midwest | Rural/Small Town | Larger | Less than 4 | School based | 20 | |
20 | | Midwest | Rural/Small Town | Smaller | 4 or more | Not school based | 1 | |
21 | | Midwest | Rural/Small Town | Smaller | 4 or more | School based | 39 | |
22 | | Midwest | Rural/Small Town | Smaller | Less than 4 | Not school based | 1 | |
23 | | Midwest | Rural/Small Town | Smaller | Less than 4 | School based | 44 | |
24 | 5 | Northeast | Urban | Larger | 4 or more | Not school based | 9 | 106 | 2
25 | | Northeast | Urban | Larger | 4 or more | School based | 52 | |
26 | | Northeast | Urban | Larger | Less than 4 | Not school based | 7 | |
27 | | Northeast | Urban | Larger | Less than 4 | School based | 26 | |
28 | | Northeast | Urban | Smaller | 4 or more | Not school based | 3 | |
29 | | Northeast | Urban | Smaller | 4 or more | School based | 6 | |
30 | | Northeast | Urban | Smaller | Less than 4 | School based | 3 | |
31 | 6 | Northeast | Suburban/Large Town | Larger | 4 or more | School based | 10 | 121 | 2
32 | | Northeast | Suburban/Large Town | Larger | Less than 4 | Not school based | 2 | |
33 | | Northeast | Suburban/Large Town | Larger | Less than 4 | School based | 8 | |
34 | | Northeast | Suburban/Large Town | Smaller | 4 or more | School based | 14 | |
35 | | Northeast | Suburban/Large Town | Smaller | Less than 4 | School based | 6 | |
36 | | Northeast | Rural/Small Town | Larger | 4 or more | School based | 14 | |
37 | | Northeast | Rural/Small Town | Larger | Less than 4 | School based | 8 | |
38 | | Northeast | Rural/Small Town | Smaller | 4 or more | School based | 36 | |
39 | | Northeast | Rural/Small Town | Smaller | Less than 4 | Not school based | 1 | |
40 | | Northeast | Rural/Small Town | Smaller | Less than 4 | School based | 22 | |
41 | 7 | South | Urban | Larger | 4 or more | Not school based | 9 | 237 | 4
42 | | South | Urban | Larger | 4 or more | School based | 57 | |
43 | | South | Urban | Larger | Less than 4 | Not school based | 12 | |
44 | | South | Urban | Larger | Less than 4 | School based | 101 | |
45 | | South | Urban | Smaller | 4 or more | Not school based | 16 | |
46 | | South | Urban | Smaller | 4 or more | School based | 10 | |
47 | | South | Urban | Smaller | Less than 4 | Not school based | 13 | |
48 | | South | Urban | Smaller | Less than 4 | School based | 19 | |
49 | 8 | South | Suburban/Large Town | Larger | 4 or more | Not school based | 17 | 251 | 5
50 | | South | Suburban/Large Town | Larger | 4 or more | School based | 56 | |
51 | | South | Suburban/Large Town | Larger | Less than 4 | Not school based | 8 | |
52 | | South | Suburban/Large Town | Larger | Less than 4 | School based | 84 | |
53 | | South | Suburban/Large Town | Smaller | 4 or more | Not school based | 14 | |
54 | | South | Suburban/Large Town | Smaller | 4 or more | School based | 35 | |
55 | | South | Suburban/Large Town | Smaller | Less than 4 | Not school based | 13 | |
56 | | South | Suburban/Large Town | Smaller | Less than 4 | School based | 24 | |
57 | 9 | South | Rural/Small Town | Larger | 4 or more | School based | 53 | 207 | 4
58 | | South | Rural/Small Town | Larger | Less than 4 | Not school based | 3 | |
59 | | South | Rural/Small Town | Larger | Less than 4 | School based | 62 | |
60 | | South | Rural/Small Town | Smaller | 4 or more | Not school based | 4 | |
61 | | South | Rural/Small Town | Smaller | 4 or more | School based | 28 | |
62 | | South | Rural/Small Town | Smaller | Less than 4 | Not school based | 11 | |
63 | | South | Rural/Small Town | Smaller | Less than 4 | School based | 46 | |
64 | 10 | West | Urban | Larger | 4 or more | Not school based | 7 | 230 | 4
65 | | West | Urban | Larger | 4 or more | School based | 100 | |
66 | | West | Urban | Larger | Less than 4 | Not school based | 4 | |
67 | | West | Urban | Larger | Less than 4 | School based | 37 | |
68 | | West | Urban | Smaller | 4 or more | Not school based | 6 | |
69 | | West | Urban | Smaller | 4 or more | School based | 56 | |
70 | | West | Urban | Smaller | Less than 4 | Not school based | 1 | |
71 | | West | Urban | Smaller | Less than 4 | School based | 19 | |
72 | 11 | West | Suburban/Large Town | Larger | 4 or more | Not school based | 7 | 263 | 5
73 | | West | Suburban/Large Town | Larger | 4 or more | School based | 131 | |
74 | | West | Suburban/Large Town | Larger | Less than 4 | Not school based | 5 | |
75 | | West | Suburban/Large Town | Larger | Less than 4 | School based | 34 | |
76 | | West | Suburban/Large Town | Smaller | 4 or more | Not school based | 5 | |
77 | | West | Suburban/Large Town | Smaller | 4 or more | School based | 61 | |
78 | | West | Suburban/Large Town | Smaller | Less than 4 | Not school based | 1 | |
79 | | West | Suburban/Large Town | Smaller | Less than 4 | School based | 19 | |
80 | 12 | West | Rural/Small Town | Larger | 4 or more | Not school based | 3 | 157 | 3
81 | | West | Rural/Small Town | Larger | 4 or more | School based | 34 | |
82 | | West | Rural/Small Town | Larger | Less than 4 | School based | 33 | |
83 | | West | Rural/Small Town | Smaller | 4 or more | Not school based | 6 | |
84 | | West | Rural/Small Town | Smaller | 4 or more | School based | 28 | |
85 | | West | Rural/Small Town | Smaller | Less than 4 | Not school based | 3 | |
86 | | West | Rural/Small Town | Smaller | Less than 4 | School based | 50 | |
Totals | | | | | | | 2,182 | 2,182 | 40
Within each of the 12 strata, from 2 to 5 sites will be randomly picked, for a total of 40 sites. Should a selected center choose not to participate in the study, it will be replaced from the same stratum. We believe this procedure will provide a reasonably representative sample of programs from across the spectrum of 21st CCLC early childhood programs for this qualitative/descriptive study.
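A minimal sketch of the within-stratum selection follows, using the 12 stratum sizes from the table above. Draws are allocated by rounding each stratum’s share of the overall 1-in-55 sampling rate, which reproduces the per-stratum targets in the table and totals 40; the placeholder site identifiers and the use of Python’s random module are illustrative only.

import random

# Stratum sizes (combined cell counts) from the table above; total = 2,182.
STRATA = {1: 189, 2: 154, 3: 122, 4: 145, 5: 106, 6: 121,
          7: 237, 8: 251, 9: 207, 10: 230, 11: 263, 12: 157}
TOTAL_SITES = sum(STRATA.values())   # 2,182
TARGET_VISITS = 40                   # overall rate of roughly 1 site in 55

def allocate_and_select(strata, target, seed=None):
    """Allocate draws proportionally to stratum size, then randomly sample sites.
    Site IDs here are placeholders (stratum, index); in practice they would be
    the PPICS identifiers of the sites within each stratum."""
    rng = random.Random(seed)
    selections = {}
    for stratum, size in strata.items():
        n_draws = round(size * target / TOTAL_SITES)
        site_ids = [(stratum, i) for i in range(1, size + 1)]
        selections[stratum] = rng.sample(site_ids, n_draws)
    return selections

picked = allocate_and_select(STRATA, TARGET_VISITS, seed=2011)
print({stratum: len(sites) for stratum, sites in picked.items()})
print("Total selected:", sum(len(v) for v in picked.values()))  # 40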
B.2.2 Case Study: Estimation Procedure
It is difficult to estimate the potential magnitude of nonresponse bias. As indicated above, if a selected center chooses not to participate in the study, it will be replaced from the same stratum.
B.2.3 Degree of Accuracy Needed
The goal of the case studies is to gather data about program services and activities to inform both the Implementation Report and the Implementation Guide. Therefore, we will not be using statistical analyses that require a high degree of precision.
B.2.4 Unusual Problems Requiring Specialized Sampling Procedures
None.
B.2.5 Use of Periodic Data Collection Cycles to Reduce Burden
None.
Sites will be contacted first via e-mail and then by a follow-up phone call. The e-mails will provide information about the study and what participation entails, reminding recipients that their participation is voluntary. The follow-up phone calls will reiterate this information, give potential participants an opportunity to ask further questions about participation, and allow the site visits to be scheduled.
SEI and Children’s Institute pilot tested the Site Coordinator Interview form with up to 5 individuals in fall 2010. After pilot testing, the interview protocol was streamlined with the goal of reducing administration time.
Roy Walker, M.B.A., Project Director
Synergy Enterprises, Inc.
8757 Georgia Avenue, Suite 1440
Silver Spring, MD 20910
(240) 485-1985
Sherri Lauver, Ph.D., Principal Investigator
Synergy Enterprises, Inc.
8757 Georgia Avenue, Suite 1440
Silver Spring, MD 20910
(585) 355-8506
Contact Information: Children’s Institute, Inc.
Dirk Hightower, Ph.D., Subcontract Manager
Children’s Institute, Inc.
274 North Goodman Street, Suite D103
Rochester, NY 14607
(585) 295-1000
Marjorie Allan
Children’s Institute, Inc.
274 North Goodman Street, Suite D103
Rochester, NY 14607
(585) 295-1000
Bohdan S. Lotyczewski
Children’s Institute, Inc.
274 North Goodman Street, Suite D103
Rochester, NY 14607
(585) 295-1000
Barnett, W., Lamy, C., and Jung, K. (2005). The Effects of State Prekindergarten Programs on Young Children’s School Readiness in Five States. New Brunswick, NJ: Rutgers University, National Institute for Early Education Research.
Bredekamp, S., and Copple, C. (1997). Developmentally Appropriate Practice in Early Childhood Programs (rev.). Washington, DC: National Association for the Education of Young Children.
Burchinal, M., Vandergift, N., Pianta, R., and Mashburn, A. (2010). Threshold Analysis of Association Between Child Care Quality and Child Outcomes for Low-Income Children in Pre-K Programs. Early Childhood Research Quarterly, 25(2): 166-176.
Campbell, F.A., Ramey, C.T., Pungello, E., Sparling, J., and Miller-Johnson, S. (2002). Early Childhood Education: Young Adult Outcomes From the Abecedarian Project. Applied Developmental Science, 6(1): 42-57.
Dynarski, M., James-Burdumy, S., Moore, M., Rosenberg, L., Deke, J., and Mansfield, W. (2003). When Schools Stay Open Late: The National Evaluation of the 21st Century Community Learning Centers Program: Final Report. U.S. Department of Education, National Center for Education Evaluation and Regional Assistance. Washington, DC: U.S. Government Printing Office.
Dynarski, M., James-Burdumy, S., Moore, M., Rosenberg, L., Deke, J., and Mansfield, W. (2004). When Schools Stay Open Late: The National Evaluation of the 21st Century Community Learning Centers Program: New Findings. U.S. Department of Education, National Center for Education Evaluation and Regional Assistance. Washington, DC: U.S. Government Printing Office.
Early, D.M., Bryant, D.M., Pianta, R.C., Clifford, R.M., Burchinal, M.R., Ritchie, S., Howes, C., and Barbarin, O. (2006). Are Teachers’ Education, Major and Credentials Related to Classroom Quality and Children’s Academic Gains in Pre-kindergarten? Early Childhood Research Quarterly, 21: 174-195.
Early, D.M., Maxwell, K.L., Burchinal, M., Bender, R.H., Ebanks, C., Henry, G.T., and Zill, N. (2007). Teachers’ Education, Classroom Quality, and Young Children’s Academic Skills: Results From Seven Studies of Preschool Programs. Child Development, 78(2): 558-580.
Epstein, D. (2009). The Changing Landscape: National Trends in Quality Standards in State-Funded Prekindergarten Initiatives. New Brunswick, NJ: Rutgers University, National Institute for Early Education Research. Retrieved from www.nieer.org
Hamre, B., and Pianta, R. (2007). Learning Opportunities in Preschool and Early Elementary Classrooms. In R. Pianta, M. Cox, and K. Snow (Eds.), School Readiness and the Transition to Kindergarten in the Era of Accountability (pp. 49-83). Baltimore, MD: Brookes.
Harms, T., Clifford, R., and Cryer, D. (1998). Early Childhood Environment Rating Scale–Revised. New York: Teachers College Press.
Huffman, L.R., and Speer, P.W. (2000). Academic Performance Among At-Risk Children: The Role of Developmentally Appropriate Practices. Early Childhood Research Quarterly, 15(2): 176-184.
Lambert, R., Abbot-Shim, M., and Sibley, A. (2006). Evaluating the Quality of Early Childhood Educational Settings. In B. Spodek and O. Saracho (Eds.), Handbook of Research on the Education of Young Children (pp. 457-475). Mahwah, NJ: Erlbaum.
Lieber, J., Butera, G., Hanson, M., Palmer, S., Horn, E., Czaja, C., Diamond, K., Goodman-Jansen, G., Daniels, J., Gupta, S., and Odom, S. (2009). Factors That Influence the Implementation of a New Preschool Curriculum: Implications for Professional Development. Early Education and Development, 20: 456-481.
Loeb, S., Fuller, B., Kagan, S., and Carrol, B. (2004). Child Care in Poor Communities: Early Learning Effects of Type, Quality, and Stability. Child Development, 75: 47-65.
Love, J.M., Harrison, L., Sagi-Schwartz, A., van Ijzendoorn, M.H., Ross, C., Ungerer, and Chazan-Cohen, R. (2003). Child Care Quality Matters: How Conclusions May Vary With Context. Child Development, 74(4): 1021-1033.
Love, J., Tarullo, L., Raikes, H., and Chazan-Cohen, R. (2006). Head Start: What Do We Know About Its Effectiveness? What Do We Need to Know? In K. McCartney and D. Phillips (Eds.), Handbook of Early Childhood Development (pp. 550-575). Malden, MA: Blackwell.
National Association for the Education of Young Children (1997). Position Statement: Developmentally Appropriate Practice in Early Childhood Programs Serving Children From Birth Through Age 8. Washington DC: Author.
NICHD Early Child Care Research Network. (1996). Characteristics of Infant Child Care: Factors Contributing to Positive Caregiving. Early Childhood Research Quarterly, 11: 269-306.
NICHD Early Child Care Research Network. (2000a). Child Care and Children’s Peer Interaction at 24 and 36 Months: The NICHD Study of Early Child Care. Child Development, 72(5): 1478-1500.
NICHD Early Child Care Research Network. (2000b). The Relation of Child Care to Cognitive and Language Development. Child Development, 71(4): 960-980.
NICHD Early Child Care Research Network. (2002). Child-Care Structure → Process → Outcome: Direct and Indirect Effects of Child-Care Quality on Young Children’s Development. Psychological Science, 13(3): 199-206.
NICHD Early Child Care Research Network. (2003). Does Quality of Child Care Affect Child Outcomes at Age 4 1/2? Developmental Psychology, 39(3): 451-469.
NICHD Early Child Care Research Network. (2004). Type of Child Care and Children’s Development at 54 Months. Early Childhood Research Quarterly, 19(2): 203-230.
Peisner-Feinberg, E.S., Burchinal, M.R., Clifford, R.M., Culkin, M.L., Howes, C., Kagan, S.L., and Yazejian, N. (2001). The Relation of Preschool Child-Care Quality to Children’s Cognitive and Social Development Trajectories Through Second Grade. Child Development, 72(5): 1534-1553.
Penuel, W., and McGhee, R. Jr. (2010). 21st Century Community Learning Centers: Descriptive Study of Program Practices. Retrieved from http://policyweb.sri.com/cep/publications/SRI_21stCenturyCommunityLearningCenters_FinalReport2010.pdf
Pianta, R.C. (1999). Enhancing Relationships Between Children and Teachers. Washington, DC: American Psychological Association.
Pianta, R.C., Howes, C., Burchinal, M., Bryant, D., Clifford, R., Early, D., and Barbarin, O. (2005). Features of Pre-kindergarten Programs, Classrooms, and Teachers: Do They Predict Observed Classroom Quality and Child-Teacher Interactions? Applied Developmental Science, 9(3): 144-159.
Pianta, R., LaParo, K., and Hamre, B. (2007). Classroom Assessment Scoring System (CLASS): K-3. Baltimore, MD: Brookes Publishing Co.
Ramey, C. (2000). Helping Children Get Started Right: The Benefits of Early Childhood Intervention. In K. Bogenschneider and J. Mills (Eds.), Helping Poor Kids Succeed: Welfare, Tax, and Early Intervention Policies (Wisconsin Family Impact Seminar Briefing Report No. 14, pp. 21-28). Madison, WI: University of Wisconsin Center for Excellence in Family Studies.
Reynolds, A., Temple, J., Robertson, D., and Mann, E. (2002). Age 21 Cost-Benefit Analysis of Title I Chicago Child-Parent Centers. Education Evaluation and Policy Analysis, 24: 267-303.
Schweinhart, L., Montie, J., Xiang, Z., Barnett, W., Belfield, C.R., and Nores, M. (2005). Lifetime Effects: The HighScope Perry Preschool Study Through Age 40. Ypsilanti, MI: HighScope Press.
Stipek, D. (2004). Teaching Practices in Kindergarten and First Grade: Different Strokes for Different Folks. Early Childhood Research Quarterly, 19: 548-568.
Vandell, D.L. (2004). Early Child Care: The Known and the Unknown. Merrill-Palmer Quarterly, 50(3): 387-414.
Weikart, D. (1998). Changing Early Childhood Development Through Educational Intervention. Preventive Medicine, 27: 233-237.
Whitehurst, G., Zevenbergen, A., Crone, D., Schultz, M., Velting, O., and Fischel, J. (1999). Outcomes of an Emergent Literacy Intervention From Head Start Through Second Grade. Journal of Educational Psychology, 91: 261-272.
Zaslow, M., Halle, T., Martin, L., Cabrera, N., Calkins, J., Pitzer, L., and Margie, N.G. (2006). Child Outcome Measures in the Study of Child Care Quality. Evaluation Review, 30(5): 577-610.