OMB: 0970-0515


Study of Coaching Practices in Early Care and Education Settings



OMB Information Collection Request

New Collection




Supporting Statement

Part B

March 2018


Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


OPRE Points of Contact:

Wendy DeCourcey, Ph.D. (Federal Project Officer)

Tracy Carter Clopet, Ph.D. (Contract Project Specialist)


CONTENTS

B1. Respondent Universe and Sampling Methods

Target population

Design of the sample

Sample size and precision of key estimates for the descriptive study

Expected item nonresponse rate for critical questions

B2. Procedures for Collection of Information

B3. Methods to Maximize Response Rates and Deal with Nonresponse

Expected Response Rates

Dealing with Nonresponse

Maximizing Response Rates

B4. Tests of Procedures or Methods to be Undertaken

B5. Individual(s) Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


TABLES

Table B.1. Selection factors guiding SCOPE’s purposive sampling approach within states and their information sources

Table B.2. Sample sizes for ECE program administrators, coaches, and teachers across ECE settings

Table B.3. MDDs for subgroup comparisons when incorporating covariates in the analysis

Table B.4. Expected response rates and number of responses, by data source

Contents of OMB Information Collection Request for SCOPE

Supporting Statement Part A

Supporting Statement Part B

Appendices

Appendix A. Coaching session observation protocol

Appendix B. 60-Day Federal Register Notice and Comments

Appendix C. Study recruitment materials

Appendix D. Mathematica Confidentiality Pledge


Attachments (Study Instruments)

Attachment 1. State coaching informant interview protocol

Attachment 2. ECE setting eligibility screener

Attachment 3. Center director survey

Attachment 4. Coach survey

Attachment 5. Teacher/FCC provider survey

Attachment 6. Center director case study semi-structured interview protocol

Attachment 7. Coach case study semi-structured interview protocol

Attachment 8. Teacher/FCC provider case study semi-structured interview protocol

Attachment 9. Coach supervisor case study semi-structured interview protocol



B1. Respondent Universe and Sampling Methods

The Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (HHS) seeks approval to collect descriptive information for the Study of Coaching Practices in Early Care and Education Settings (SCOPE) project. The goal of this information collection is to identify how professional development coaching practices for early care providers are implemented, and how they vary, in early care and education (ECE) classrooms supported by Head Start grants or serving children who receive Child Care and Development Fund (CCDF) subsidies. First, in spring 2018 (pending OMB approval), we will collect information from state-level coaching entities through state coaching informant interviews in different states. Second, we will conduct one round of survey data collection in fall 2018 through winter 2019. Third, we will conduct case studies that include semi-structured interviews to better understand factors that influence the coaching approaches identified through the survey data collection. The semi-structured interviews will occur in fall 2019 through winter 2020.

Proposed data collection activities include: interviews with state-level coaching informants (45) to inform selection of states for the study; web-based surveys for the descriptive study with ECE center directors (60), teachers (an average of two lead teachers per center, for a total of 120), FCC providers (40), and coaches (90); and case studies in 12 sites (centers and FCCs). The case studies will include interviews with ECE center directors, coaches, coach supervisors, teachers, and FCC providers (48 total interviews); surveys with teachers and FCC providers (12); and observations of coach-teacher/FCC provider interactions.

Target population

The target population for this study includes family child care (FCC) providers, center directors and classroom teachers, professional development coaches, and coach supervisors. The participating FCC providers and center teachers must be receiving coaching for the purpose of improving classroom practice. The settings in which the directors, teachers, and providers work must be receiving Head Start funding or serving children who receive CCDF subsidies, and they must also serve preschool-age children.

Design of the sample

To meet the objectives for the study, we will use a multistep sampling and recruiting approach that includes (1) selecting states in which to carry out data collection activities and (2) selecting and recruiting coaching providers and coaches, center directors and teachers, and FCC providers within states to participate in the descriptive study and/or case studies. Sampling will be purposive in both steps to ensure variation in the state policy context, setting type, and setting funding sources, as well as funders and providers of coaching, coaching features, and key characteristics of coaches and teachers.

The first sampling step—selecting states—will proceed in two phases. Across those two phases, we will identify seven states that have one or more defined, classroom-based (i.e., focused on improving teacher practice in the classroom) coaching models offered to centers and FCC homes that are supported by Head Start grants or serving children who receive CCDF subsidies. A defined coaching model is one in which there is documentation describing the model or process that coaches follow when interacting with providers (e.g., a coaching manual or protocol). As described below, we will also seek to identify and include in SCOPE less defined coaching models, but having at least one defined model makes a state eligible for SCOPE. States should also have leadership within the CCDF agency, Head Start grantee(s), and/or key coaching organizations that are interested in the goals of this study, and have some type of coaching in all settings of interest (FCCs, child care centers, and Head Start centers).

In the first phase of state selection, we will narrow the list of potential states by (1) seeking nominations from three to six experts and stakeholders with knowledge about classroom coaching efforts occurring across the United States and (2) reviewing publicly available information on coaching in states (for example, CCDF Plans, the Quality Performance Report (QPR), the QRIS Compendium, and public websites). Based on what we learn about the variety of coaching approaches within and across states, we will narrow the list of states to about nine.

The second phase of state selection will take place after OMB approval. Using the state coaching informant interview protocol (Attachment 1), we will reach out directly to coaching informants in each of the nine states being considered for the study. We will begin by contacting the CCDF administrator or Head Start State Collaboration Office director in each of the nine states to describe the SCOPE project and learn about coaching in the state. Based on those initial conversations, and depending on the state’s structure for providing coaching, we will contact additional coaching informants (in particular, those who fund or provide coaching) to learn more about the different kinds of coaching offered in the state, who offers the coaching, who are the intended recipients of the coaching, and whether there is administrative data available about coaching recipients in the state. Using the information gathered through these discussions, we will identify seven states for inclusion in the study.

Once we have selected our state sample, we will move to the second step of sampling: identifying the sample of coaches, center directors and teachers, and FCC providers for the descriptive study. Table B.1 provides a list of the selection factors guiding the purposive sampling approach within states. We will use two strategies to identify our sample:

  • First, we will obtain existing administrative data on the characteristics of coaches and of the ECE settings receiving coaching, including whether those settings receive funding from Head Start or serve children who receive CCDF subsidies, from agencies and organizations that fund or provide coaching, and then use these data to identify coaches, center directors and teachers, and FCC providers for the study. We will request administrative data in its existing format; we will not request specific data elements or ask that the data be presented in any particular format.

  • Second, if coaching entities within the states are not able or willing to share administrative data, we will ask them to distribute information about SCOPE (the flyers and fact sheets in Appendix C) to their coaches, and possibly to the center directors, teachers, and FCC providers who are receiving coaching. In addition, we will seek nominations for coaches and ECE settings to include in the study from informants in each state.

Table B.1. Selection factors guiding SCOPE’s purposive sampling approach within states and their information sources

Sample selection factor

How the selection factors inform sampling

Setting type (center versus FCC)

Eligibility criterion and primary selection factor: evenly distribute within and across the seven states

Setting funding source (Head Start versus CCDF)

Eligibility criterion and primary selection factor: evenly distribute across the seven states, and aim for approximately even distribution within states

Coaching providers and funding sources

Primary selection factor: based on information gathered during the state selection process, develop targets for different types of coaching providers and funding sources across the seven states or the full sample; within each state, target a minimum of 3 coaching providers

Coaching models: defined versus less defined

Primary selection factor: based on information gathered during the state selection process, develop targets for the number of settings receiving more versus less defined coaching within each state

Coaching models: including various features

Eligibility criterion and secondary selection factor: track during state, coaching provider, and center or FCC selection and screening to ensure variation across the seven states or the full sample

Coach characteristic: tenure with current coaching provider (employer)

Eligibility criterion and secondary selection factor: track during coaching provider and center or FCC selection and screening to ensure variation across the seven states or the full sample

Teacher/FCC provider characteristic: experience in ECE

Secondary selection factor: track during coaching provider and center or FCC selection and screening to ensure variation across the seven states or the full sample

CCDF = Child Care and Development Fund; ECE = early care and education; FCC = family child care; OCC = Office of Child Care; OHS = Office of Head Start; SCOPE = Study of Coaching Practices in Early Care and Education Settings.

Once we have a list of potentially eligible ECE settings, we will contact them to screen for eligibility (Attachment 2). To be eligible, the ECE setting must be (1) currently operating, (2) receiving funding from Head Start or serving children receiving child care subsidies, and (3) receiving classroom-based coaching for teachers of preschoolers/FCC providers caring for preschoolers. If eligible, we will assess the setting’s willingness to participate in the study and confirm or collect information about the preschool teachers and FCC providers, their coaches, and the coaching that takes place in their setting (that is, on the sample selection factors identified in Table B.1).
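To make the screening logic concrete, the sketch below shows, in Python, how the three eligibility criteria could be checked against screener responses. It is a minimal illustration only: the field names and the helper function are hypothetical and are not part of the actual screener instrument (Attachment 2) or any SCOPE system.

# Illustrative sketch only: applying the three ECE setting eligibility criteria
# described above. Field names are hypothetical, not actual screener items.
def is_eligible(setting):
    """Return True if a screened ECE setting meets all three eligibility criteria."""
    # Criterion 1: the setting is currently operating.
    currently_operating = setting.get("currently_operating", False)
    # Criterion 2: Head Start funding OR serves children receiving CCDF subsidies.
    funding_ok = setting.get("head_start_funded", False) or setting.get("serves_ccdf_children", False)
    # Criterion 3: classroom-based coaching for preschool teachers/FCC providers.
    coaching_ok = setting.get("receives_classroom_coaching_preschool", False)
    return currently_operating and funding_ok and coaching_ok

# Example usage with a made-up screener record:
example = {
    "currently_operating": True,
    "head_start_funded": False,
    "serves_ccdf_children": True,
    "receives_classroom_coaching_preschool": True,
}
print(is_eligible(example))  # True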

Using all of the information we collected during state selection and the ECE setting screening process, we will then purposively select a final sample to recruit for the descriptive study. We will aim for variation across the primary and secondary sample selection factors in Table B.1. At minimum, we expect to include at least three coaching providers/coaching approaches in each state, including both defined and less-defined approaches and reflecting different funders of coaching. We will also recruit coach-teacher and coach-FCC provider pairs. If we identify more settings within a state that are eligible for our study or more coaches and/or teachers within those settings than we aim to recruit, we will randomly select participants.
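The sketch below illustrates, in Python, one way the random selection within purposive strata could be implemented when more eligible settings are identified than the study aims to recruit. The record fields, stratum keys, and targets shown are hypothetical and are not drawn from SCOPE's actual sampling frame or systems.

# Illustrative sketch only: random selection of eligible ECE settings within
# purposive strata defined by the primary selection factors in Table B.1.
import random

def select_within_strata(eligible_settings, targets, seed=20180327):
    """Randomly select up to targets[stratum] settings from each stratum.

    eligible_settings: list of dicts with hypothetical 'id', 'state',
                       'setting_type', and 'funding_source' keys.
    targets: dict mapping (state, setting_type, funding_source) to the
             number of settings needed from that stratum.
    """
    rng = random.Random(seed)
    # Group screened, eligible settings by the primary selection factors.
    strata = {}
    for s in eligible_settings:
        key = (s["state"], s["setting_type"], s["funding_source"])
        strata.setdefault(key, []).append(s)

    selected = []
    for key, members in strata.items():
        n_needed = targets.get(key, 0)  # strata without a target contribute none here
        if len(members) <= n_needed:
            selected.extend(members)                        # take all if at or under target
        else:
            selected.extend(rng.sample(members, n_needed))  # otherwise take a random subset
    return selected

# Example usage with made-up records:
frame = [
    {"id": 1, "state": "A", "setting_type": "center", "funding_source": "Head Start"},
    {"id": 2, "state": "A", "setting_type": "center", "funding_source": "Head Start"},
    {"id": 3, "state": "A", "setting_type": "FCC", "funding_source": "CCDF"},
]
targets = {("A", "center", "Head Start"): 1, ("A", "FCC", "CCDF"): 1}
print(select_within_strata(frame, targets))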

We will use a multistep approach to select case study sites. We will analyze data from the descriptive web survey to identify the more common coaching approaches, and we will include 12 sites (both centers and FCCs) where a subset of these common approaches is being used. To understand the influence of various contextual factors on implementing coaching, we will work to minimize other variables that can influence coaching implementation; the coaching approach itself is one of the most important of those sources of variation, so focusing on a small set of common approaches helps hold it relatively constant across sites.

Sample size and precision of key estimates for the descriptive study

Table B.2 shows the expected sample size for the descriptive study (web surveys) by setting type and funding source, and Table B.3 shows the results of the power analyses. As shown in Table B.2, we expect to recruit 60 center-based settings and 40 FCC homes. For the center-based settings, we expect to conduct surveys with 60 center directors, 120 teachers, and 60 coaches. For the FCC homes, we expect to conduct surveys with 40 FCC providers and 30 coaches.1

Table B.2. Sample sizes for ECE program administrators, coaches, and teachers across ECE settings

                                     Head Start   CCDF      FCC     Total
                                     center       center
Total centers/FCCs                       30          30       40      100
Total respondents, by respondent type
  Center directors                       30          30     n.a.       60
  Coaches                                30          30       30       90
  Center teachers/FCC providers          60          60       40      160

n.a. = not applicable.

Our analyses will likely include covariates, and we can use the increased precision of estimates conditional on covariates to increase the power of the sample to detect subgroup differences. Many of the analyses involve estimating coaching features conditional on key background and context variables, such as coach and teacher/FCC provider background; the overall content or goal of coaching (for example, general quality improvement or curriculum implementation); program context; and community context. Incorporating the greater precision gained by using covariates in the analysis yields smaller minimum detectable differences (MDDs) for the proposed design.
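For reference, a simplified (single-level) approximation of the MDD for comparing a binary outcome between two subgroups of sizes $n_1$ and $n_2$ is

\[
\mathrm{MDD} \approx \left(z_{1-\alpha/2} + z_{1-\beta}\right)\sqrt{\bar{p}\,(1-\bar{p})\,(1-R^{2})\,\mathrm{DEFF}\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)},
\]

where $z_{1-\alpha/2} = 1.96$ and $z_{1-\beta} = 0.84$ correspond to a two-tailed significance level of 0.05 and power of 0.80, $\bar{p}$ is the assumed prevalence (0.5 is the most conservative choice), $(1-R^{2})$ reflects the precision gained by including covariates ($R^{2} = 0.40$), and DEFF is a design effect capturing the intraclass correlations listed in the notes to Table B.3. This formula is an illustrative approximation only; the MDDs reported in Table B.3 come from calculations that account for the study's full multilevel structure.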

Table B.3 shows the MDDs for subgroup comparisons when incorporating covariates in the analysis. Overall full-sample estimates for teachers and FCC providers have a half-width confidence interval of 0.15, meaning that the estimate has a 95 percent confidence interval of plus or minus 15 percentage points. Overall full-sample estimates for coaches (or coach-teacher/FCC provider pairs) have a half-width confidence interval of 0.18. To assess the adequacy of the sample size when incorporating covariates in the analysis, we focused on MDDs for the comparisons between coach-FCC provider and coach-teacher pairs and between FCC providers and teachers; these comparisons correspond to a 33 percent subgroup (FCC providers) and a 67 percent subgroup (teachers). As shown in Table B.3, the MDD is 0.56 for comparisons of coach-FCC provider and coach-teacher pairs and 0.47 for comparisons of teachers and FCC providers; both are still large differences.

An MDD of 0.2 to 0.3 for comparisons of FCC providers with center-based teachers (or of coach-FCC provider and coach-teacher pairs) would be preferable, because differences of this size are large enough to represent meaningful differences in coaching practices and interactions across coaches and teachers, yet small enough to be reasonably expected between groups. The SCOPE sample is not powered to detect differences of this magnitude. Therefore, we will further explore and discuss findings that are substantively meaningful even if they are not statistically detectable with this sample size. Given the goals of SCOPE (in particular, to describe on-the-ground coaching), this is a meaningful and reasonable approach.

Table B.3. MDDs for subgroup comparisons when incorporating covariates in the analysis

                             Sample size by type of ECE setting
                             Head Start   CCDF     Family        Half-width    MDD, 50%    MDD, 33% versus
                             center       center   child care    confidence    subgroups   67% subgroups
                                                                  interval
Programs                         30          30        40            n.a.         n.a.         n.a.
Coaches                          30          30        30            0.18         0.52         0.56
Teachers/FCC providers           60          60        40            0.15         0.43         0.47

Notes: The calculations assume a power of 0.80 and a significance level for two-tailed tests of 0.05. The estimates assume intraclass correlations of 0.2 between coaches; 0.2 within coaches between programs; and 0.6 within program, between teachers; and an R² of 0.40 due to including covariates in the model.

CCDF = Child Care and Development Fund; ECE = early care and education.

n.a. = not applicable.

Expected item nonresponse rate for critical questions

This data collection does not contain any especially critical questions that would require follow-up if missing. Furthermore, based on our experience with collecting data from center and FCC settings, we expect a very low item nonresponse rate (5 percent or less) in general. Although some of the demographic questions, such as those concerning race and income, may garner higher item nonresponse, none of these are critical.

B2. Procedures for Collection of Information

Information collection will proceed in several steps. First, we will seek to collect existing information and documents on coaching taking place in states from state-level coaching informants and to obtain administrative data on coaching and coaching participants in those states (immediately after OMB approval). Based on this information gathering, we will select seven states to participate in the study and identify ECE settings potentially eligible to participate. We will then conduct eligibility screening calls with ECE settings to determine whom to recruit for the descriptive study. For the descriptive study data collection, we will use web-based surveys for ECE center directors, coaches, teachers, and FCC providers. These data collection activities will be carried out in winter 2018 through winter 2019 (after OMB approval). Finally, as part of the case study data collection, we will conduct semi-structured interviews with ECE center directors, teachers, FCC providers, coaches, and coach supervisors in 12 ECE settings representing different contexts (Head Start and CCDF centers, and FCC homes), content of coaching approaches (for example, general quality improvement, curriculum or practice implementation), and providers of coaching, as well as varied backgrounds of coaches and teachers. Site visit data collection for the case studies will occur from fall 2019 through winter 2020 (after OMB approval).

Below, we outline the procedures for each of the data collection instruments. Additional information about procedures is included in section B3, Methods to Maximize Response Rates and Deal with Nonresponse. The instruments used in SCOPE reflect the conceptual framework and research questions for this round (see Section A2 in Supporting Statement Part A). We drew some items from existing valid and reliable measures and developed new items when needed. The survey instruments (Attachments 3-5) are annotated to identify sources of questions from existing studies and those we developed for this study. The recruitment materials (Appendix C) are similar to those used in previous studies for this type of respondent population.

State coaching informant interview protocol (Attachment 1). We will conduct 1-hour telephone interviews with state administrators knowledgeable about coaching and with representatives of coaching funders and providers. These interviews will gather information about coaching models in order to determine the range of coaching occurring in each state and to identify which center-based classrooms and FCC homes that serve preschool-age children and are supported by Head Start grants or serving children who receive CCDF subsidies may be receiving classroom coaching. We will also collect information on state quality rating and improvement systems (QRIS), CCDF program policies and regulations, licensing standards, and coaching organizations supported by state QRIS, CCDF, or other sources.

ECE setting eligibility screener (Attachment 2). We will conduct a 15-minute telephone interview to gather and/or confirm information about the ECE setting, the coaching taking place in that setting, and the characteristics of participants in the coaching process. These calls will also assess willingness to participate in the study.

Center director survey (Attachment 3). We will conduct a 30-minute web-based survey with sampled center directors. We will attempt to follow up by telephone to remind directors to complete the survey and offer them the option of completing it by telephone.

Coach survey (Attachment 4). We will conduct a 30-minute web-based survey with sampled coaches who provide professional development coaching to ECE teachers and FCC providers. We will attempt to follow up by telephone to remind coaches to complete the survey and offer them the option of completing it by telephone.

Teacher/FCC provider survey (Attachment 5). We will conduct a 35-minute web-based survey with ECE teachers and FCC providers. We will attempt to follow up by telephone to remind teachers and FCC providers to complete the survey and offer them the option of completing it by telephone.

Coaching session observations (Appendix A) and semi-structured interviews with ECE center directors, FCC providers, teachers, coaches, and coach supervisors (Attachments 6 to 9). We will conduct observations of coaching feedback sessions between coaches and ECE teachers and FCC providers during in-person visits. During the in-person visits, we will also schedule and conduct 30- to 90-minute in-person interviews with ECE center directors (90 minutes), FCC providers and teachers (60 minutes), coaches (60 minutes), and coach supervisors (30 minutes). Interviews will be scheduled at times convenient for the interviewees.

B3. Methods to Maximize Response Rates and Deal with Nonresponse

Expected Response Rates

Through the various data collection efforts, the study team expects high response rates for the surveys: 80 percent for teachers and FCC providers, 90 percent for coaches and 95 percent for center directors. We expect 100 percent response rates for the qualitative interviews and observations in 12 sites with center directors, coaches, coach supervisors, teachers, and FCC providers. Table B.4 provides expected response rates and expected number of responses for each study instrument. These response rates are at or above those that OMB recommends to minimize nonresponse bias.

Table B.4. Expected response rates and number of responses, by data source

Data source                                                   Number of     Expected        Expected
                                                              consented     response        number of
                                                              respondents   rate (percent)  responses
1. Teacher/FCC provider survey                                    212            80            173
2. Coach survey                                                   100            90             90
3. Center director survey                                          64            95             60
4. Case study semi-structured interview with ECE
   program administrators                                          12           100             12
5. Case study semi-structured interview with coaches               12           100             12
6. Case study semi-structured interview with teachers              12           100             12
7. Case study semi-structured interviews with coach
   supervisors                                                     12           100             12

Dealing with Nonresponse

For most survey instruments, past experience working with ECE settings and offering similar honoraria suggests we can expect high response rates (80 percent or more) and low item nonresponse (5 percent or less). We plan to implement web versions of the center director survey, coach survey, and teacher/FCC provider survey, which will make completing them easier for respondents. The web-based surveys will not allow respondents to enter out-of-range or inconsistent responses. Weekly reviews of web survey data will allow us to identify potential errors and follow up with respondents before the end of data collection. We will also offer participants the choice to respond to the survey by phone if they prefer.
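The sketch below illustrates, in Python, the kind of out-of-range and consistency checks a web instrument or the weekly data reviews might apply. It is hypothetical: the item names and valid ranges are illustrative only and are not actual SCOPE survey items.

# Illustrative sketch only: hypothetical range and consistency checks of the
# sort described above; item names and limits are not actual SCOPE items.
def check_response(resp):
    """Return a list of problems found in one survey response (a dict)."""
    problems = []

    # Range check: reported hours of coaching per month should be plausible.
    hours = resp.get("coaching_hours_per_month")
    if hours is not None and not (0 <= hours <= 160):
        problems.append("coaching_hours_per_month out of range")

    # Consistency check: a respondent who reports no coaching sessions should
    # not also report a positive number of coaching hours.
    if resp.get("num_coaching_sessions") == 0 and (hours or 0) > 0:
        problems.append("sessions and hours are inconsistent")

    return problems

# Example usage with a made-up response record:
print(check_response({"coaching_hours_per_month": 200, "num_coaching_sessions": 0}))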

Maximizing Response Rates

In past research studies of ECE settings, we have established a successful record of gaining program cooperation and obtaining high response rates. To achieve high response rates, we will continue to use the procedures that have worked well on projects such as the Quality of Caregiver-Child Interactions for Infants and Toddlers (Q-CCIIT) study (0970-0392), the Head Start Family and Child Experiences Survey (FACES 2009) (0970-0151), the Early Head Start Family and Child Experiences Survey (Baby FACES 2009) (0970-0354), and Assessing the Implementation and Cost of High Quality Early Care and Education (ECE ICHQ) (0970-0499). For the descriptive study, we will use multimodal approaches (a web survey with optional telephone administration, supported by email and hard-copy contact materials). These approaches will help ensure a high level of participation.

To maximize response rates for this information collection, we will take the following steps:

  • Recruiting ECE centers, FCCs and coaches. By the time we have completed the screening process, we will have built a strong foundation for recruiting. Through screening we will already have established connections with potential study participants (center directors and FCC providers). During recruiting, we will build on our initial connections. We will first send advance letters to ECE center directors, FCC providers, and coaches and follow up with telephone calls to share information about the study, confirm that settings meet eligibility requirements, and request their participation, following protocols and using recruitment materials (Appendix C). Staff experienced in recruiting ECE settings will conduct the telephone calls.

For the case study semi-structured interviews and classroom observations, we will recruit center directors, coach supervisors, coaches, FCC providers, and teachers connected to 12 ECE settings. Since we will be on site for the case studies, and a subset of the interviewees may have participated in the web surveys, we anticipate 100 percent participation in the semi-structured interviews and observations.

  • Advance notification for the web-based surveys. ECE center directors, FCC providers, coaches, and teachers will receive an advance email notification inviting them to complete their survey (see materials in Appendix C). The advance email will include a brief overview of the study purpose, a description of the data collection activity in which we are asking them to participate, an estimate of the amount of time required to complete the activity, and the information needed to complete the survey (such as log-in credentials). Respondents will also receive a number they can call should they have any questions about their participation in the study.

  • Reminder notifications. Over the course of the data collection period, we will send weekly email reminders to those who are invited to complete the survey (Appendix C); we will also make up to four reminder calls to nonresponders. We will make the first call four weeks after the beginning of data collection. We will be courteous but persistent in our follow up with participants who do not respond quickly to our attempts to reach them.

  • Trained and experienced data collection staff. We will use staff experienced in collecting data from ECE settings. Staff assigned to the study will participate in extensive project-specific training to ensure they are ready to respond effectively to respondents’ questions and conduct the survey by phone if requested. The training will also focus on developing skills for securing respondents’ cooperation and averting and converting refusals.

  • Flexibility in language of administration. Spanish versions of the teacher/FCC provider survey will be available to Spanish-speaking teachers and FCC providers. During telephone contact, recruiters will identify Spanish-speaking respondents and connect them with a Spanish-speaking recruiter. Mathematica employs staff who have experience recruiting in Spanish. We can also conduct the survey as a telephone interview in other languages as needed.

  • Honoraria. We will provide a $20 gift card to center directors, teachers, and coaches who complete the surveys; a $40 gift card to FCC providers who complete the surveys; and $20 to all respondents who participate in the case study semi-structured interviews. To develop honoraria amounts, we considered average estimated salaries across members of the target population as well as experiences in other data collections with the same target population. Respondents will need to complete the descriptive study data collection activities outside of working hours because of their child care responsibilities. We have set a higher honorarium for FCC providers for the descriptive study because they tend to work longer hours than center teachers and may not have the support of another adult in their setting (Moiduddin et al. 2015). The case study semi-structured interviews with ECE center directors, FCC providers and teachers, coaches, and coach supervisors will occur at the ECE setting during the work day. This will require disrupting participants’ schedules and may require other staff to cover for a respondent participating in the semi-structured interview.

B4. Tests of Procedures or Methods to be Undertaken

Many of the scales and items in the proposed ECE center director survey, coach survey, and teacher/FCC provider survey were selected from existing surveys of ECE workforce members and/or from valid and reliable measures with good psychometric properties in populations similar to the SCOPE sample. The study team developed new items to measure constructs for which existing measures are not available, drawing phrasing and language from prior research on Head Start and child care. The survey instruments and forms (Attachments 3-5) are annotated to identify questions drawn from existing studies as well as questions we developed for this study. In addition, we conducted cognitive interviews and pretests of the surveys in winter 2017/2018 (each with nine or fewer respondents) to ensure that questions are understandable, use language familiar to respondents, and are consistent with the concepts they aim to measure; to identify typical instrumentation problems, such as question wording and incomplete or inappropriate response categories; to measure response burden; and to confirm that there are no unforeseen difficulties in administering the instruments. The study team will also carefully review the web-based instruments to ensure that the flow through each instrument works properly.

B5. Individual(s) Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

Mathematica Policy Research and consultants Chrishana Lloyd, Ph.D., University of Texas Health Science Center at Houston, and Child Trends are conducting this project under contract number HHSP2332015000351. Mathematica developed the plans for statistical analyses for this study. To complement the study team’s knowledge and experience, we also consulted with a technical working group of outside experts, as described in Section A8 of Supporting Statement Part A.

Wendy DeCourcey, Ph.D.
Project officer
Office of Planning, Research, and Evaluation

Nina Philpsen Hetzner, Ph.D.
Contract Project specialist
Office of Planning, Research, and Evaluation

Tracy Carter Clopet, Ph.D.
Contract Project specialist
Office of Planning, Research, and Evaluation


Emily Moiduddin, Ph.D.
Project director
Mathematica Policy Research

Sally Atkins-Burnett, Ph.D.
Co-principal investigator
Mathematica Policy Research

Elizabeth Cavadel, Ph.D.
Deputy project director
Mathematica Policy Research

Tim Bruursema, B.A.
Survey director
Mathematica Policy Research

Nikki Aikens, Ph.D.
Measurement task lead
Mathematica Policy Research

Barbara Carlson, M.A.
Senior statistician
Mathematica Policy Research




1 These numbers do not include the 12 surveys that teachers and FCC providers will be doing as part of the case studies, as those data will not be integrated into descriptive study analyses.

