
Alternative Supporting Statement for Information Collections Designed for

Research, Public Health Surveillance, and Program Evaluation Purposes






Infant and Toddler Teacher and Caregiver Competencies Study


OMB Information Collection Request

0970—New Collection





Supporting Statement

Part B



February 2021






Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officers:

Jenessa Malin

Kathleen Dwyer


Part B



B1. Objectives

Study Objectives

The Infant and Toddler Teacher and Caregiver Competencies (ITTCC) study has three primary objectives:

  1. To identify promising practices and lessons learned related to implementing infant and toddler (I/T) teacher and caregiver competency frameworks

  2. To identify promising practices and lessons learned related to assessing I/T teacher and caregiver competencies

  3. To learn about how I/T teacher and caregiver competency frameworks have helped and can help build the capacity of the I/T care and education workforce and support quality improvement

We define a competency as a piece of knowledge, a skill, or an attribute essential to the practice of teaching and caring for infants and toddlers, and a competency framework as a compilation of competencies. To achieve the study’s objectives, we will examine the implementation of competency frameworks and assessment of competencies in up to seven purposively selected case studies. Each case study will focus on a specific competency framework targeted to I/T teachers and caregivers in group settings (center-based settings and family child care homes).

The case studies will examine implementation and assessment at (1) the system level (that is, among those charged with creating a structure for and supporting implementation in states, institutions of higher education, and/or professional organizations); and (2) the program level (that is, in the center-based settings and family child care [FCC] homes in which I/T teachers and caregivers work). We will purposively select cases—with each case focusing on one competency framework—and respondents within cases to provide lessons relevant to the range of approaches currently used to implement and assess competencies at the system and program levels.

We will collect information on how competency frameworks have been developed and implemented; how competencies are assessed; how program directors, center directors, FCC providers, and teachers and caregivers use competency frameworks; key lessons related to implementing competency frameworks and assessing competencies; and perspectives on how competencies can help build the capacity of the I/T workforce and support quality improvement.

Generalizability of Results

This study seeks to present an internally valid description of the implementation of competency frameworks and assessment of competencies for up to seven purposively selected cases, not to promote statistical generalization to different sites. Publications resulting from the study will acknowledge this limitation.



Appropriateness of Study Design and Methods for Planned Uses

We have proposed a purposive case selection approach and qualitative approaches to collecting data, as these are the optimal methods for achieving the study's objectives. A purposive sample will ensure we include cases relevant to (and respondents with perspectives on) the range of approaches currently used to implement competency frameworks in the early care and education (ECE) field. Thus, we can identify lessons learned for the range of approaches used in the field. Because this study aims to learn about approaches, processes, challenges, and facilitators related to implementing competency frameworks and assessing competencies, qualitative methods—semistructured interviews and document reviews—will promote in-depth examination of the constructs of interest, using flexible instruments that can respond to local conditions. The target population includes individuals involved with implementing competency frameworks at the system or program level; perspectives from both levels will help us examine whether the structures in place for supporting implementation of the competency framework are in line with the preferences and needs of the intended users across the ECE system.

As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information. The data collected are not intended to be representative. This study does not include an impact evaluation and will not be used to assess participants’ outcomes. All publicly available products associated with this study will clearly describe key limitations.


B2. Methods and Design

Target Population

For each of the seven selected study cases, the target population includes individuals involved with implementing competency frameworks at either the system or program level. At the system level, this includes individuals involved with creating a structure for and supporting implementation of competency frameworks in states, institutions of higher education, and/or professional organizations (for example, those who developed and/or adopted the competency framework, those who have a role in disseminating or monitoring the use of the framework, and those who have a role in creating or implementing professional development linked to the framework). At the program level, this includes program and/or center directors, program or center professional development coordinators and managers, center-based teachers and caregivers, and FCC providers.

The research team will use nonprobability, purposive sampling to select cases and identify potential respondents who can provide information on the study’s key constructs. Because we will purposively select participants, they will not represent the population of staff at the system or program level involved with implementing competency frameworks or assessing competencies.

Respondent Recruitment and Site Selection

The ITTCC study will use a purposive selection approach to recruit respondents and to identify up to seven cases, with each case focusing on one competency framework. Within cases, we will identify sites (that is, the specific geographic area[s] where the competency framework is used—for example, a state for the purpose of this study) and respondents (individuals from each site involved with implementing the framework at the system and program levels).

To select cases, sites, and respondents, we will apply a set of selection criteria that will lead to an appropriate level of variation in cases for meeting the study’s goals and answering the research questions. In this section, we describe the case and respondent selection and recruitment steps.

a. Step 1. Case exploration

Goal: Identify 10 cases (frameworks) from among a pool of 25 with the strongest potential for enabling us to answer the ITTCC study’s research questions based on a predetermined set of case selection criteria.

During the case exploration process, we will gather publicly available information for up to 25 cases (competency frameworks) on a set of selection criteria that will guide our purposive sampling process (detailed information on the case selection criteria is included in Appendix B) and on possible respondents. We will choose the 25 cases from those identified for the ITTCC's scan of competency frameworks (one of the study's foundational activities; Caronongan et al. 2019), and narrow to 10 cases based on the information collected (an illustrative scoring sketch follows Exhibit 1).

Exhibit 1. Example case selection criteria

  • Adoption and implementation of competency framework (e.g., whether incentives and regulatory mechanisms such as licensure and certification are linked to frameworks)

  • Education and professional development on competencies (e.g., availability of professional development opportunities)

  • Assessment of competencies (e.g., nature and source of assessments)

  • Source and generation of the framework (e.g., whether developed for broad use or a specific site)

  • Characteristics of the framework (e.g., whether specifically for I/T or ECE teachers and caregivers more broadly)

  • Innovation of implementation or assessment approach (e.g., unique aspects of implementation)

  • Contextual influences within the site[s] associated with cases (e.g., availability of I/T jobs, wage rates, and other sector-specific employment indicators)
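
To make the narrowing step concrete, the following is a minimal sketch, in Python, of how candidate frameworks could be scored against yes/no selection criteria like those in Exhibit 1 to produce a shortlist. The criterion names, weights, and case records are illustrative assumptions only; actual selection will be a qualitative judgment against the criteria detailed in Appendix B.

```python
# Illustrative sketch only: score hypothetical candidate cases against
# example selection criteria to shortlist 10 of 25 frameworks. The criterion
# names, weights, and records below are hypothetical, not the study's rules.
from dataclasses import dataclass

@dataclass
class CandidateCase:
    framework: str
    criteria: dict  # criterion name -> True/False, from public sources

EXAMPLE_WEIGHTS = {
    "linked_to_licensure_or_certification": 2,  # adoption/implementation
    "professional_development_available": 2,    # education and PD
    "assessment_in_use": 2,                     # assessment of competencies
    "infant_toddler_specific": 1,               # framework characteristics
    "innovative_implementation": 1,             # innovation of approach
}

def score(case: CandidateCase) -> int:
    """Sum the weights of criteria the case meets (missing data counts as no)."""
    return sum(weight for name, weight in EXAMPLE_WEIGHTS.items()
               if case.criteria.get(name, False))

def shortlist(cases: list, k: int = 10) -> list:
    """Return the k highest-scoring cases as a starting point for discussion."""
    return sorted(cases, key=score, reverse=True)[:k]

cases = [
    CandidateCase("Framework A", {"linked_to_licensure_or_certification": True,
                                  "assessment_in_use": True}),
    CandidateCase("Framework B", {"professional_development_available": True,
                                  "infant_toddler_specific": True}),
]
for case in shortlist(cases, k=1):
    print(case.framework, score(case))  # Framework A 4
```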

b. Step 2. Case and site selection

Goal: Following OMB approval, identify up to seven cases (from among the 10) and the associated site[s] with the strongest potential for enabling us to answer the ITTCC study's research questions.

To make a final decision about the cases to include in the ITTCC data collection, we will gather additional information on the case selection criteria following OMB approval. Specifically, we will reach out to one to four system-level respondents associated with each case (that is, each competency framework selected) to conduct a system-level screening (Instrument 1). For all cases, we expect to conduct a screening call with the lead developer(s) of the competency framework and/or the lead adopter(s) of the competency framework (that is, someone involved with deciding to use the framework at the site, if different from the developer) identified during case exploration. If this screening call does not yield enough information, we will reach out to one to three additional individuals at the system level. For the purpose of estimating burden, we have estimated that a total of 30 system-level respondents will complete the system-level screening (Instrument 1).

c. Step 3. Identifying system-level respondents

Goal: Identify system-level interview respondents involved with the range of activities in the study’s research questions.

For the system-level component of the study, we aim to use the system-level screening (Instrument 1) to identify respondents who are involved in the range of activities addressed in the study's research questions (for example, competency framework developers or adopters, administrators for state and local quality improvement initiatives, administrators of licensing and/or credentialing agencies, higher education stakeholders, other training and technical assistance providers involved with pre-service and/or in-service activities, and staff with state-level oversight of federal programs). The exact set of key respondents will vary by case, and the total number of respondents per case will depend on the number needed to address all research questions. We expect the average number of system-level respondents per case will be 8 to 11. For the purpose of estimating burden, we have estimated that a total of 60 system-level respondents will complete the system-level semistructured interviews (Instrument 2). Additional details about recruitment can be found in section B4.

d. Step 4. Program identification

Goal: Identify early care and education programs using competencies and competency frameworks within the selected cases based on recommendations from system-level respondents.

Program-level sites will include both centers and FCC homes. To identify programs currently using a competency framework, we will consider information gathered during screening (Instrument 1); seek nominations during the system-level semistructured interviews (Instrument 2); seek additional nominations from other individuals likely to be knowledgeable about program-level use of competency frameworks who are not formally being interviewed (Instrument 3); and conduct web searches. Program selection will aim to achieve variation on several criteria, for example:

  • Program type: Early Head Start programs, community-based centers, FCC homes

  • Program size (if center-based): small (fewer than 75 children) versus large

  • Part of larger entity or multisite versus independent

  • Quality rating and improvement system (QRIS) level (particularly if linked to the competency framework in a site)

The total number of program-level respondents per case will depend on the number needed to address all of the research questions. It is possible we will not conduct program-level semistructured interviews (Instrument 5) for all cases. We expect the average number of program-level respondents per case will be 8 to 11. For the purpose of estimating burden, we have estimated a total of 60 program-level respondents. Additional details about recruitment can be found in section B4.

B3. Design of Data Collection Instruments

Development of Data Collection Instruments

We developed five data collection protocols for the ITTCC study:

  • Screening and respondent identification protocols

    • A system-level screening protocol (Instrument 1) to inform selection of the final set of cases, and to support identifying system-level sites (if there is more than one site option) and respondents

    • A program-level screening protocol (Instrument 4) to inform selection of the final set of programs for each case and to support identifying respondents within those programs

    • A nominations for programs protocol (Instrument 3) to gather additional program-level respondent nominations from system-level staff (if needed)

  • Master semistructured interview protocols

    • A system-level master semistructured interview protocol (Instrument 2)

    • A program-level master semistructured interview protocol (Instrument 5)

a. Developing screening protocols to inform case, site, and respondent selection

We developed two screening protocols—one for the system level (Instrument 1) to inform case and site selection and one for the program level (Instrument 4) to inform program selection. Both protocols include items that align with the relevant selection criteria. To the extent possible, we will collect information from publicly available sources before conducting any screening calls and ask the screening respondents only for information that we cannot collect from public sources. Both protocols also include items to assess willingness to participate in the study, identify potential respondents, and request relevant documents. The nominations for programs protocol (Instrument 3) will describe the program-level selection criteria to respondents and ask for suggestions of program-level sites and respondents that might align with those criteria. These protocols begin with a consent statement detailing the planned use of data and how findings will be reported, and they inform respondents they can refuse to answer any question.

b. Developing the master semistructured interview protocols

We developed two master semistructured interview protocols—one for system-level respondents (Instrument 2) and one for program-level respondents (Instrument 5). To achieve the study's objectives, we must answer the range of research questions in Exhibit 1 in Part A. The master semistructured interview protocols (Instruments 2 and 5) include different modules that capture the range of topics addressed in the study's research questions. Within each module, interview questions align with the key constructs relevant to the research questions. Respondents will answer only the subset of modules or questions that aligns with their areas of knowledge. Exhibits 2 and 3 crosswalk the modules in each master semistructured interview protocol (Instruments 2 and 5) with the associated research questions and respondent categories described in Section B2 under Respondent Recruitment and Site Selection. As the exhibits show, most sections of the protocols are potentially relevant to several respondent categories. We will use the screening process to determine which protocol sections to prioritize for each respondent, based on the approach to implementation for that specific case.

Exhibit 2. System-level master semistructured interview protocol (Instrument 2) modules cross-walked to research questions and respondent types

The exhibit lists each protocol module with its relevant research question(s) and the system-level respondent types to whom it applies. The respondent types are: lead developer(s) of the competency framework; lead adopter(s) of the competency framework; administrators for state or local quality improvement initiatives; administrators of licensing and/or credentialing agencies; higher education stakeholders; other training and technical assistance providers; and staff with state-level oversight of federal programs.

I. Respondent background
  • Relevant research question(s): NA

II. Development of framework and current implementation status
  • Relevant research question(s): 1. How have competency frameworks been developed?
  • Respondent types: lead developer(s); administrators for state or local quality improvement initiatives; higher education stakeholders

III. Use of competencies for professional development
  • Relevant research question(s): 2. How have competency frameworks been implemented? 4. How do program/center directors and FCC owners use competency frameworks? 5. How do I/T teachers and caregivers use competency frameworks?
  • Respondent types: all except lead developer(s)

IV. Integration and alignment with ECE system
  • Relevant research question(s): 2. How have competency frameworks been implemented?
  • Respondent types: all

V. Evaluation and use of data about competency framework
  • Relevant research question(s): 2. How have competency frameworks been implemented? 3. How have competencies been assessed? 4. How do program/center directors and FCC owners use competency frameworks? 5. How do I/T teachers and caregivers use competency frameworks?
  • Respondent types: all except lead developer(s)

VI. Dissemination of information about competency framework
  • Relevant research question(s): 2. How have competency frameworks been implemented? 4. How do program/center directors and FCC owners use competency frameworks? 5. How do I/T teachers and caregivers use competency frameworks?
  • Respondent types: all except lead developer(s)

VII. Assessment of competencies
  • Relevant research question(s): 3. How have competencies been assessed?
  • Respondent types: all except lead developer(s)

VIII. Program and staff use of competencies
  • Relevant research question(s): 4. How do program/center directors and FCC owners use competency frameworks? 5. How do I/T teachers and caregivers use competency frameworks? 7. How can competencies help build the capacity of the I/T workforce and support quality improvement?
  • Respondent types: all except lead developer(s)

IX. Supports and challenges for implementation
  • Relevant research question(s): 6. What are key lessons learned related to the implementation of competency frameworks and assessment of I/T teacher and caregiver competencies?
  • Respondent types: all





Exhibit 3. Program-level master semistructured interview protocol (Instrument 5) modules cross-walked to research questions and respondent types

The exhibit lists each protocol module with its relevant research question(s) and the program-level respondent types to whom it applies. The respondent types are: program and/or center directors; program or center professional development coordinators or managers; center-based teachers; and family child care providers.

I. Respondent background
  • Relevant research question(s): NA

II. Program characteristics
  • Relevant research question(s): NA

III. Program use of competency framework for recruiting, hiring, promotions, and compensation
  • Relevant research question(s): 4. How do program/center directors and FCC owners use competency frameworks? 7. How can competencies help build the capacity of the I/T workforce and support quality improvement?
  • Respondent types: all

IV. Program use of competency framework for training and professional development of teachers/caregivers
  • Relevant research question(s): 4. How do program/center directors and FCC owners use competency frameworks? 6. What are key lessons learned related to the implementation of competency frameworks and assessment of I/T teacher and caregiver competencies? 7. How can competencies help build the capacity of the I/T workforce and support quality improvement?
  • Respondent types: all

V. Teacher/caregiver use of competency framework
  • Relevant research question(s): 5. How do I/T teachers and caregivers use competency frameworks? 7. How can competencies help build the capacity of the I/T workforce and support quality improvement?
  • Respondent types: center-based teachers; family child care providers

VI. Program use of competency framework for curriculum selection
  • Relevant research question(s): 4. How do program/center directors and FCC owners use competency frameworks?
  • Respondent types: program and/or center directors; professional development coordinators or managers; family child care providers

VII. Other aspects of program use of competency frameworks
  • Relevant research question(s): 2. How have competency frameworks been implemented? 3. How have competencies been assessed? 4. How do program/center directors and FCC owners use competency frameworks?
  • Respondent types: all

VIII. Dissemination of information about competency framework to programs
  • Relevant research question(s): 2. How have competency frameworks been implemented? 4. How do program/center directors and FCC owners use competency frameworks? 5. How do I/T teachers and caregivers use competency frameworks?
  • Respondent types: all

IX. Dissemination of information about competency framework to teachers/caregivers
  • Relevant research question(s): 2. How have competency frameworks been implemented? 4. How do program/center directors and FCC owners use competency frameworks? 5. How do I/T teachers and caregivers use competency frameworks?
  • Respondent types: all

X. Assessment of competencies by programs
  • Relevant research question(s): 2. How have competency frameworks been implemented? 3. How have competencies been assessed?
  • Respondent types: all

XI. Teacher/caregiver experience with assessments
  • Relevant research question(s): 3. How have competencies been assessed?
  • Respondent types: center-based teachers; family child care providers

XII. Supports and challenges for implementation
  • Relevant research question(s): 6. What are key lessons learned related to the implementation of competency frameworks and assessment of I/T teacher and caregiver competencies? 7. How can competencies help build the capacity of the I/T workforce and support quality improvement?
  • Respondent types: all



All questions in the master semistructured interview protocols (Instruments 2 and 5) are new, as this is a new area of research. Two expert advisors reviewed the system-level and program-level master semistructured interview protocols (Instruments 2 and 5); they assessed the clarity and appropriateness of the questions, given the study's objectives. We conducted two pre-test interviews with program-level respondents to test the program-level master semistructured interview protocol (Instrument 5) and three pre-test interviews with system-level respondents to test the system-level master semistructured interview protocol (Instrument 2). During each pre-test interview, we asked a set of cognitive interview questions at the end of each section of the protocol. These questions addressed whether the topics were relevant to the respondent, whether our terminology was clear, and whether respondents' understanding of the questions was consistent with what was intended. We also asked about the implications of COVID-19 for the implementation of the competency framework to determine how this might influence what the study will learn. These activities informed revisions to both protocols to ensure they fit within the allotted burden, were clear for respondents, and included only the key questions necessary for addressing the study's research questions.

The master semistructured interview protocols (Instruments 2 and 5) begin with a consent statement detailing the planned use of data and informing respondents they can refuse to answer any question. The interview protocols contain scripted questions but also include probes and prompts to use flexibly, depending on the nature of the responses and the areas of knowledge of each respondent. As in the screening protocols, not every respondent will be asked every question in the semistructured interview protocols, and respondents may be asked additional follow-up questions depending on the conversation.
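
To illustrate the module-tailoring logic the crosswalks support, here is a minimal sketch, assuming a simple dictionary encoding of part of the Exhibit 2 mapping, of how the applicable system-level protocol modules could be looked up for a given respondent type. The shorthand type labels, the treatment of the background module, and the subset of modules shown are illustrative assumptions; in practice, the case liaison team will make tailoring decisions during screening.

```python
# Illustrative sketch: encode part of the Exhibit 2 crosswalk and look up the
# protocol modules relevant to one respondent type. Labels are shorthand only.
ALL_TYPES = frozenset({
    "lead developer", "lead adopter", "QI initiative administrator",
    "licensing/credentialing administrator", "higher education stakeholder",
    "training/TA provider", "state-level federal program oversight",
})

MODULE_CROSSWALK = {  # module -> applicable respondent types (subset of modules shown)
    "I. Respondent background": ALL_TYPES,  # assumed asked of every interviewee
    "II. Development of framework and current implementation status":
        frozenset({"lead developer", "QI initiative administrator",
                   "higher education stakeholder"}),
    "IV. Integration and alignment with ECE system": ALL_TYPES,
    "VII. Assessment of competencies": ALL_TYPES - {"lead developer"},
    "IX. Supports and challenges for implementation": ALL_TYPES,
}

def modules_for(respondent_type: str) -> list:
    """List the protocol modules potentially relevant to one respondent type."""
    return [module for module, types in MODULE_CROSSWALK.items()
            if respondent_type in types]

print(modules_for("lead developer"))  # modules I, II, IV, and IX (not VII)
```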

B4. Collection of Data and Quality Control

Contracted project team members will screen, recruit, and interview all participants. We will assign a two-person case liaison team to each case, and aim to have that team perform all screening, recruiting, and interviewing for that particular case.

1. Training

To ensure we collect high quality data, we will train case liaisons on screening, recruiting, and interviewing at both the system and program levels. We will train case liaisons on four topics:

  1. System-level screening and recruitment – to prepare liaisons for activities necessary to support case and site selection, identify potential interview respondents, and recruit system-level respondents

  2. System-level semistructured interviews – to prepare liaisons to implement the system-level master semistructured interview protocol (Instrument 2), including selection of modules appropriate for each interviewee and how to apply any lessons from a high-level document review before the interview

  3. Program-level identification, screening, and recruitment – to prepare liaisons for activities necessary to support identifying and selecting programs, to identify interview respondents, and to recruit program-level respondents

  4. Program-level semistructured interviews – to prepare liaisons to implement the program-level master semistructured interview protocol (Instrument 5), including selecting modules appropriate for each interviewee and how to apply any lessons from a high-level document review before the interview

Training will focus on strategies to ensure we collect high quality data in the least burdensome way for respondents, including: (1) how to prepare for the interview (for example, narrowing to the selected modules and identifying where documents we have gathered already provide the needed information); (2) how to move efficiently through the interview protocol while collecting high quality information (for example, how to decide which probes are critical based on the answers received to that point in the interview); and (3) how to synthesize notes after each interview to confirm completeness of the data. We plan to conduct a combined training on system- and program-level screening and recruitment, followed by another training on system- and program-level interviewing.

2. Screening and Recruitment

Because procedures for system- and program-level screening (Instruments 1 and 4) and recruitment are similar, we have integrated the discussion of these procedures here. However, we expect that system-level screening (Instrument 1) and recruitment will occur before program-level screening (Instrument 4) and recruitment. At both the system level and the program level, the screening protocol includes a recruitment module if the person we are speaking with is also a person who will participate in a full interview. The stand-alone recruitment protocol will be used for interview respondents who do not participate in the screening process.

Recruitment materials and protocol. Section B3 describes development of the screening protocols. To support buy-in, we have also prepared a recruitment packet that includes an advance letter describing the study, endorsement letters from the Administration for Children and Families, a study flyer, and a frequently asked questions sheet with study details. We also developed a protocol to use during recruitment calls. We developed variations of the recruitment materials and protocol to ensure they are appropriate for both system- and program-level respondents. Recruitment materials are included in Appendix C.

The recruitment protocol will serve three purposes. First, it will ensure that recruiters clearly communicate the purpose of the study and are prepared to address questions or concerns of potential participants. Second, it will ensure that recruiters systematically collect information needed to make a final decision about the best respondents for answering the research questions in the least burdensome way for respondents. Third, if we have not yet identified all the respondents needed in a particular case, it will guide recruiters through requesting additional nominations of respondents.

Screening process for selecting cases. Trained liaisons will begin system-level screening (Instrument 1) immediately after receiving OMB’s approval. Program-level screening (Instrument 4) for a particular case will begin later because we will be gathering program nominations from system-level respondents (Instrument 3).

At the system and program levels, we will send recruitment materials to targeted screening respondents and follow up by phone. Upon reaching the respondent, case liaisons will implement the appropriate screening protocols (Instrument 1 or 4).

Recruitment process for study respondents. After we have selected cases (and, if necessary, sites) and programs for the study sample, we will move to formal recruitment of respondents. For interview respondents we spoke with as part of screening (Instrument 1 or 4), case liaisons will send an email confirming their selection for the study and then reach out by phone to implement the recruitment protocol. For respondents we have not previously contacted, case liaisons will send all recruitment materials and follow up by phone to implement the recruitment protocol. After we have recruited respondents and scheduled the interview, we will email the respondents a list of the topics their interview will include.

3. Collecting Data

At the system and program levels, we will conduct document reviews and semistructured phone interviews to answer the study's research questions. Before the interviews, we will ask respondents to share by email documents related to implementing and assessing the competency framework (we will request additional documents during interviews). If a respondent prefers not to share documents by email, they may use a secure file transfer site or a postage-paid envelope we provide. We will instruct respondents that documents should not include any personally identifiable information. We will use documents sent before interviews to tailor the master protocols so that we ask only relevant questions.

At the system level, we will conduct either one-on-one or small-group interviews. Depending on the number of modules from the system-level master semistructured interview protocol (Instrument 2) that are appropriate for the respondent(s), interviews will last 60 to 90 minutes. For the purposes of estimating burden, we have assumed 90 minutes.

At the program level, we will also conduct one-on-one or small-group interviews. One-on-one interviews will last about 60 minutes with FCC providers and about 30 minutes with teachers and caregivers in center-based settings.

For both the program- and system-level semistructured interviews (Instruments 2 and 5), one member of the case liaison team will conduct the interview and one will take notes. With the permission of respondents, we will also record the interviews. The case liaison team will confer after each call (using interview recordings as needed) to ensure completeness of data. Throughout the data collection period, liaisons and project leaders will conduct weekly meetings to share information and strategies, help troubleshoot challenges, and ensure that all data are collected uniformly.
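
As a check on the figures used for estimating burden, the following is a minimal sketch working the interview-burden arithmetic from the counts and durations stated above (60 system-level respondents at an assumed 90 minutes; 60 program-level respondents). The even split of program-level respondents between 30- and 60-minute interviews is an illustrative assumption, since the mix of directors, teachers, and FCC providers will vary by case.

```python
# Worked burden arithmetic for the semistructured interviews, using the
# respondent counts and durations stated in Sections B2 and B4. The 30/30
# split of program-level respondents across durations is an assumption.
system_hours = 60 * 90 / 60     # 60 respondents x 90 minutes = 90.0 hours
program_short = 30 * 30 / 60    # assumed 30 center-based staff x 30 minutes
program_long = 30 * 60 / 60     # assumed 30 FCC providers/directors x 60 minutes
total_hours = system_hours + program_short + program_long
print(f"System-level interviews: {system_hours:.0f} hours")
print(f"Program-level interviews: {program_short + program_long:.0f} hours")
print(f"Total interview burden: {total_hours:.0f} hours")  # 135 under these assumptions
```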


B5. Response Rates and Potential Nonresponse Bias

Response Rates

The interviews are not designed to produce statistically generalizable findings, and participation is wholly at the respondent's discretion. Response rates will not be calculated or reported.


Nonresponse

Based on previous experience with similar methods and respondents, we do not expect substantial nonresponse on the interviews. Because participants will not be randomly sampled and findings are not intended to be representative, we will not calculate nonresponse bias. As part of study reporting, we will present information about characteristics of the participating agencies and programs.


B6. Production of Estimates and Projections

This study seeks to present internally valid descriptions of the implementation of competency frameworks and assessment of competencies in purposively selected sites, not to promote statistical generalization to other sites. We do not plan to make policy decisions based on data that are not representative, nor publish biased population estimates. Information reported will clearly state that results are not meant to be generalizable.


B7. Data Handling and Analysis

Data Handling

To ensure that interview notes are complete and consistently prepared, we will use a standard note template for each data collection protocol. The notetaker may use audio recordings (with respondents' permission) to ensure that notes are complete. Data collectors will review the interview notes, and a senior member of the team will review a subset of the notes to ensure that data are complete and error-free.



Data Analysis

Qualitative data from both the interviews and documents gathered will provide comprehensive and rich information to analyze factors associated with the practical implementation of competency frameworks and assessment of competencies. We will build a structure for organizing and coding the data across all the data collection sources to facilitate efficient analysis across respondents.

Coding field notes and documents. Using NVivo (or similar software), we will take an open coding approach to preparing the qualitative data for analysis. Open coding is an iterative process for analyzing and categorizing data; categories are confirmed, adapted, or otherwise refined as additional data are added (Strauss and Corbin 1990). The constructs that guided development of the master interview protocols will serve as the basis for the initial set of codes and a protocol for coding interview responses. The coding scheme will become more detailed and layered over the course of the data collection phases, but this core set of codes will remain consistent as the initial step in the coding process after each phase of data collection.

We will apply the same set of codes for the documents as for the interview notes. We will log all documents received from respondents and summarize the types of documents received across cases. We will use the documents before interviews to individualize the interview protocols for respondents. Following the interviews, we will return to the documents to supplement what we learned in the interviews, or to provide context for what we learned. We do not expect to systematically code all documents; we will prioritize documents that can directly inform the study’s research questions.

We will train a small team of coders. To ensure consistency in coding, the team will have a lead coder who develops the coding scheme, trains the coders, oversees the work, and ensures reliability. All coders will code the first set of notes and discuss any differences to establish coder agreement. The coding team will have ongoing meetings to discuss emerging themes from the data in response to the research questions. When new themes or concepts emerge, we will create new codes to apply to all notes.
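
As a generic illustration of the code-and-retrieve workflow described in this section (which NVivo or similar software provides natively), the following sketch tags interview-note excerpts with codes and retrieves them by code and respondent type. The code names, respondent types, and excerpts are hypothetical examples.

```python
# Generic code-and-retrieve illustration for qualitative analysis; NVivo or
# similar software provides this natively. All data below are hypothetical.
from collections import defaultdict

class CodedNotes:
    def __init__(self):
        # code -> list of (respondent_type, excerpt) pairs
        self._index = defaultdict(list)

    def tag(self, code: str, respondent_type: str, excerpt: str) -> None:
        """Apply a code to an excerpt from a respondent's interview notes."""
        self._index[code].append((respondent_type, excerpt))

    def retrieve(self, code: str, respondent_type: str = None) -> list:
        """Pull excerpts for one code, optionally filtered by respondent type."""
        return [excerpt for rt, excerpt in self._index[code]
                if respondent_type is None or rt == respondent_type]

notes = CodedNotes()
notes.tag("assessment_challenges", "FCC provider",
          "Hard to schedule observations during care hours.")
notes.tag("assessment_challenges", "center director",
          "Observers need training on the framework's levels.")
print(notes.retrieve("assessment_challenges", respondent_type="FCC provider"))
```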

Data Use

Summarizing data on specific research questions and concepts to generate findings. When the data are coded, the team will be able to retrieve and sort data linked to specific research questions and constructs. We will synthesize data pertaining to a specific research question across respondents or for specific types of respondents (such as all FCC providers). We will then share the findings in a series of materials, which will be available publicly:

  • A study report

  • A series of research briefs

  • Presentations or briefings



B8. Contact Person(s)

Jenessa Malin

Social Science Research Analyst

U.S. Department of Health and Human Services

Administration for Children and Families

Office of Planning, Research, and Evaluation

Switzer Building

Washington, DC 20201

(202) 401-5560

[email protected]

Kathleen Dwyer

Senior Social Science Research Analyst

U.S. Department of Health and Human Services

Administration for Children and Families

Office of Planning, Research, and Evaluation

Switzer Building

Washington, DC 20201

(202) 401-5600

[email protected]

Pia Caronongan

Mathematica

955 Massachusetts Ave, Suite 801

Cambridge, MA 02139

(617) 674-8368

[email protected]

Felicia Hurwitz

Mathematica

P.O. Box 2393
Princeton, NJ 08543-2393

(609) 945-3379

[email protected]





Attachments

Instruments:

Instrument 1: System-level screening protocol

Instrument 2: System-level master semistructured interview protocol

Instrument 3: Nominations for programs protocol

Instrument 4: Program-level screening protocol

Instrument 5: Program-level master semistructured interview protocol


Appendices:

Appendix A: Study Research Questions

Appendix B: Case Selection Criteria

Appendix C: Study Recruitment Materials



References

Caronongan, P., K. Niland, M. Manley, S. Atkins-Burnett, and E. Moiduddin (2019). Competency Frameworks for Infant and Toddler Teachers and Caregivers. OPRE Report 2019-95. Washington, DC: Office of Planning, Research, and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services.

Strauss, A.L., and J.M. Corbin (1990). Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Newbury Park, CA: Sage.
