Supporting Statement B - Formative for Research GenIC - Child Care ECB


Formative Data Collections for ACF Research


OMB: 0970-0356


Alternative Supporting Statement for Information Collections Designed for

Research, Public Health Surveillance, and Program Evaluation Purposes




Child Care Research and Evaluation Capacity Building Center Needs Assessment



Formative Data Collections for ACF Research



0970-0356





Supporting Statement

Part B

MARCH 2021


Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officers: Meryl Barofsky and Alysia Blandon


Part B


B1. Objectives

Study Objectives

There are three key objectives of the study:

  1. Describe the research and evaluation capacities of CCDF State, Territory, and Tribal Lead Agencies and understand their strengths and challenges related to conducting, partnering to obtain, and using research and evaluation.

  2. Explore reasons for capacity challenges and perspectives on potential solutions.

  3. Develop targeted resources and supports to enhance the research and evaluation capacities of the CCDF State and Territory Lead Agencies.


Generalizability of Results

This study is intended to present an internally valid description of the needs and capacities of CCDF Lead Agencies. The results of the study will be generalizable only to State and Territory CCDF Lead Agencies. We do not expect the results of the tribal study to be generalizable to the whole population of Tribal CCDF Lead Agencies.

Appropriateness of Study Design and Methods for Planned Uses

This is a descriptive study designed to inform supports for CCDF Lead Agencies to build their research and evaluation capacity. The study design is appropriate for gathering data to assess the research and evaluation needs of CCDF Lead Agencies and inform development of resources to support CCDF Lead Agencies in improving research and evaluation capacities.

  • The survey census of state and territory agencies is needed so we can develop both resources for universal capacity building and strategies most appropriate for intensive capacity building with the small number of agencies that have lower research and evaluation capacities. There are 56 CCDF state and territory lead agencies.

  • Tribal Lead Agencies are subject to some of the same requirements as State and Territory Lead Agencies, but also have different implementation milestones and reporting requirements. To meet the goals of the study in an efficient manner, and without overburdening Tribal Agencies, particularly those with smaller funding allocations, we will select a sample of Tribal Lead Agencies to complete the survey. There are 257 CCDF Tribal grantees. We will select tribal agencies with different characteristics to learn more about the different needs of these diverse grantees.

  • The follow-up focus groups will help explore information more deeply, including the associations among the constructs, individual perceptions about strengths and challenges, and possible strategies for building capacities. The focus groups will be particularly important for understanding why survey data may suggest that some agencies have high capacities in some areas and low capacities in others. In addition, CCDF Administrators and their agencies operate in a complex environment of state policies and priorities (e.g., shifting governors’ priorities, or administrators who are responsible for broader human services, workforce, or education programs). The focus groups can help us better understand these contexts and how they may influence the effectiveness of potential solutions for increasing research and evaluation capacity.

As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.  



B2. Methods and Design

Target Population

The target population is CCDF State, Territory, and Tribal Lead Agencies. For each of the lead agencies, we will collect information from the CCDF Administrator or the person(s) most knowledgeable about the topics in the survey.


Sampling

Survey

We will conduct a census of all 56 State and Territory CCDF Lead Agencies to obtain a comprehensive view of these agencies’ research and evaluation capacity-building needs. We are conducting a census, rather than sampling, in order to learn about the full range of agencies’ needs: prior work with research grantees and conversations with stakeholders suggest that a sample of CCDF Lead Agencies would not provide sufficient information on the range of research and evaluation capacities, because so little is known about their capacity. We will also administer the survey to a sample of 15 Tribal Lead Agencies, selected based on size of funding allocation and regional density (described below). We will randomly select tribes from each stratum, selecting multiple potential tribes per stratum so that we can obtain 15 participating Tribal agencies even if a contacted tribe is unable to participate.


Funding allocation - CCDF requirements for Tribal agencies are based on the size of their annual CCDF allocation. Funding allocation sizes are characterized as small (less than $250,000), medium ($250,000 to $1 million), and large (more than $1 million). It is important to include grantees in all three categories because their needs and experiences are likely to vary. Based on 2016 funding allocations, 13 percent of grantees have large allocations, 28 percent have medium allocations, and 59 percent have small allocations (Exhibit B1). We propose to sample an equal number of large-, medium-, and small-allocation grantees to balance the need to learn about large-allocation grantees, who may have particular needs due to the larger numbers of children and families they serve, against the need to learn about small-allocation grantees, who must meet fewer requirements but represent a larger share of grantees overall (Exhibit B2).


Regional density - The density of Tribes varies in different parts of the country. ACF has 10 regional offices working closely with CCDF grantees; however, more than two-thirds of all Tribes are clustered in regions VI, IX, and X (Exhibit B1). To account for both geographic variation and this uneven distribution, we will sample based on the regional density of Tribes: most dense (regions with 20% or more of Tribal grantees), medium density (regions with more than 10% but less than 20% of Tribal grantees), and least dense (regions with fewer than 5% of Tribal grantees) (Exhibit B2). We will select 3 Tribal agencies from each of the densest regions, VI, IX, and X, each of which has more than 50 Tribes (68% of all Tribal grantees are in one of these 3 regions). Within region X, we will select at least 1 Tribal grantee from Alaska to make sure we capture the needs and experiences of agencies serving Alaska Natives. We will select 3 additional Tribes from either region V or VIII, each of which has more than 25 Tribes (22% of all Tribal grantees are in one of these 2 regions). Finally, we will select the remaining 3 Tribal agencies from regions I, II, IV, or VII; these 4 regions are the least dense and include only 10% of all Tribal grantees.


Exhibit B1. Tribal grantees by region and size of funding allocation

                             Size of 2016 funding allocation (a)
ACF Region (b)               Large     Medium    Small     Row total   Percent of Tribal grantees
I                              --         1         8          9            4%
II                             --         2         1          3            1%
IV                             --         3         2          5            2%
V                              --         7        23         30           12%
VI                             18        14        24         56           22%
VII                            --         1         6          7            3%
VIII                            3        18         6         27           11%
IX                              6        11        35         52           20%
X                               6        15        47         68           26%
Column total                   33        72       152        257
Percent of Tribal grantees    13%       28%       59%



Notes:

a Funding allocation sizes are characterized as: small (less than $250,000), medium ($250,000 to $1 million), and large (more than $1 million).

b There are no tribal grantees in Region III.



Exhibit B2. Proposed tribal sampling approach for needs assessment survey

                                           Size of 2016 funding allocation (a)
Regional density (b)   Region (c)          Large   Medium   Small   Row total
Most dense             VI                    1       1        1         3
                       IX                    1       1        1         3
                       X (d)                 1       1        1         3
Medium density         V or VIII             1       1        1         3
Least dense            I, II, IV, VII        1       1        1         3
Column total                                 5       5        5        15

Notes:

a Funding allocation sizes are characterized as: small (less than $250,000), medium ($250,000 to $1 million), and large (more than $1 million).

b Regional density is characterized as: most dense (regions with 20% or more of Tribal grantees), medium density (regions with more than 10% but less than 20% of Tribal grantees), least dense (regions with fewer than 5% of Tribal grantees)

c There are no tribal grantees in Region III.

d We propose to select at least 1 Tribal grantee from Alaska in Region X.
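The stratified draw summarized in Exhibit B2 can be sketched in code. This is an illustrative sketch only, using hypothetical grantee pools and an assumed number of back-up selections; it is not the project's actual sampling program.

```python
import random

# Illustrative sketch of the Exhibit B2 draw: one tribe per funding-size
# category within each of five region strata (5 x 3 = 15 primary selections),
# with ordered back-ups in case a contacted tribe is unable to participate.
# Grantee pools below are hypothetical placeholders, not real CCDF lists.

REGION_STRATA = ["VI", "IX", "X", "V or VIII", "I/II/IV/VII"]
SIZES = ["large", "medium", "small"]

def draw_cell(pool, n_backup=2, rng=None):
    """Randomly order a cell's grantees: the first is the primary selection,
    the next n_backup are ordered replacements."""
    rng = rng or random.Random()
    order = rng.sample(pool, k=min(len(pool), 1 + n_backup))
    return order[0], order[1:]

rng = random.Random(2021)  # fixed seed so the draw is reproducible
pools = {(region, size): [f"{region} {size} tribe {i}" for i in range(1, 7)]
         for region in REGION_STRATA for size in SIZES}

primaries, backups = {}, {}
for cell, pool in pools.items():
    primaries[cell], backups[cell] = draw_cell(pool, rng=rng)

print(len(primaries))  # 15 primary selections, one per Exhibit B2 cell
```

In practice the pools would be the actual grantee lists within each (density, region, size) cell, and the Alaska constraint in Region X would be applied to the Region X pools before drawing.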





Virtual Focus Groups

Five virtual focus groups will be held with staff from 20 agencies that participated in the survey. We expect each focus group to have about nine participants, typically with more than one participant per agency. One focus group will consist of staff from Tribal Agencies only; the other four will include a mix of State and Territory staff. We will invite agencies to maximize variation in geography (ACF Region), CCDF Lead Agency department (e.g., education, human services, labor), whether agencies are seeking and/or receiving external funds to support research, and types of challenges faced in doing and using research. For the four state/territory focus groups, we plan to group agencies with similar levels of research capacity because it will be easier for participants to relate to each other, creating a more meaningful discussion. We will invite CCDF Lead Administrators and, in agencies that have them, the individuals who specialize in doing or using research and analysis.



B3. Design of Data Collection Instruments

Development of Data Collection Instruments

To prepare for this data collection, we reviewed studies of data and research capacities of organizations.1 We combined what we learned from those studies into a conceptual framework of research and evaluation capacity constructs, and then developed sub-research questions from those constructs. We considered which of those sub-research questions could best be answered through survey questions and which would best be answered through focus group questions.


Survey

Survey questions focus on gaining an understanding of research and evaluation capacities of the agencies. We designed the survey instrument to follow best practices in eliciting accurate self-reports, including selecting or designing measures that ask for concrete examples of past performance to support opinions about past research activities and providing benchmarks and other comparative methods when assessing technical skills. Many of the survey questions were adapted from existing instruments, some of which have been tested for assessing capacities around research use and facilitating factors as referenced in footnote 1.

We pretested the instrument with four former State and Tribal agency administrators, using electronic copies of the instrument. We conducted a debrief with each former administrator to (1) ensure that questions were easy to understand and that the survey used language familiar to respondents; (2) identify potential problems with incomplete or inappropriate response categories; (3) measure the response burden; and (4) confirm there were no unforeseen difficulties in administering the instrument. Pretest respondents thought the majority of the questions were relevant, clear, and used appropriate language. Respondents took longer than the targeted time to complete the survey. Based on their input, we reduced the length of the survey by cutting items that overlapped with other items and items that captured less actionable information. We also dropped items that, based on pretest responses, were unlikely to elicit substantial variation in administrators' answers.


Virtual Focus Groups

Focus group design involves not only the questions but also the composition of the group and the setting in which the groups will occur. Focus groups benefit from having some key experiences that everyone in the group can relate to, along with enough diversity that participants can respond to each other's experiences while reflecting on their own. Focus groups are best suited for questions that ask "how" or "why," or that otherwise need explanation or contextualization to be fully understood.

The virtual focus group protocol and questions were developed by a team member who specializes in this type of instrument development. We had been planning for a virtual environment even before the COVID-19 pandemic to maximize agency participation. As agency staff, these individuals are used to meeting virtually, so we do not expect the format to impede participation or sharing.

The focus group questions were developed to provide greater nuance and depth than those asked during the survey. We did not separately test the focus group questions, but we reflected on what was learned from the survey pretest to ensure the language of the focus group questions was appropriate. The number of questions and the time allotted for asking them are based on professional experience in conducting focus groups.



B4. Collection of Data and Quality Control

Mathematica has primary responsibility for data collection activities related to the survey; Urban Institute has primary responsibility for data collection activities related to the focus groups. As the prime contractor, Urban Institute will also engage in quality assurance activities to oversee Mathematica's work. Urban Institute and Mathematica will work collaboratively to assure high-quality data collection, including training recruitment, data collection, and data analysis staff and instituting data security checks.


Training


General Recruitment and Data Collection Training

All project staff involved in recruitment, data collection, or data handling will receive training on data security and on protecting the rights and welfare of human subjects. Staff will also be given contact information for team leaders, to whom they can direct potential participants who have questions or concerns about participation, along with information to help them assure the quality and integrity of the process. Staff leading recruitment and data collection are seasoned researchers with formal education, training, and experience in these activities; they will lead most of the training to align with expectations developed for this project. The recruitment and data collection team will have an initial hour-long kickoff training on the purpose of the study, the goals of recruitment and data collection, and the overall approach. The team will then meet weekly throughout the recruitment and data collection period to monitor status and discuss any issues that arise.


Tribal Recruitment and Sensitivity Training

Tribal communities are tremendously diverse and represent a wealth of languages, world views, teachings, and experiences. To work respectfully within Tribal communities, it is important to understand this diversity; how it shapes our work; and the roles, responsibilities, and perspectives of the people with whom we will work. Staff who will be reaching out to or interacting with Tribal Agencies will undergo cultural humility and cross-cultural understanding training. The information in the training prepares staff to build rapport and work respectfully with Tribal communities. The training has two important and unifying goals:


  1. To develop a perspective of cultural humility. Cultural humility emphasizes that American Indian/Alaska Native (AIAN)2 people are experts on their cultures. It further highlights the fact that an appreciation of culture extends beyond what can be accomplished by training in cultural competence alone. It is an ongoing process that involves self-reflection and self-critique. In the process, we not only learn about AIAN communities and culture, but also critically examine our own beliefs, identities, and cultures. In doing so, we seek to break down the barriers and bias that can get in the way of good working relationships.

  2. To build cross cultural understanding. Cross-cultural understanding provides an awareness of how culture informs the way we perceive and interact with one another in the context of our social worlds. By developing cross-cultural understanding, we will begin to recognize preconceptions about Native people and their perceptions of study team members. This process should enhance staff awareness of how cultural differences influence our perception and understanding of one another.


Training will be facilitated by Mathematica staff with a deep knowledge and experience working with AIAN communities. Trainees will receive a training manual and take part in a half-day interactive training presentation prior to reaching out to any AIAN community.


Survey Recruitment and Data Collection


State and Territory Survey Recruitment

All State and Territory Lead Agencies, and sampled Tribal agencies, will receive an advance letter (Attachments A1 and A5, respectively) that will also include a study flyer with more information about study activities (Attachment A2) and a letter of support from the Office of Child Care (Attachment A3). We will send the advance letter to the staff member identified as the CCDF Administrator in each state's CCDF plan. The advance letter will explain the purpose of the survey and request confirmation of the appropriate contacts to complete the survey on behalf of the agency. We will aim to recruit the CCDF Lead Administrator to complete the survey on behalf of the lead agency and will ask this individual to nominate additional points of contact if the Lead Administrator cannot or does not feel they are the most appropriate person to answer the questions. Administrators and additional points of contact will be recruited to complete the survey via email.

Tribal Survey Recruitment

Mathematica staff (trained specifically in working with AIAN communities) will contact the selected tribal administrator, using a recruitment call script (Attachment A4), before the agency is formally invited to participate in the study. This call will be used to:

  • Notify the tribal agency that they’ve been selected to participate in the study

  • Introduce the study and provide a high-level summary of its purpose

  • Identify local approval protocols for participating in research

  • Ascertain the appropriate tribal leadership to include on the formal invitation to participate

  • Establish rapport in advance of any future conversations needed to obtain tribal approval

  • Address any concerns about participating in research

  • Plan for any additional steps and provide a timeline for the forthcoming formal invitation letter

Mathematica staff will work with tribal administrators and associated tribal leaders in each community to obtain tribal approval for the study to be carried out. Mathematica staff will be available to assist throughout the recruitment and tribal approval process. If needed, they will present virtually or in person to tribal approval bodies and will assist in the composition of any needed tribal approval documentation, including but not limited to tribal resolutions and submissions to tribal IRBs. After the tribal agency agrees to participate in the study, we will send an advance email (Attachment A5) with the study flyer (Attachment A2) and letter of support (Attachment A3).


Survey Data Collection

We will send an invitation email (Attachment A6) to the designated point of contact at each agency, as identified through the recruiting process. The email will contain a password-protected link to the web survey (Instrument 1). Potential respondents will be contacted throughout the fielding period by phone and email (Attachments A7 and A8) to encourage completion. Potential respondents who are unable or unwilling to complete the web survey will be offered the option to complete and return a paper version instead. Upon completion of the survey, points of contact for a participating agency will be able to receive a copy of their submitted responses. We will reopen the survey on request, or make edits to address any errors that agency staff note in the completed responses.

Focus Group Recruitment and Data Collection


Focus Group Recruitment

Following preliminary analysis of the survey data, a set of state, territory, and tribal agencies will be selected for focus group recruitment. When agencies are originally recruited for the survey, they will be made aware that they may also be contacted in the future for focus group participation. We will send invitation letters via email (Attachment A9) and follow up via an additional email (Attachment A10) and telephone calls (Attachment A11) to secure participation and arrange focus group dates and times. To the extent possible, focus group dates and times will be arranged to correspond with invited participant schedules including consideration of varied time zones. Once individuals have agreed to participate, we will send a registration email (Attachment A12), and then a final reminder email just prior to the day of the focus group (Attachment A13).

Focus Group Data Collection

Preliminary survey analyses will help identify CCDF Lead Agencies, and staff positions within those agencies, for focus group recruitment. Individuals identified in the recruitment process will receive individualized links in a focus group registration email (Attachment A12) that will allow them to participate in the virtual focus groups (Instrument 2). Focus group discussions will be conducted using the Zoom platform.



B5. Response Rates and Potential Nonresponse Bias

Response Rates

We have assumed a minimum 80 percent response rate for the survey. This estimate is based on previous studies (summarized in Exhibit B3) that surveyed or interviewed CCDF administrators or related groups in states and territories over the last 8 years. Data collection periods for these studies ranged from 2 to 5 months (4 months being most typical), and response rates ranged from 70% to 100%.

Exhibit B3. Previous CCDF or Related Administrator Surveys

Study                                         Population                Mode                          Months Fielded        Response
Banghart et al. 2019                          States, DC, Territories   Online survey                 May-Aug (4 mos.)      93% (all states, DC, Guam)
Connors-Tadros and DeCrecchio 2019            States, DC, Territories   Email survey & focus groups   May-Sept (5 mos.)     82% (44 states, DC, Guam)
King et al. 2018                              States, DC                Online survey                 April-June (3 mos.)   98% (DC declined)
Maxwell et al. 2015                           States, DC, Territories   Phone survey                  Aug-Nov (4 mos.)      75% (not specified)
The Early Childhood Data Collaborative 2014   States and DC             Online survey                 July-Oct (4 mos.)     100%
Derrick-Mills 2012                            States only               Online survey                 Sept-Oct (2 mos.)     70%



The surveys of CCDF Administrators that we found did not include Tribal CCDF Agencies. Other information we have about collecting data from Tribal entities suggests that surveying the tribes is likely to take longer than surveying the states and territories, primarily because tribes may require additional approval steps before data collection. Thus, we expect to initiate both data collections at the same time, but for the tribal data collection to continue over a longer period.

The focus groups are not designed to produce statistically generalizable findings and participation is wholly at the respondent’s discretion. Response rates will not be calculated or reported for the focus groups.

We will consult with stakeholders to assure that recruitment and data collection for this study does not overlap substantially with periods when CCDF plans or other time-intensive submissions are due from states. In addition, we will seek help from the Office of Child Care to advertise the Needs Assessment and make CCDF Lead Agencies aware of the study prior to beginning recruitment. If our data collection efforts align with the annual State and Territory CCDF Administrators Meeting, we will also advertise the study there.


Nonresponse

As participants will not be randomly sampled and findings are not intended to be representative, non-response bias will not be calculated. Respondent demographics will be documented and reported in written materials associated with the data collection.



B6. Production of Estimates and Projections

This study seeks to present internally valid descriptions of the needs and capacity of CCDF Lead Agencies. The data will not be used to generate population estimates, either for internal use or dissemination.



B7. Data Handling and Analysis

Data Handling

The web survey will be programmed in Confirmit, which allows automatic logic checks to be built into the survey design, including mandatory response fields, prespecified valid value ranges, and prespecified patterns of logical and illogical responses across items. We will use these checks to reduce item missingness, require that responses fall within possible ranges, and request confirmation from respondents when reported values are possible but implausible.
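The kinds of logic checks described above can be illustrated generically. This is a plain-Python sketch, not Confirmit scripting; the field names and thresholds are hypothetical.

```python
# Generic sketch of survey logic checks of the kind described above:
# a mandatory response field, a prespecified valid value range, and a
# confirmation prompt for possible-but-implausible values.
# "agency_name" and "research_fte" are hypothetical field names.

def check_response(resp):
    """Return a list of (field, problem) tuples for one survey response."""
    issues = []
    # Mandatory response field: agency name must be present
    if not resp.get("agency_name"):
        issues.append(("agency_name", "missing required response"))
    # Prespecified valid range: e.g., research staff count must be 0-500
    fte = resp.get("research_fte")
    if fte is not None and not 0 <= fte <= 500:
        issues.append(("research_fte", "outside valid range 0-500"))
    # Possible but implausible: flag for respondent confirmation
    elif fte is not None and fte > 50:
        issues.append(("research_fte", "implausibly high; confirm with respondent"))
    return issues

print(check_response({"agency_name": "", "research_fte": 600}))
```

A hard range violation blocks submission, while the plausibility flag only asks the respondent to confirm, mirroring the distinction drawn in the paragraph above.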

Because most survey responses will be completed online, there is minimal potential for data entry errors. Submitted paper-copy surveys, if any, will be hand-entered into the Confirmit web instrument and will therefore be subject to the same checks as surveys completed online. During data cleaning, open-ended responses will be reviewed and coded by an analyst, with input and review by the task lead.


Data Analysis

We will summarize survey findings using simple descriptive statistics (such as means, frequencies, and percentages). Focus group data will be coded by a single senior team member who specializes in qualitative data coding. We will develop a coding structure related to different aspects of research and evaluation capacity. NVivo will be used to structure the coding process and make it easier for team members to review coding decisions. NVivo will be programmed with etic codes derived from the constructs, while allowing emic (or emergent) codes to evolve during the coding process.

The final report will be based on a mixed-methods analysis summarizing study research questions, methodology, findings, and implications for the universal capacity-building, intensive capacity-building, and special activities tasks. Based on descriptive statistics and examination of the distribution of responses, we will explore whether there are natural groupings of agencies. For example, there may be a group of agencies with lower capacity across all capacity constructs, another group with higher capacity on some constructs but lower capacity on others, and yet another group demonstrating high capacity across all constructs. By categorizing capacities by construct, we can use the information to address areas of targeted need when providing universal capacity-building support, arranging productive peer support, and formulating an approach to reach low-capacity agencies for intensive support.
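The grouping of agencies by capacity construct could be sketched as follows. The construct names, scores, 1-5 scale, and cut-off here are hypothetical illustrations, not the study's actual measures or thresholds.

```python
from statistics import mean

# Illustrative sketch of grouping agencies by capacity construct: each agency
# is bucketed as low across constructs, mixed, or high across constructs.
# Constructs, scores, and the 3.0 cut-off are hypothetical placeholders.

CONSTRUCTS = ["conducting research", "partnering", "using research"]

def classify(scores, threshold=3.0):
    """scores: dict mapping construct -> mean item score on a 1-5 scale."""
    high = [c for c in CONSTRUCTS if scores[c] >= threshold]
    if len(high) == len(CONSTRUCTS):
        return "high across constructs"
    if not high:
        return "low across constructs"
    return "mixed"

# Placeholder agencies, not real survey data
agencies = {
    "Agency 1": {"conducting research": 2.1, "partnering": 2.4, "using research": 2.0},
    "Agency 2": {"conducting research": 4.2, "partnering": 2.8, "using research": 3.9},
    "Agency 3": {"conducting research": 4.5, "partnering": 4.1, "using research": 4.0},
}
for name, scores in agencies.items():
    print(name, classify(scores), round(mean(scores.values()), 2))
```

The "low across constructs" group would be candidates for intensive support, while construct-level patterns in the "mixed" group could inform peer matching and the focus of universal resources.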


Data Use

The primary purpose of the data will be to help the Center develop appropriate and targeted resources to support Agencies in building their research and evaluation capacity (e.g., targeting the substantive or methodological focus of webinars and briefs). The data will also be used to select agencies that might benefit from more intensive support.


We do, however, expect the data to be available to staff beyond the research team, such as OPRE staff. The contractor will prepare a data codebook to facilitate future use of the data by ACF and its internal partners.


The findings from this descriptive study are meant to inform ACF activities and, while the primary purpose is not for publication, some findings may be incorporated into documents or presentations that are made public. The following are some examples of ways in which we may share information resulting from these data collections: research design documents or reports; research or technical assistance plans; background materials for technical workgroups; concept maps, process maps, or conceptual frameworks; contextualization of research findings from a follow-up data collection that has full PRA approval; or informational reports to TA providers. In sharing findings, we will describe the study methods and limitations with regard to generalizability and as a basis for policy.



B8. Contact Person(s)

Meryl Barofsky, Office of Planning, Research, and Evaluation, [email protected]

Alysia Blandon, Office of Planning, Research, and Evaluation, [email protected]

Teresa Derrick-Mills, Urban Institute, [email protected]

Pia Caronongan, Mathematica, [email protected]

Jennifer Herard-Tsiagbey, Mathematica, [email protected]

Mike Cavanaugh, Mathematica, [email protected]


Attachments

Attachment A: Recruitment Materials

  • A1. Advance email for State and Territory CCDF Lead Agencies

  • A2. Study Flyer: Seeking Your Help to Assess CCDF Lead Agency Research and Evaluation Capacity Building Needs

  • A3. Office of Child Care Letter of Support

  • A4. Recruitment call script for Tribal CCDF Lead Agencies

  • A5. Advance email for Tribal CCDF Lead Agencies

  • A6. Survey invitation email

  • A7. Survey reminder email

  • A8. Survey phone follow-up scripts

  • A9. Focus group invitation email

  • A10. Focus group follow-up email

  • A11. Focus group follow-up phone script

  • A12. Focus group registration email

  • A13. Focus groups reminder email



Instrument 1: Needs Assessment Survey

Instrument 2: Focus Group protocol



References

Banghart, Patti, C. King, E. Bedrick, A. Hirilall, and S. Daily. 2019. State Priorities for Child Care and Development Block Grant Funding Increase: 2019 National Overview. Child Trends

Bourgeois, I. and J. B. Cousins. 2013. “Understanding Dimensions of Organizational Evaluation Capacity,” American Journal of Evaluation 34 (3): 299–319.

Brennan, Sue E., J. McKenzie, T. Turner, S. Redman, S. Makkar, A. Williamson, A. Haynes, and S. Green. 2017. Development and validation of SEER (Seeking, Engaging with and Evaluating Research): a measure of policymakers' capacity to engage with and use research. Health Research Policy and Systems, 15(1), 1-19. (also referenced as SEER)

Connors-Tadros, L. and N. DeCrecchio. 2019. The Views of State Early Childhood Education Agency Staff on Their Work and Their Vision for Young Children: Informing a Legacy for Young Children by 2030. New Brunswick, NJ: Center on Enhancing Early Learning

Derrick-Mills, T. M. 2012. “How Do Performance Data Inform Design and Management of Child Care Development Fund (CCDF) Programs in the U.S. States?” PhD Dissertation, the George Washington University. Ann Arbor, MI: UMI/ProQuest.

The Early Childhood Data Collaborative. 2014. 2013 State of States’ Early Childhood Data Systems. The Early Childhood Data Collaborative.

James Bell Associates. 2018. How Can Child Welfare Organizational Capacity be Measured? Evaluation Brief. Washington, DC: Department of Health and Human Services, Administration for Children, Youth and Families, Children’s Bureau.

King, Carlise, V. Perkins, C. Nugent and E. Jordan. 2018. “2018 State of State Early Childhood Data Systems.” The Early Childhood Data Collaborative.

Maxwell, Kelly, S. Moodle, C. King, V. Lin, and A. Blasberg. 2015. Child Care Administrators' Use of Administrative Data to Address Program and Policy Questions. Internal Use Only OPRE Report: CCADAC and CCEEPRA.

Palinkas, Lawrence A., A. Garcia, G. Aarons, M. Finno-Velasquez, I. Holloway, T. Mackie, L. Leslie, and P. Chamberlain. 2016. “Measuring Use of Research Evidence: The Structured Interview for Evidence Use.” Research on Social Work Practice 26 (5): 550–564. (also referenced as SIEU)

Penuel, W. R., D. C. Briggs, K. L. Davidson, C. Herlihy, D. Sherer, H. C. Hill, C. C. Farrell, and A.-R. Allen. 2016. Findings from a National Survey of Research Use Among School and District Leaders (Technical Report No. 1). Boulder, CO: National Center for Research in Policy and Practice.

Pennsylvania Office of Child Development and Early Learning. 2018. Using Data to Inform Decision-Making: Staff Perspectives and Experiences (Instrument). (also referenced as PA)

Rohacek, M. 2017. Research and Evaluation Capacity: Self-Assessment Tool and Discussion Guide for CCDF Lead Agencies. OPRE Report #2017-63. Washington, DC: US Department of Health and Human Services, Administration for Children and Families, Office of Planning, Research and Evaluation.

Yanovitzky, Itzhak and C. Blitz. 2017. The Capacity-Opportunity-Motivation (COM) Model of Data-Informed Decision-Making in Education. Proceedings of EDULEARN17 Conference 3rd-5th July 2017, Barcelona, Spain. (also referenced as COM)



1 Bourgeois, I. and J. B. Cousins. 2013. “Understanding Dimensions of Organizational Evaluation Capacity,” American Journal of Evaluation 34 (3): 299–319; Brennan, Sue E., J. McKenzie, T. Turner, S. Redman, S. Makkar, A. Williamson, A. Haynes, and S. Green. 2017. Development and validation of SEER (Seeking, Engaging with and Evaluating Research): a measure of policymakers' capacity to engage with and use research. Health Research Policy and Systems, 15(1), 1-19; Derrick-Mills, T. M. 2012. “How Do Performance Data Inform Design and Management of Child Care Development Fund (CCDF) Programs in the U.S. States?” PhD Dissertation, the George Washington University. Ann Arbor, MI: UMI/ProQuest; James Bell Associates. 2018. How Can Child Welfare Organizational Capacity be Measured? Evaluation Brief. Washington, DC: Department of Health and Human Services, Administration for Children, Youth and Families, Children’s Bureau; Palinkas, Lawrence A., A. Garcia, G. Aarons, M. Finno-Velasquez, I. Holloway, T. Mackie, L. Leslie, and P. Chamberlain. 2016. Measuring Use of Research Evidence: The Structured Interview for Evidence Use. Res Soc Work Pract, 26(5), 550-564; Penuel, W.R., D.C. Briggs, K.L. Davidson, C. Herlihy, D. Sherer, H.C. Hill, C.C. Farrell, and A.-R. Allen. 2016. Findings from a national survey of research use among school and district leaders (Technical Report No. 1). Boulder, CO: National Center for Research in Policy and Practice; Pennsylvania Office of Child Development and Early Learning. 2018. Using Data to Inform Decision-Making: Staff Perspectives and Experiences (Instrument); Rohacek, M. 2017. Research and Evaluation Capacity: Self-Assessment Tool and Discussion Guide for CCDF Lead Agencies. OPRE Report #2017-63. Washington, DC: US Department of Health and Human Services, Administration for Children and Families, Office of Planning, Research and Evaluation; Yanovitzky, Itzhak and C. Blitz. 2017. The Capacity-Opportunity-Motivation (COM) Model of Data-Informed Decision-Making in Education. Proceedings of EDULEARN17 Conference 3rd-5th July 2017, Barcelona, Spain.

2 We use Tribe and Tribal to refer to the CCDF Lead Agencies, following the conventions of OCC. However, in referring to the people, we use American Indian/Alaska Native, recognizing that tribe/tribal are terms typically developed outside of these communities to refer to them. Training about these communities will include understanding these types of nuances and using respectful language in interactions.


