National Evaluation of the Comprehensive Technical Assistance Centers

OMB: 1850-0823



Statement for Paperwork Reduction Act Submission


PART A: Justification



Contract ED-04-CO-0028







April 3, 2009





Prepared for

Institute of Education Sciences

U.S. Department of Education


Prepared by

Branch Associates, Inc.

Policy Studies Associates, Inc.

Decision Information Resources, Inc.



Part A: Justification

This is the third of three clearance requests submitted to OMB for the National Evaluation of the Comprehensive Technical Assistance Centers (“Centers”). This submission is necessary because the National Center for Education Evaluation and Regional Assistance (NCEE), a division of the Institute of Education Sciences, U.S. Department of Education (ED), exercised an Option within the Base Contract in 2008 to conduct Case Studies of Comprehensive Center Technical Assistance. The Case Studies will focus on the extent to which such assistance has resulted in enhanced State Education Agency (SEA) capacity to implement key No Child Left Behind Act (NCLB) provisions. ED and the study team believe that the work described herein will complement the ongoing evaluation objectives being carried out in the Base Contract.

In addition, the case studies with selected SEAs present a timely opportunity for ED to learn about the issues that the passage of the American Recovery and Reinvestment Act (ARRA) may pose for states; the strategic choices those states may have to make; and the kinds of assistance they might require in order to use the stimulus funds effectively. The study team will therefore use this opportunity to gather information on the implications of ARRA for the states.

This third OMB submission therefore requests approval for the following Case Study protocols:


  • Regional Comprehensive Center (RCC) omnibus case study interview protocol


  • Comprehensive Content Center (CCC) omnibus case study interview protocol


  • State Education Agency (SEA) omnibus case study interview protocol


  • SEA ARRA protocol



OMB has already approved the following data collection instruments, which are described in the evaluation’s first OMB submission (OMB No. 1850-0823, approved by OMB on April 17, 2007):

    • Regional and Content Center staff interview protocols and procedures for conducting site visits to the Comprehensive Centers

    • Project Inventory Form and procedures for selecting a sample of Comprehensive Center projects to be rated by expert review panels for technical quality

    • Request for materials for expert panel review


OMB also already approved the following data collection instruments described in the evaluation’s second OMB submission (OMB No. 1850-0823, approved by OMB on December 27, 2007):

      • Survey of State-Level Project Participants

      • Survey of Regional Center Project Participants

      • Survey of Senior State Managers

A.1 Explanation of Circumstances That Make Collection of Data Necessary

Program Background

The 21 Comprehensive Centers provide support for the implementation of NCLB. The Educational Technical Assistance Act of 2002 authorized the Comprehensive Centers to provide technical assistance with NCLB implementation and the improvement of academic achievement. It gave discretion to ED to determine the priorities of the Centers (Sec. 207 of the Act). Using this authorization, ED designed the system of Centers now operating under cooperative agreements. First, ED charged the Centers with serving states as their primary focus because of the states’ pivotal role in supporting district and school implementation of NCLB. Second, ED established a two-tiered system of 16 Regional Comprehensive Centers (RCCs) and 5 Comprehensive Content Centers (CCCs) (Federal Register, June 3, 2005, p. 32583).

The 16 RCCs were designed to provide NCLB-related services to states in their geographic regions. Some serve only one state; others serve as many as five or six. They are expected to work closely with each state, assessing needs and assisting in many tasks, such as building statewide systems of support for districts and schools. The RCCs are expected to use the CCCs as a major source of content expertise.

Under the terms of their cooperative agreements with ED, the 5 CCCs were charged with providing research-based information, products, guidance, and knowledge on the following topics: (1) assessment and accountability; (2) instruction; (3) teacher quality; (4) innovation and improvement; and (5) high schools. The CCCs are expected to work closely with the RCCs and to develop products for them to use with states. The CCCs may also work directly with state-level staff. The CCCs must identify and translate research knowledge, communicating it “in ways that are highly relevant and highly useful” for state and local policy and practice (Federal Register, June 3, 2005, p. 32586). Annual funding for the Content and the Regional Centers totals over $59 million per year.


Overview of the Evaluation


The National Evaluation of the Comprehensive Centers is Congressionally mandated under Title II of the Educational Technical Assistance Act of 2002 (Section 204), Public Law 107-279. Title II requires that NCEE provide for an ongoing independent evaluation of the Comprehensive Centers.


The evaluation of the current Comprehensive Centers began in September 2006. NCEE designed this evaluation both to meet the requirements of the legislative mandate (described below) and to support program improvement. On an annual basis for each of three years, the evaluation collects information about each Center’s annual work plans, the products and services it provides, and its collaboration with the states, with other Centers, and with other ED-supported technical assistance providers. In addition, independent experts review the quality of each Center’s services and products, and clients complete annual surveys about their experience with the same set of services and products.


The statute established the following specific goals for the evaluation:


  • Analyze the services provided by the Centers

  • Determine the extent to which each of the Centers meets the objectives of its respective plan

  • Determine whether the services offered by each Center meet the educational needs of SEAs



Through implementation of the data collection efforts described in the evaluation’s previous OMB submissions, the evaluation has collected information that will be used to describe the goals, structure, and operations of the 21 Comprehensive Centers, assess the quality, relevance, and usefulness of the Centers’ work, and describe the extent to which this work addresses states’ priorities for technical assistance. The evaluation has also collected a limited amount of descriptive information related to the expansion of states’ capacity to implement key provisions of NCLB. The proposed data collection seeks to gather more systematic information regarding the expansion of states’ capacity through the work of the Comprehensive Centers.


The evaluation, as designed by NCEE, included an optional task to conduct Case Studies to identify lessons learned through the experiences of the Comprehensive Centers and to provide examples of expanded state capacity to meet SEA requirements under NCLB. The data collected through the proposed case studies will provide in-depth descriptive information related to the nature of capacity-building initiatives undertaken by the Comprehensive Centers, how RCCs and CCCs work together to build the capacity of state staff, and specific examples of how capacity has or has not been expanded from the perspective of key SEA officials.


Exhibit 1 summarizes the data collection activities planned for the evaluation. The first four instruments are those for which OMB approval is being sought through this request. The other instruments are displayed for information purposes only, as they were approved by OMB in earlier submissions.

Exhibit 1: Data Collection Plan

Instrument | Respondent | Data Collection Period(s) | Key Data
RCC omnibus case study interview protocol | RCC staff | Spring-Summer 2009 | Descriptive information on capacity-building initiatives, the nature of interactions between state and Center staff, examples of expanded capacity
CCC omnibus case study interview protocol | Content Center staff | Spring-Summer 2009 | Descriptive information on capacity-building initiatives, the nature of interactions between state and Center staff, the nature of interactions between Centers, examples of expanded capacity
SEA omnibus case study interview protocol | SEA staff | Spring-Summer 2009 | Descriptive information on capacity-building initiatives, the nature of interactions between state and Center staff, the nature of interactions between Centers, examples of expanded capacity
SEA ARRA protocol | SEA staff | Spring-Summer 2009 | Descriptive information on implications of ARRA for states’ education agencies
RCC Staff Site Visit Interview Protocol* | RCC Directors | Spring-Summer 2007 | Descriptive information about centers’ goals, structure and operations
CCC Staff Site Visit Interview Protocol* | CCC Directors | Spring-Summer 2007 | Descriptive information about centers’ goals, structure and operations
Project Inventory Form* | RCC and CCC Directors | Spring-Summer 2007, 2008, and 2009 | A list of the products and services provided by each Center
Request for Materials for Expert Review Panel* | RCC and CCC Directors | Spring-Summer 2007, 2008, and 2009 | Documents and artifacts associated with projects selected for rating by review panel
Survey of State-Level Project Participants* | SEA staff, intermediate education agency staff | Fall 2007, 2008, and 2009 | Ratings of the relevance and usefulness of Center projects; data on capacity to meet the goals of NCLB
Survey of RCC-Level Project Participants* | RCC staff | Fall 2007, 2008, and 2009 | Ratings of the relevance and usefulness of Center projects; data on capacity to meet the goals of NCLB
Survey of Senior State Managers* | Senior SEA managers | Fall 2007, 2008, and 2009 | Description of state needs, data on other sources of assistance, ratings of the usefulness of other sources of assistance, ratings of the relevance and usefulness of Center projects, data on SEA capacity

*These data collection instruments were previously submitted and approved under OMB No. 1850-0823.


A sample of 10 cases will be identified for this portion of the evaluation. We define a case as an SEA and its corresponding RCC as well as relevant CCCs. Through semi-structured interviews with SEA staff and staff at the relevant Comprehensive Centers (both RCCs and CCCs), the study team will collect information that describes the nature of technical assistance and capacity-building initiatives undertaken in the selected states and provides examples of how and why state capacity has or has not been expanded, as perceived by SEA officials.


To support the information needs of officials within ED, in light of the challenges and opportunities related to the recently adopted American Recovery and Reinvestment Act (ARRA), we have also added a protocol to learn about the potential needs for technical assistance that ARRA implementation poses for key SEA officials in states selected for the case study.


A.2 How the Information Will Be Collected, by Whom, and For What Purpose

Purpose of the Information Collection


The overall goal of the case studies described in this submission is to address one of the key concerns of the national evaluation of the Comprehensive Technical Assistance Centers – the extent to which the Comprehensive Centers have expanded SEA capacity to meet the goals of NCLB. SEAs are charged with several responsibilities related to the implementation of NCLB, among them:


  • Building or managing a statewide system of support (SSOS) for districts and schools identified for improvement under NCLB;

  • Training or managing the state-level staff or school support teams (SST) who provide support to districts and schools identified for improvement under NCLB;

  • Supporting use of assessment data by schools and districts;

  • Disseminating information on scientifically-based research to districts and schools;

  • Providing training and other professional development to local educators in academic subjects (reading/language arts, mathematics, science);

  • Monitoring compliance with NCLB requirements in districts and schools;

  • Communicating with parents or the public;

  • Implementing policies and practices for English language learners;

  • Administering supplemental education services and choice provisions;

  • Formulating state policies that respond to NCLB requirements; and

  • Providing technical assistance to LEA or other state-level staff.


This list of state responsibilities was identified through an analysis of the law’s key requirements for SEAs conducted by the study team. The first two responsibilities have been combined and, in the sections that follow, are referred to jointly as State Systems of Support/School Support Teams (SSOS/SST).


For each state and its corresponding RCC identified as a case, our design calls for us to focus on two of these responsibilities.


Identifying Cases


In our design phase, and in light of the resource constraints we face, we have developed a purposeful sampling strategy intended to yield the most useful combination of 10 cases (each defined as an SEA, its RCC, two NCLB topical areas, and relevant Content Centers) to support the program improvement objectives of this data collection with respect to SEA capacity building.


Three initial considerations guided this selection process. We limited the cases considered for selection to those where:


  • The underlying NCLB topical areas are priority areas for SEAs;

  • The Comprehensive Center network has provided substantial and recent technical assistance in these areas; and

  • The underlying activity included SEA capacity building as a significant component of the overall workplan.


To identify these cases, we relied on data provided by senior state managers1 in fall 2007 and fall 2008 based on their experience vis-à-vis the preceding Comprehensive Center program years.


Upon identifying possible cases for inclusion, we then analyzed the senior state manager data further and found the following preliminary themes emerging:


    • In terms of NCLB areas of responsibility, the most important area to SEAs for technical assistance (TA) is SSOS/SST. For instance, 42 of the 51 SEAs (including the 50 states and the District of Columbia) reported receiving substantial TA in this area in 2007-08. No other NCLB area was identified by more than 18 SEAs as having been a priority in which substantial Center technical assistance had been provided in 2007-08. Based on this, we decided to examine this area in each of the 10 selected SEAs and their corresponding RCCs.

    • For each SEA, we developed a capacity building index (CBI) of reported capacity building experience, using senior state manager responses in those areas that were a priority for technical assistance from the state’s perspective and in which substantial Center network technical assistance had been provided. We then computed an average CBI for each RCC, based on all SEAs within its geographic area of responsibility, and ranked the Centers. Given the variation we observed, and to provide as much coverage of the Center network as possible, we decided to select one SEA from each of 10 RCC regions.

    • To capitalize on CBI variation within our case study research design, we chose to select SEAs from RCCs at both the high and low ends of the distribution, permitting us to look for common factors in the capacity building experience among similarly rated Centers. To the extent that common themes emerge across similarly rated Centers and the three dimensions of capacity building on which we will gather data, our reporting will allow us to identify potentially useful lessons for program improvement.

    • Within each selected Regional Center, we will seek the SEA whose responses most closely reflect the overall perception of all SEAs served by that region. Thus, among the highest-ranking stratum of RCCs, we will select SEAs that provided relatively high ratings on capacity building, and vice versa. (An illustrative sketch of these averaging, ranking, and selection steps follows this list.)

    • We also identified substantial within-SEA variation in capacity building across NCLB topical areas. Thus, the second topical area within each SEA will be the area that offers the greatest contrast with the first topical area (i.e., SSOS/SST, as explained above) in reported success in capacity building.
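
To make the sampling logic above concrete, the brief sketch below illustrates in Python how the CBI averaging, RCC ranking, and SEA selection steps could be carried out. It is an illustrative sketch only: the data values, the equal split between high- and low-ranked regions, and the "closest to the regional average" rule are hypothetical stand-ins, not the study team's actual survey coding or analysis procedures.

```python
# Illustrative sketch only: hypothetical data and selection rules.
from statistics import mean

# Hypothetical CBI values derived from senior state manager responses, one per SEA.
sea_cbi = {"State A": 2.5, "State B": 1.0, "State C": 3.0, "State D": 1.5}

# Hypothetical mapping of each SEA to the RCC region that serves it.
sea_to_rcc = {"State A": "RCC 1", "State B": "RCC 1",
              "State C": "RCC 2", "State D": "RCC 2"}

# Step 1: average the CBI across all SEAs in each RCC's region.
rcc_scores = {}
for sea, rcc in sea_to_rcc.items():
    rcc_scores.setdefault(rcc, []).append(sea_cbi[sea])
rcc_avg_cbi = {rcc: mean(scores) for rcc, scores in rcc_scores.items()}

# Step 2: rank RCCs by average CBI and draw regions from both ends of the
# distribution (an equal 5/5 split is assumed here purely for illustration).
ranked = sorted(rcc_avg_cbi, key=rcc_avg_cbi.get, reverse=True)
selected_rccs = list(dict.fromkeys(ranked[:5] + ranked[-5:]))

# Step 3: within each selected region, pick the SEA whose own CBI is closest
# to the regional average, i.e., most representative of that region.
selected_cases = {
    rcc: min((s for s, r in sea_to_rcc.items() if r == rcc),
             key=lambda s: abs(sea_cbi[s] - rcc_avg_cbi[rcc]))
    for rcc in selected_rccs
}
print(selected_cases)
```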

Defining Capacity Building


In addressing the question of what role the Comprehensive Center network has played in enhancing the capacity of the states to meet NCLB requirements, we employ definitions of key terms identified in recent literature by LeFloch et al. (2008) and Linnell (2003).2


We define SEA capacity to implement NCLB as the SEA's ability to achieve its mission related to NCLB over the long term without direct assistance from external agencies. This definition, drawn from LeFloch et al., reflects seven core elements, four of which we believe the Comprehensive Center system was designed by ED to influence and three of which are not part of ED's vision for SEA capacity building by the Center network:


  • The SEA’s financial resources;

  • Legislative and gubernatorial support;

  • Number of staff within the SEA dedicated to NCLB-related tasks;

  • The ability of the SEA to provide leadership in its work with schools and LEAs;

  • Technology, systems, and infrastructure in place in the SEA;

  • The knowledge, expertise, and skills of SEA staff; and

  • The SEA’s access to expertise in key areas related to its mission.

We will focus the case study on three of the last four elements, the factors that we believe the Comprehensive Center system can influence: technology, systems, and infrastructure; the knowledge, expertise, and skills of SEA staff; and the SEA’s access to expertise in key areas related to its mission. We acknowledge the significant role that the first three factors can play in the enhancement of an SEA’s capacity. We will not focus on the remaining element of the last four, the SEA’s leadership in its work with schools and LEAs, because our data collection for this study does not include interviews at the LEA or school levels.

We define SEA capacity-building as the extent to which there is a change in the long-term ability of the SEA to achieve its core mission in future time periods. We make a distinction between capacity-building and technical assistance activities. When an organization alone cannot effectively perform certain tasks or functions, an outside agent or agency may instead perform them, or assist the organization in performing them. The provided services constitute technical assistance. Although technical assistance helps an organization achieve its mission, technical assistance may or may not build or “leave behind” any capacity within the organization. Our focus in the case study is to learn about that assistance which leaves behind capacity within the SEA.


A logic model can be used to illustrate the manner in which outside organizations (in this case, the organizations within the Comprehensive Center network) might go about enhancing the capacity of the SEAs to carry out NCLB requirements. A logic model spells out the theory (including inputs, strategies, outputs, outcomes, indicators, and data collection) according to which the capacity-building agent believes change can be brought about. In some instances, these theories of change may be intentional and well articulated in written documents; in this case, in the Annual Management Plans of the Comprehensive Centers. In other cases, these theories of change are not explicitly stated; they may be implied or understood. In still others, the theories of change may extend to technical assistance but fail to cover capacity-building.


To gain a fuller understanding of the issues surrounding NCLB capacity building that has occurred within SEAs as a result of Center actions, we plan to address the following research questions:


  • What specific approaches are used by the Comprehensive Centers in their work of enhancing state capacity to carry out the requirements of NCLB?


  • What are the structures and roles of the SEAs, RCCs and CCCs—individually and collectively—in building state capacity in high priority NCLB areas, including determining goals, implementing approaches, and measuring the success of capacity-building efforts?


  • To what extent is there evidence that Comprehensive Center activities are associated with state-reported capacity building in states’ high-priority areas related to NCLB?


  • What specific processes, relationships, or activities distinguish Centers seen as having made a comparatively large contribution to expanding state capacity from those whose contribution is seen as comparatively small?


  • How does the technical assistance SEAs receive from the Centers differ, in approach and in success in building capacity, from the other sources of technical assistance they have used?


The case study component of the evaluation will provide rich information about the capacity building efforts of the selected Centers and SEAs, though it will not be generalizable outside the scope of cases selected, given the purposive nature of site selection.


Likewise, the data collected regarding states’ concerns surrounding ARRA will provide ED with timely, detailed information on how states are responding to stimulus funding and what assistance and resources they need to seek and utilize the relevant funding streams effectively.


How the information will be collected


The case studies will employ two primary data collection methods: analyses of the RCC and CCC Annual Management Plans and intensive interviews using structured systematic interviewing protocols.


The management plans for the RCCs and CCCs working with the selected SEAs will be reviewed by members of the evaluation team to extract data related to their plans for enhancing the capacity of SEAs, whether directly or indirectly stated. The evaluation team will seek to gather information on two topics: SSOS/SST and one other topic identified as a priority for the state.


Using a standardized format that conforms to the basic elements of logic models, for each SEA we will gather data on how the RCCs and CCCs approach capacity-building with the SEA. The basic elements will include inputs, strategies, outputs, outcomes, indicators and data collection strategies.
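
As a purely illustrative aid, the sketch below shows one way such a standardized extraction format could be represented in Python. The field names mirror the basic logic model elements listed above; the class itself and the example values are hypothetical and are not drawn from any actual management plan.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModelEntry:
    """Standardized record of one Center's capacity-building approach with one
    SEA on one NCLB topic, extracted from management plans and/or interviews.
    Field names follow the basic elements named in the text; the example
    content below is hypothetical."""
    center: str                      # RCC or CCC name
    sea: str                         # State education agency
    topic: str                       # e.g., "SSOS/SST"
    source: str                      # "management plan" or "interview"
    inputs: List[str] = field(default_factory=list)
    strategies: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    outcomes: List[str] = field(default_factory=list)
    indicators: List[str] = field(default_factory=list)
    data_collection: List[str] = field(default_factory=list)

# Hypothetical example record used to seed the interview discussion.
example = LogicModelEntry(
    center="RCC (hypothetical)",
    sea="State A (hypothetical)",
    topic="SSOS/SST",
    source="management plan",
    inputs=["Center staff time", "research base on statewide systems of support"],
    strategies=["coaching SEA staff", "co-developing tools"],
    outputs=["SSOS design documents", "training sessions delivered"],
    outcomes=["SEA staff able to run SSOS without Center assistance"],
    indicators=["number of SEA-led trainings per year"],
    data_collection=["interview follow-up", "document review"],
)
```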


We will incorporate this information, collected from the management plans, and use it to help facilitate the discussion about capacity-building, giving the Center and SEA respondents the opportunity to verify, modify or expand upon it. Where the management plans do not yield sufficient information to complete a logic model, we will rely upon our interviews alone as the source of information about the Centers’ approach to capacity-building.


The interviews will be conducted during site visits to selected states and the relevant CCCs and RCCs. Interviews will be tape recorded, provided the respondents agree, and transcribed. The data collected from respondents will include recollections of activities or experiences and, in some cases, their perceptions or opinions about the intentions or outcomes of specific processes or activities. Information based on respondents’ perceptions and opinions will be identified as such. Our presentations of the gathered information will capture the input from all respondents on the questions drawn from the protocols for their respective institutions.


Information regarding states’ responses to ARRA will be collected during site visits to the selected SEAs. Interviewees will be asked to discuss their states’ reactions to ARRA, plans for utilizing these funds, and ways ED and the Center network can support them in their efforts. Again, all interviews will be tape recorded, provided the respondents agree, and transcribed.


Within states, whom would we select to interview?


To examine research questions regarding SEA capacity building, the study team will interview:


  • Commissioner, Deputy Commissioner, or Associate Commissioner of Education

  • Respondents at the next level down – e.g., the Title I Coordinator, Director of School Improvement, Director of Curriculum and Instruction.


  • Someone at the next level down – individuals detailed to work on the activities or projects associated with SSOS/SST and other key topics that will serve as the focus of the case studies.


  • To gather information on states’ responses to ARRA, the study team will interview one SEA official not interviewed about the SEA’s capacity building work with the Centers (see above). To identify the SEA official to be interviewed, the study team will seek guidance from the senior state manager as identified previously by the RCC.


Within the Comprehensive Center system, whom would we select to interview?


To examine research questions regarding SEA capacity building, the study team will interview:


  • RCC Directors


  • RCC staff assigned to the state (State Liaison or State Manager)


  • RCC staff person(s) dealing with state actors on the key issues that are the focus of the case studies – e.g., the lead person from the Center responsible for specific topics such as State Systems of Support.


  • CCC Directors working with the RCCs to build state capacity in a given task


  • CCC staff in direct contact with state staff


  • CCC staff working with RCC staff on this task


The RCCs and CCCs will not be interviewed regarding states’ responses to ARRA funding.


By whom will information be collected?


Two members of the evaluation team will be assigned to collect data for each case. One person will be identified as the case study leader for each case and will direct the scheduling of the case study data collection as well as the site write-up. All case study staff will be trained on the approved case study protocols, materials, and procedures required for the study. Staff members with extensive experience in conducting qualitative assessments and case studies in educational settings will be assigned to conduct this work.


A.3 Use of Improved Information Technology to Reduce Burden

This kind of information can only be obtained through one-on-one interviews. However, the contractor will collect information about the SEAs electronically prior to the site visits to the extent available (e.g., from websites, data files key staff can make available, etc.). In addition, digital recordings will be made of all interviews, provided the respondents agree, to facilitate accurate capture of the conversations. Using digital recorders may reduce the need for clarifying information through subsequent contacts with respondents.


A.4 Efforts to Identify and Avoid Duplication

The information to be collected by this data collection does not currently exist in a systematic format. Efforts are being made to coordinate and share documents with Centers’ own local evaluations in order to avoid duplication.


A.5 Efforts to Minimize Burden on Small Business or Other Entities

No small businesses will be involved as respondents. Every effort has been and will be made to minimize the burden on RCC and CCC staff, SEA staff, and other state-level staff. Respondents not available during scheduled site visits will be able to complete the interviews by phone at their convenience.


A.6 Consequences of Less-Frequent Data Collection

This submission includes interviews with SEA and Comprehensive Center personnel conducted once during the evaluation period. These interviews will collect information not otherwise available to the evaluation.


A.7 Special Circumstances Requiring Collection of Information in a Manner Inconsistent with Section 1320.5(d)(2) of the Code of Federal Regulations

There are no special circumstances associated with this data collection.


A.8 Federal Register Comments and Persons Consulted Outside the Agency

In accordance with the Paperwork Reduction Act of 1995, the Institute of Education Sciences published a notice in the Federal Register announcing the agency’s intention to request an OMB review of data collection activities. The first notice was published in the Federal Register on January 27, 2009 (volume 74, page 4743) and provided a 60-day period for public comments. One public comment was received in the 60-day period.


The data collection instruments were developed by the evaluation research team led by Branch Associates, Inc. (BAI) with Decision Information Resources, Inc. (DIR) and Policy Studies Associates, Inc. (PSA). The interview protocols will be piloted with a small sample of respondents (fewer than 10) from one SEA and its associated RCC and CCCs.


A.9 Payments to Respondents

There will be no payments made to respondents. Experience on previous studies indicates that payments are not needed for this type of research.

A.10 Assurance of Confidentiality

Every effort will be made to maintain the privacy and/or confidentiality of respondents.


All respondents included in the study will be assured that the information they provide will be used only for the purpose of this research and that the information obtained through this study will be kept confidential to the extent provided by law.


Respondents will not be identified by name in reporting on the case study by IES. Interviews with respondents will be digitally recorded (audio only), provided the respondents agree, and transcribed. Digital files and transcripts will be kept on a secure computer and destroyed after the conclusion of the study.


To ensure data security, all individuals hired by BAI, DIR, and PSA are required to adhere to strict standards of confidentiality as a condition of employment. All project staff at BAI, DIR, and PSA will sign a confidentiality agreement that contains the following stipulations (see Appendix E for form):


  • I will not reveal the name, address or other identifying information about any respondent to any person other than those directly connected to the study.


  • I will not reveal the contents or substance of the responses of any identifiable respondent or informant to any person other than a member of the project staff, except for a purpose authorized by the project director or authorized designate.


  • I will not contact any respondent or informant except as authorized by a member of the project staff.


  • I will not release a dataset or findings from this project (including for unrestricted public use or for other, unrestricted, uses) except in accordance with policies and procedures established by the project director or authorized designate.


Assurances of the voluntary nature of participation and confidentiality of responses will be included in the letters sent to the agencies.


A.11 Questions of a Sensitive Nature

The questions included on the data collection instruments for this study do not involve sensitive topics. No personal information is requested.


A.12 Estimates of Respondent Burden

Exhibit 3 below presents our estimate of the reporting burden for case study respondents.


The pilot case study will involve one state and its associated RCC and CCC(s). Fewer than 10 respondents will participate in the pilot study.


A.13 Estimates of the Cost Burden to Respondents

There are no start-up costs to respondents.


A.14 Estimates of Annualized Government Costs

The total cost to the Federal government for the National Comprehensive Technical Assistance Centers Evaluation is $7,203,836, with annual costs of $754,568 in FY 2007; $1,872,485 in FY 2008; $2,649,789 in FY 2009; $1,682,018 in FY 2010; and $244,976 in FY 2011. Of that total, approximately $573,088 (in FY 2008, 2009, and 2010) will be used for the data collection activities for which clearance is currently being requested, for an annual cost to the government of approximately $191,029.


A.15 Changes in Hour Burden

The total annual data collection burden presented in the first two OMB clearance requests submitted for this project was 1,703 hours. There is a program change of 240 burden hours, reflecting the addition to the data collection of four new instruments associated with the one-time case studies.


Exhibit 3: Estimates of Respondent Burden

Informant | Number of Responses | Number of Rounds | Average Time Per Response (Hours) | Total Respondent Burden (Hours) | Estimated Hourly Wage (Dollars)a | Estimated Cost Burden to Respondents (Dollars)

SEA Interviews3
Chief State School Officer (or designee) | 20 | 1 | 1 | 20 | $53.05 | $1,061.00
Senior State Managers | 20 | 1 | 2 | 40 | $53.05 | $2,122.00
State-level Project Staff | 20 | 1 | 2 | 40 | $38.55 | $1,542.00
Subtotal | 60 | | | 100 | | $4,725.00

RCC Staff Interviews4
RCC Director | 10 | 1 | 1 | 10 | $53.05 | $530.50
RCC Senior Staff | 30 | 1 | 2 | 60 | $38.55 | $2,313.00
Subtotal | 40 | | | 70 | | $2,843.50

CCC Staff Interviews5
CCC Director | 10 | 1 | 1 | 10 | $53.05 | $530.50
CCC Senior Staff | 30 | 1 | 2 | 60 | $38.55 | $2,313.00
Subtotal | 40 | | | 70 | | $2,843.50

Total | 140 | | | 240 | | $10,412.00

Notes:
a. Assumed annual salary of $80,200 (GS-13) for state-level project staff and RCC and CCC senior staff, and $110,294 (Senior Executive Service) for Chief State School Officers, Senior State Managers, and RCC and CCC Directors.
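
The burden and cost figures in Exhibit 3 follow directly from the response counts, per-response interview times, and assumed hourly wages; the hourly rates in note a are approximately the stated annual salaries divided by 2,080 work hours per year. The short sketch below, with the exhibit's figures hard-coded purely for illustration, reproduces that arithmetic.

```python
# Illustrative recomputation of the Exhibit 3 totals (figures copied from the exhibit).
rows = [
    # (informant, number of responses, hours per response, hourly wage in dollars)
    ("Chief State School Officer (or designee)", 20, 1, 53.05),
    ("Senior State Managers",                    20, 2, 53.05),
    ("State-level Project Staff",                20, 2, 38.55),
    ("RCC Director",                             10, 1, 53.05),
    ("RCC Senior Staff",                         30, 2, 38.55),
    ("CCC Director",                             10, 1, 53.05),
    ("CCC Senior Staff",                         30, 2, 38.55),
]

total_responses = sum(n for _, n, _, _ in rows)               # 140 responses
total_hours = sum(n * hrs for _, n, hrs, _ in rows)           # 240 burden hours
total_cost = sum(n * hrs * wage for _, n, hrs, wage in rows)  # $10,412
print(total_responses, total_hours, round(total_cost, 2))
```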


A.16 Time Schedule, Publication, and Analysis Plan

Time Schedule and Publication of Reports


The schedule shown below in Exhibit 4 displays the sequence of activities required to conduct these information collection activities and includes key dates for activities related to instrument design, data collection, analysis, and reporting. The case study activities are the subject of the current OMB submission.


Exhibit 4: Time Schedule

Activities and Deliverables | Date
Instrument Design (RCC and CCC Staff Site Visit Interview Protocols, Project Inventory Form, Request for Materials for Expert Panel Review)6 | Fall 2006
Sampling and Analysis Plan (description of the sampling plan for documents and services on the inventory that will be selected for review by independent panels, plans for survey sampling and analysis) | Winter 2007
Instrument Design (Surveys of SEA staff & RCC staff)7 | Winter/Spring 2007
Site visits with Centers and Training on Data Collection Forms | Spring 2007
First meeting of Review Panels | Fall 2007
First survey of SEA and RCC staff | Fall/Winter 2007-08
Instrument Design (Case Study Protocols) | Fall 2008
Second meeting of Review Panels | Fall 2008
Second survey of SEA and RCC staff | Fall/Winter 2008-09
Case Study Site Selection | Winter/Spring 2009
Case Study Site Visits | Summer 2009
Third meeting of Review Panels | Fall 2009
First Report | Fall 2009
Third survey of SEA and RCC staff | Fall 2009
Second Report | Early 2010
Final Report (including case study results) | Early 2011



Preparation of Data Collected Through Case Studies

All interviews will be audio-recorded and transcribed, provided the interviewees agree. Site visitors will use the resulting transcripts to prepare detailed summaries of each case (i.e., each state). These write-ups, along with the transcripts themselves, will be imported into NVivo software, which will facilitate coding, organizing, and analyzing the data collected as part of the case studies.


The study team will systematically code these data using the process described below. This process, designed to quantify documentary data, has been used successfully on other studies to code and analyze large quantities of open-ended responses.


The first stage of the analysis begins with site debriefing sessions in which the site visiting team identifies patterns and trends. Themes emerging from this process will be matched against the research questions to ensure that all areas of the research are covered. This foundation will then be developed into a list of possible codes to be used during the coding process and incorporated into the NVivo coding system.


Once the initial codes are developed, the study team will train the coders to ensure inter-coder reliability. Sample text will be reviewed and discussed in the training, and all coders will be briefed on the procedures for coding site write-ups and interview transcripts and for resolving discrepancies.


All site write-ups and transcripts will be coded in the NVivo software using the coding scheme developed by the study team. Two coders will independently code their assigned research topics across all 10 cases; this process maximizes coding consistency across cases. As coders work through the interview data, it may become clear that one or more codes should be added to the initial set. Codes added during the coding process are likely to reflect responses common to several cases.


Once the coding teams have coded responses on all topics, the study team will review all new codes and determine whether it is truly necessary to add these codes. The study team will also compare the coding results from the two independent coders to determine any discrepancies. In cases where the coders do not agree, a senior study team member will work with each of the coders on that topic to resolve the discrepancy. The outcome of this process is a final set of codes for each case and each research topic.
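
As an illustration of the double-coding comparison described above, the sketch below shows one simple way discrepancies between two independent coders could be flagged and a percent-agreement figure computed. The passages and code labels are hypothetical; the study's actual coding and reconciliation will take place in NVivo and through senior staff review, as described in the text.

```python
# Illustrative sketch: comparing two coders' independent code assignments
# for the same interview passages (all data below are hypothetical).
coder_1 = {"passage_01": {"need_skills", "activity_coaching"},
           "passage_02": {"need_infrastructure"},
           "passage_03": {"outcome_expanded_capacity"}}

coder_2 = {"passage_01": {"need_skills"},
           "passage_02": {"need_infrastructure"},
           "passage_03": {"outcome_expanded_capacity", "activity_tools"}}

# Flag passages where the two coders' code sets differ, for senior review.
discrepancies = {p: (coder_1[p], coder_2[p])
                 for p in coder_1
                 if coder_1[p] != coder_2[p]}

# Simple percent agreement across passages (exact-match criterion).
agreement = sum(coder_1[p] == coder_2[p] for p in coder_1) / len(coder_1)

print(f"{len(discrepancies)} passage(s) need reconciliation; "
      f"exact agreement = {agreement:.0%}")
```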


The final coded documents will then be reviewed to determine trends and patterns in the capacity-building work of the Centers across the selected SEAs, following the preliminary analysis plans described next.


Analysis Plans for Case Study Data


The overall goals of the case study analysis are to identify which Center services have built the capacity of SEAs, which dimensions of capacity are reported as having been expanded, and whether the focus of capacity-building varies across NCLB priority areas. The proposed analysis approach will:


  • describe similarities and differences in capacity-building needs and activities across cases (states) and NCLB topics;


  • describe the types of capacity-building activities that are provided and received, within areas of identified need; and


  • identify the extent to which Center work is perceived as having resulted in SEA’s expanded long-term capacity within each priority topic area in meeting NCLB requirements.


The first stage of the analysis will result in a series of descriptive tables identifying the extent to which capacity building needs and activities are similar or different across the cases, within and across NCLB topic areas. For example, within the area of SSOS, we would report the number of SEAs that reported needing to expand their capacity in each of the three key dimensions: technology, systems, and infrastructure; the knowledge, expertise, and skills of SEA staff; and access to expertise in key areas related to the SEA’s mission. We will also look at needs identified across topical areas to determine the extent to which needs appear to be unique to a specific topic area or whether similar needs are identified regardless of the NCLB topic. Exhibit 5 shows how this information might be reported.


Exhibit 5. Number of Sampled SEAs Identifying Capacity Building Needs, by NCLB Topic Area

Capacity Building Needs | SSOS/SST | Topic 2 | Topic 3
Systems/Infrastructure | 7 | |
Skills & Knowledge | 10 | |
Access to Expertise | 4 | |
Total Number of SEAs* | 10 | X | Y

*Given the prevalence of SSOS/SST as a priority area, all 10 cases will provide information related to this topic; other topic areas will be investigated with fewer cases, with the exact number depending on which NCLB priority areas provide the best within-state contrasts when compared with SSOS/SST capacity outcomes.
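
To illustrate how a tabulation like Exhibit 5 could be produced from the coded interview data, the sketch below counts, for each NCLB topic area, the number of SEAs with at least one coded mention of each capacity building need. The records and labels are hypothetical stand-ins for the NVivo coding output described earlier.

```python
from collections import defaultdict

# Hypothetical coded records: (SEA, NCLB topic area, capacity building need).
coded_needs = [
    ("State A", "SSOS/SST", "Systems/Infrastructure"),
    ("State A", "SSOS/SST", "Skills & Knowledge"),
    ("State B", "SSOS/SST", "Skills & Knowledge"),
    ("State B", "Topic 2",  "Access to Expertise"),
]

# Count each SEA at most once per (topic, need) cell.
cells = defaultdict(set)
for sea, topic, need in coded_needs:
    cells[(topic, need)].add(sea)

table = {cell: len(seas) for cell, seas in cells.items()}
print(table[("SSOS/SST", "Skills & Knowledge")])   # 2 SEAs in this toy example
```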


The next step in the analysis will be to describe the types of capacity building activities Centers provided and SEAs received across topical areas within an identified capacity need. Exhibit 6 illustrates how this information might be summarized for one capacity building dimension—building technology, systems, and infrastructure.


Exhibit 6. Number of Sampled SEAs Receiving Capacity Building Services from Centers Related to Building Technology, Systems and Infrastructure*

Capacity Building Activities (Systems/Infrastructure) | SSOS/SST | Topic 2 | Topic 3
Opportunities to audit key NCLB activities led by experienced RCC or CC staff | | |
Opportunities to shadow knowledgeable individuals on the RCC or CC staff | | |
Introduction to key research or practice literature or knowledge base | | |
Executive coaching or mentoring from RCC or CC staff | | |
Etc. | | |
Total Number of SEAs | | |

*Similar exhibits would be prepared for each capacity building dimension.


The next stage of the analysis will focus on combining the case study data with data from the Senior State Manager Survey relating to perceptions of the extent to which work with the Centers has resulted in expanded capacity in the longer term within each priority topic area. In particular, we will look for patterns with respect to the needs identified: do SEAs reporting that their capacity was built to a great extent in a particular priority area identify similar types of capacity building needs for that area? With respect to types of capacity building activities: do SEAs that reported their capacity was built to a great extent in a particular priority area receive similar types of capacity building assistance from Centers? How do needs and/or activities differ between priority areas where capacity was built to a great extent and those where it was built to a small extent? The same type of comparative analysis will also be applied to the use of additional TA sources (whether or not alternative TA sources were used). Again, this analysis will look for patterns both within and across NCLB topic areas.


We will follow a similar process for examining whether or not Center services helped SEAs meet their short-term NCLB requirements, within each of the three capacity building dimensions. This will be examined both within (e.g., for SSOS/SST) and across NCLB priority topic areas. In addition, for SSOS/SST and each of the other priority areas, we will examine whether responses are systematically different for SEAs that reported their capacity was built to a great extent and those that reported it was built to a small extent.


These tabulations, taken together, will provide evidence of the extent to which Center services have built the capacity of SEAs, which dimensions of capacity are reported as having been expanded, and whether the focus of capacity-building varies across NCLB priority areas. We will also be able to examine, for each NCLB topical area, whether the specific activities and/or the focus of capacity-building efforts are different for those SEAs who reported their capacity has been expanded to a great extent and those who reported their capacity has been expanded only to a small extent.


The analysis will also examine the evidence of capacity building reported by SEA respondents and the extent to which reports of capacity building during the case study interviews are consistent with the assessment of the extent of capacity building reported on the senior state manager survey. Particularly given the passage of time between fielding the senior state manager survey and conducting the case study interviews, SEA officials’ assessments of the extent of capacity building may have shifted.


The case studies will collect data from multiple respondents within each SEA, RCC, and CCC, and responses may not be consistent across respondents in a given case. Multiple respondents are being interviewed because we presume that no single person has complete knowledge of the capacity building needs and activities for the two priority areas examined in each case; thus, some respondents may report needs, activities, or outcomes that are not reported by other respondents within that case. If one SEA respondent identifies a particular need, our analysis will treat that need as existing within the SEA regardless of the information provided by other respondents at that SEA. Similarly, if one respondent reports that a specific capacity building activity was undertaken with an RCC or CCC, we will report that activity as having occurred, regardless of whether other respondents also identified it.
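
The decision rule described above amounts to taking the union of reports across respondents within a case: a need or activity is attributed to the SEA if any respondent mentions it. A minimal sketch of that rule, with hypothetical respondent data, follows.

```python
# Hypothetical interview reports from multiple respondents within one SEA case.
respondent_reports = {
    "Deputy Commissioner": {"need: data systems", "activity: RCC coaching"},
    "Title I Coordinator": {"activity: RCC coaching", "activity: tool co-development"},
    "SSOS project lead":   {"need: staff expertise"},
}

# Union across respondents: an item is attributed to the SEA if any
# respondent in the case reported it.
sea_level_findings = set().union(*respondent_reports.values())
print(sorted(sea_level_findings))
```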


Finally, the case study will collect data from RCC and CCC staff who work with SEA staff in the NCLB topic areas identified for the study. Our analysis will also tabulate the needs and capacity building activities identified by RCC and CCC staff and report patterns within and across dimensions of capacity building and within and across NCLB topic areas, from the perspective of Center staff. The analysis will thus examine the congruence (and divergence) between Center staff reports and SEA staff reports of capacity building activities.


A.17 Display of Expiration Date for OMB Approval

The Institute of Education Sciences is not requesting a waiver for the display of the OMB approval number and expiration date on the data collection instruments. All data collection instruments will display the expiration date for OMB approval.


A.18 Exceptions to Certification Statement

This submission does not require an exception to the Certification for Paperwork Reduction Act Submissions (5 CFR 1320.9).




1 Respondents to the senior state manager survey were asked: When your state requests technical assistance from outside sources, what are its priorities? To what extent is each of the following state responsibilities related to NCLB implementation a priority for the technical assistance the state requests? Respondents indicated whether each of 11 NCLB responsibilities was a major priority, moderate priority, minor priority or not at all a priority for their state.



2 Our definitions largely draw on those of LeFloch, K.C., Boyle, A., & Therriault, S.B. (2008). Help Wanted: State Capacity for School Improvement. Washington, DC: American Institutes for Research and Linnell, D. (2003). Evaluation of Capacity Building: Lessons from the Field. Washington, DC: Alliance for Nonprofit Management.



3 The number of responses assumes we will interview the Chief State School Officer (or his/her designee) and one Senior State Manager for each of two NCLB topic areas, plus two project-level SEA respondents, across a total of 10 states. One senior-level official will also be interviewed in each SEA regarding ARRA, for a total of 10 additional respondents; this burden is included in the Chief State School Officer (or designee) row.

4 The number of responses assumes we will interview the RCC Director associated with each selected state, the RCC state liaison assigned to work with each selected state, plus content expert staff who work on each of two NCLB topic areas for each of the 10 selected states.

5 The number of responses assumes we will interview the relevant CCC Director and Senior CCC staff expert in each of two NCLB topic areas for each of the 10 selected states.

6 These data collection instruments have already been cleared by OMB (OMB No. 1850-0823).

7 These data collection instruments have already been cleared by OMB (OMB No.1850-0823), revised submission.
