
National Evaluation of the Comprehensive Technical Assistance Centers



Statement for Paperwork Reduction Act Submission


PART A: Justification



Contract ED-04-CO-0028







December 4, 2007





Prepared for

Institute of Education Sciences

U.S. Department of Education


Prepared by

Branch Associates, Inc.

Policy Studies Associates, Inc.

Decision Information Resources, Inc.






Part A: Justification

This is the second of two clearance requests submitted to OMB for the National Evaluation of the Comprehensive Technical Assistance Centers (“Centers”). OMB has approved the following data collection instruments, which are described in the evaluation’s first OMB submission (OMB No. 1850-0823, dated January 22, 2007):

    • Regional and Content Center staff interview protocols and procedures for conducting site visits to the Comprehensive Centers

    • Project Inventory Form and procedures for selecting a sample of Comprehensive Center projects to be rated by expert review panels for technical quality

    • Request for materials for expert panel review


This second OMB submission requests approval for the following client surveys:

      • Survey of State-Level Project Participants

      • Survey of RCC-Level Project Participants

      • Survey of Senior State Managers


A.1 Explanation of Circumstances That Make Collection of Data Necessary

Program Background

The 21 Comprehensive Centers provide support for the implementation of the No Child Left Behind Act (NCLB). The Educational Technical Assistance Act of 2002 authorized the Comprehensive Centers to provide technical assistance with NCLB implementation and the improvement of academic achievement. It gave discretion to the U.S. Department of Education (ED) to determine the priorities of the Centers (Sec. 207 of the Act). Using this authorization, ED designed the system of Centers now operating under cooperative agreements. First, ED charged the Centers with serving states as their primary focus because of the states’ pivotal role in supporting district and school implementation of NCLB. Second, ED established a two-tiered system of 16 Regional Comprehensive Centers (RCCs) and 5 Content Centers (Federal Register, June 3, 2005, p. 32583).

The 16 RCCs were designed to provide NCLB-related services to states in their geographic regions. Some serve only one state; others serve as many as five or six. They are expected to work closely with each state, assessing needs and assisting in many tasks, such as building statewide systems of support for districts and schools. The Regional Centers are expected to use the Content Centers as a major source of content expertise.

Under the terms of their cooperative agreements with ED, the 5 Content Centers were charged with providing research-based information, products, guidance, and knowledge on the following topics: (1) assessment and accountability; (2) instruction; (3) teacher quality; (4) innovation and improvement; and (5) high schools. The Content Centers, in turn, are expected to work closely with the Regional Centers and to develop products for them to use with states. The Content Centers may also work directly with state-level staff. The Content Centers must identify and translate research knowledge, communicating it “in ways that are highly relevant and highly useful” for state and local policy and practice (Federal Register, June 3, 2005, p. 32586). Funding for the Content and Regional Centers totals over $59 million per year.


Overview of the Evaluation


The National Evaluation of the Comprehensive Centers is Congressionally mandated under Title II of the Educational Technical Assistance Act of 2002 (Section 204), Public Law 107-279. Title II requires that the National Center for Education Evaluation and Regional Assistance (NCEE), a division of the Department's Institute of Education Sciences, provide for ongoing independent evaluation of the Comprehensive Centers.


NCEE has designed this evaluation both to meet the requirements of the legislative mandate (described below) and to support program improvement. The evaluation will collect information about each Center’s annual work plans; its collaboration with other ED-supported technical assistance providers, with the other Centers, and with the states it serves; and the products and services it provides. Independent experts will review the quality of the Centers’ services and products, and clients will complete surveys about their experience with the same set of services and products.


The statute established the following specific goals for the evaluation:


  • to analyze the services provided by the Centers

  • to determine the extent to which each of the Centers meets the objectives of its respective plan

    • to determine whether the services offered by each Center meet the educational needs of State educational agencies (SEAs), local educational agencies (LEAs), and schools in the region.


A major objective of the evaluation is to assess the quality, relevance, and usefulness of the products and services produced by the Centers. The evaluation team has developed definitions of quality, relevance, and usefulness through consultation with the evaluation’s Technical Work Group (TWG) and Department of Education staff, and through a review of existing measures.


Following procedures described in the evaluation’s first OMB submission, each Comprehensive Center has compiled a complete inventory of the projects undertaken in the previous program year. For the purposes of this evaluation, a “project” comprises a group of closely related activities and/or deliverables designed to achieve a specific outcome for a specific audience. The Comprehensive Centers have grouped their products and services in this way, at our request, to support sampling and data collection by the evaluation; they do not necessarily use the “project” as a construct in managing their work internally.


A sample of 127 projects (4-8 per Center) will be drawn from an inventory of projects compiled in each of the evaluation’s three years.1 Materials and other artifacts associated with those projects will be presented to panels of independent, highly-qualified experts to be rated for technical quality.2 In addition, a sample of participants in Comprehensive Center projects will be asked to rate those same 127 projects for relevance and usefulness on client surveys. In this way, expert panel ratings of quality and client ratings of relevance and usefulness can be presented on the same set of projects.


In addition, the evaluation will describe the goals, structure, and operations of the 21 Comprehensive Centers, the extent to which Comprehensive Center assistance addresses states’ priorities for technical assistance, and the extent to which Comprehensive Center assistance has expanded state capacity to implement key provisions of NCLB. The evaluation will track the performance of the Comprehensive Centers over three program years (2006-07, 2007-08 and 2008-09), providing the Department of Education with descriptive information and measures of performance that can be used to inform program improvement.


Data collected through the evaluation will be used by the Department of Education to measure the performance of the Centers, as required under the Government Performance and Results Act (GPRA). The performance measures that the Department of Education will apply to the Centers are:


  • the percentage of all Comprehensive Centers’ products and services that are deemed to be of high quality by an independent review panel of qualified experts or individuals with appropriate expertise to review the substantive content of the products and services;

  • the percentage of all Comprehensive Centers’ products and services that are deemed to be of high relevance to educational policy or practice by target audiences;

  • the percentage of all Comprehensive Centers’ products and services that are deemed to be of high usefulness to educational policy or practice by target audiences.


A.2 How the Information Will Be Collected, by Whom, and For What Purpose

Evaluation Questions


The client surveys described in this submission will address five evaluation questions:


  1. To what extent have SEAs relied on other sources of technical assistance besides the Centers? Which other sources? How does the usefulness of Center projects compare with the usefulness of projects from other sources?


  2. To what extent have Center projects expanded SEA or Regional Center capacity to address underlying needs and priorities and meet the goals of NCLB?


  3. To what extent is the work of each Comprehensive Center of high relevance and of high usefulness?


  4. Has the performance of Centers in addressing underlying needs and priorities changed over time?


  5. Has the relevance or usefulness of each Center’s projects changed over time?


As noted in the evaluation’s first OMB submission, data collected from site visits, project inventory forms and expert review panels will be used to answer the following evaluation questions:


  1. What are the objectives of each Center?


  2. What kinds of products and services are provided by each Center?


  3. How do the Centers define their clients’ educational needs and priorities? (“Clients” refers to SEA staff for the Regional Centers and Regional Center staff for the Content Centers.) How do Center clients (SEAs or Regional Centers) define their needs and priorities?


  4. To what extent is the work of each Comprehensive Center of high quality?


  5. Has the quality of each Center’s projects changed over time?


Exhibit 1 below summarizes the data collection activities planned for the evaluation.


Exhibit 1: Data Collection Plan

Instrument: Regional Center Staff Site Visit Interview Protocol*
Respondent: Regional Center Directors
Schedule: Spring-Summer 2007
Key Data: Descriptive information about centers’ goals, structure and operations.

Instrument: Content Center Staff Site Visit Interview Protocol*
Respondent: Content Center Directors
Schedule: Spring-Summer 2007
Key Data: Descriptive information about centers’ goals, structure and operations.

Instrument: Project Inventory Form*
Respondent: Regional and Content Center Directors
Schedule: Spring-Summer 2007, 2008, and 2009
Key Data: A list of the products and services provided by each Center.

Instrument: Request for Materials for Expert Review Panel*
Respondent: Regional and Content Center Directors
Schedule: Spring-Summer 2007, 2008, and 2009
Key Data: Documents and artifacts associated with projects selected for rating by review panel.

Instrument: Survey of State-Level Project Participants
Respondent: SEA staff, intermediate education agency staff
Schedule: Fall 2007, 2008, and 2009
Key Data: Ratings of the relevance and usefulness of Center projects; data on capacity to meet the goals of NCLB.

Instrument: Survey of RCC-Level Project Participants
Respondent: RCC staff
Schedule: Fall 2007, 2008, and 2009
Key Data: Ratings of the relevance and usefulness of Center projects; data on capacity to meet the goals of NCLB.

Instrument: Survey of Senior State Managers
Respondent: Senior SEA managers
Schedule: Fall 2007, 2008, and 2009
Key Data: Description of state needs, data on other sources of assistance, ratings of the usefulness of other sources of assistance, ratings of the relevance and usefulness of Center projects, data on SEA capacity.


*These data collection instruments were previously submitted and approved. The OMB number is 1850-0823.



Surveys for two types of respondents are planned for the evaluation:


  • Project participants who are the end-users of Center assistance. They serve on task forces and work groups facilitated by the Centers and participate in Center-sponsored conferences, consultations, and other technical assistance activities. Project participants will be sampled from the same set of Center projects that will be presented to expert review panels for ratings of quality in accordance with the plan outlined and approved in our first OMB submission.3 They will be asked to provide information that will support ratings of those projects for relevance and usefulness, and to provide a range of other project-level feedback. Project participants may be SEA staff, intermediate education agency staff, or local educators working on behalf of states, or (in the case of Content Center projects) they may be RCC staff. The study team has developed two parallel survey forms for project participants: one for state-level staff who have responsibility for implementing NCLB provisions, and one for RCC staff who are responsible for providing assistance to states as they implement NCLB provisions.


  • Senior state managers who oversee the Centers’ work in each state. They themselves are not typically end-users of the Centers’ assistance, but they negotiate with the Centers to ensure that the Centers’ technical assistance corresponds to state priorities. Looking across all of the assistance received from the Comprehensive Center system (including the regional center serving the state and any of the Content Centers with which the state has worked), senior managers will report on the extent to which the assistance has addressed state needs and built the state’s capacity to implement NCLB provisions. Senior managers will also provide global ratings of the relevance and usefulness of the assistance the state has received from the Centers. They will also report on the state’s use of other sources of technical assistance.


This proposed information collection includes three surveys: 1) a survey of state-level participants in Comprehensive Center projects; 2) a survey of RCC participants in Comprehensive Center projects; and 3) a survey of senior state managers.


  1. Survey of State-Level Project Participants


Respondents will be a sample of those state-level staff members who have participated in the Comprehensive Center projects that have been sampled for expert panel review. All of these state-level staff have responsibility for implementing NCLB and/or providing assistance to districts and schools on behalf of the state. For the purposes of this evaluation, “state-level” staff include employees of state education agencies (SEAs), staff of intermediate education agencies who provide assistance to districts and schools on behalf of the state, and local educators (e.g., district superintendents, principals, teachers) who serve on state-sponsored school support teams or state-level task forces. In those Comprehensive Center projects where local educators participate because of their role in expanding state capacity, they will be included in the survey sample as “state-level” respondents.


  2. Survey of RCC-Level Project Participants


Respondents will be RCC staff who have participated in Content Center projects that have been sampled for expert panel review. All of these RCC staff are responsible for providing technical assistance to state-level staff, who, in turn, are responsible for state implementation of NCLB and for providing assistance to low-performing schools. The survey of RCC-level project participants is nearly identical to the survey for state-level project participants; the item stems have been adapted slightly to reflect the responsibilities of RCC staff.


  3. Survey of Senior State Managers


Respondents for this survey are senior managers who are familiar with all Comprehensive Center projects in the state, who negotiate the Center’s work plan for the state, and who supervise staff participating in individual projects. Typically, these senior state managers do not participate directly in projects.


A.3 Use of Improved Information Technology to Reduce Burden

We will administer the surveys on the web so that they are easily accessible to SEA and Center staff, thereby minimizing burden. For staff who prefer it, the surveys will also be available in hard copy.


A.4 Efforts to Identify and Avoid Duplication

The information to be collected does not currently exist in a systematic format. Efforts are being made to coordinate and share documents with Centers’ own local evaluations in order to avoid duplication.


A.5 Efforts to Minimize Burden on Small Business or Other Entities

No small businesses will be involved as respondents. Every effort has been and will be made to minimize the burden on Comprehensive Center staff, state education agency staff, and other state-level staff. Respondents will be able to complete the survey at their convenience (within a specified time period).


A.6 Consequences of Less-Frequent Data Collection

This submission includes surveys that will be administered once a year for three years. This data collection is necessary to report the percentage of participants that deem Comprehensive Center projects to be of high relevance and of high usefulness to educational policy or practice, among the same sample of Center projects reviewed by expert panels for quality. Less frequent data collection for these items would make it impossible to rate the relevance and usefulness of Center products annually, which is central to the Department of Education’s efforts to measure Centers’ performance and changes in their performance over time.


A.7 Special Circumstances Requiring Collection of Information in a Manner Inconsistent with Section 1320.5(d)(2) of the Code of Federal Regulations

There are no special circumstances associated with this data collection.


A.8 Federal Register Comments and Persons Consulted Outside the Agency

In accordance with the Paperwork Reduction Act of 1995, the Institute of Education Sciences published a notice in the Federal Register announcing the agency’s intention to request an OMB review of data collection activities. The first notice was published in the Federal Register, Volume 72, page 39804 on July 20, 2007 and provided a 60-day period for public comments.


One public comment was received in response to the Federal Register notice. We have addressed all of the issues raised in this comment in revisions to the surveys submitted with this clearance package. A detailed response to this comment has been submitted under separate cover.


The data collection instruments were developed by the evaluation research team led by Branch Associates, Inc. (BAI) with Decision Information Resources, Inc. (DIR) and Policy Studies Associates, Inc. (PSA). The surveys were piloted with 9 state-level project participants, 9 RCC staff members, and 6 senior state managers in Spring / Summer 2007.

A.9 Payments to Respondents

There will be no payments made to respondents. Experience on previous studies indicates that payments are not needed for this type of research.

A.10 Assurance of Confidentiality

An explicit statement regarding confidentiality will be communicated to all survey respondents. The following statement is included on the cover page of each survey, and on the letter that will be sent to all individuals in the survey samples:


Per the Education Sciences Reform Act of 2002, Title I, Part E, Section 183, responses to this data collection will be used only for statistical purposes.  The reports prepared for this study will summarize findings across the sample and will not associate responses with a specific district or individual.  We will not provide information that identifies you or your organization to anyone outside the study team, except as required by law. 


Steps to Ensure that Confidentiality Will Be Maintained


Branch Associates and its subcontractors, Policy Studies Associates and Decision Information Resources, will follow the policies and procedures required by the Education Sciences Reform Act of 2002. Title I, Part E, Section 183 of that Act requires "All collection, maintenance, use, and wide dissemination of data by the Institute" to "conform with the requirements of section 552 of title 5, United States Code, the confidentiality standards of subsection (c) of this section, and sections 444 and 445 of the General Education Provisions Act (20 U.S.C. 1232g, 1232h)." These citations refer to the Privacy Act, the Family Educational Rights and Privacy Act, and the Protection of Pupil Rights Amendment.

Branch Associates, and its subcontractors, Policy Studies Associates and Decision Information Resources, will protect the confidentiality of all information collected for the study and will use it for research purposes only.  No information that identifies any study participant will be released.  Information from participating institutions and respondents will be presented at aggregate levels in reports.  Information on respondents may be linked to their institution but not to any individually identifiable information.  No individually identifiable information will be maintained by the study team. All institution-level identifiable information will be kept in secured locations and identifiers will be destroyed as soon as they are no longer required.


No individually identifying information will be collected on any survey. Each response will be assigned an ID number, and survey data will be compiled in databases by ID number, without any information (name, affiliation, or contact information) that could be used to identify individual respondents. Files that link survey ID numbers to respondent names and contact information will be used only during survey administration, to track survey responses and to follow up with non-respondents. Once survey administration is complete, all files containing identifying information will be destroyed.


During the initial review of narrative responses to open-ended items, we will remove all identifying information from each narrative response, including names of states, state agencies, Comprehensive Centers, individuals, and other information that could potentially be used to identify respondents or the states that respondents represent, such as unique or unusual job titles, program names, or other details about local context. Identifying information will be replaced with the appropriate generic term(s), such as “[state]” or “[Comprehensive Center],” set off in brackets so that the nature of the substitution is clear.


Every effort will be made to balance the evaluation’s legislative mandate to report results for each Comprehensive Center with the need to assure confidentiality for individual respondents. Reports on the evaluation will not name individuals and will not include any information that could be used to identify individual respondents.


We will not report results from the Survey of Senior State Managers (either tabulations of close-ended items or compilations of narrative responses from open-ended items) by Center. (A letter sent to senior state managers about the survey will explain that the evaluation will not report results by Center.) In contrast, we expect that every Comprehensive Center, including smaller Centers and those that serve just one state, will have at least 10 individuals included in the project participant survey samples. Thus, it will be possible to report the results of the project participant surveys by Center without compromising the confidentiality of any individual respondents, and without giving state-level or RCC staff cause to believe that they might be jeopardizing their future relationship with their Center if they respond candidly to survey questions.


Although every measure will be taken to protect the confidentiality of the data collected, confidentiality cannot be guaranteed.


Project Staff Confidentiality Agreement


To ensure data security, all individuals hired by BAI, DIR, and PSA are required to adhere to strict standards of confidentiality as a condition of employment. All project staff at BAI, DIR, and PSA will sign a confidentiality agreement that contains the following stipulations (see Appendix E for form):


  • I will not reveal the name, address or other identifying information about any respondent to any person other than those directly connected to the study.


  • I will not reveal the contents or substance of the responses of any identifiable respondent or informant to any person other than a member of the project staff, except for a purpose authorized by the project director or authorized designate.


  • I will not contact any respondent or informant except as authorized by a member of the project staff.


  • I will not release a dataset or findings from this project (including for unrestricted public use or for other, unrestricted, uses) except in accordance with policies and procedures established by the project director or authorized designate.



A.11 Questions of a Sensitive Nature

The questions included on the data collection instruments for this study do not involve sensitive topics. No personal information is requested.


A.12 Estimates of Respondent Burden

Exhibit 2 below presents our estimate of the reporting burden for survey respondents. Time estimates are based on early results from survey pilot tests. We request continued approval for the 210 responses and 1,071 annual hours from the currently approved information collection through 2009. The annualized number of responses for this newly revised portion is 1,914 and the annualized hour burden is 632. This gives an overall total of 2,124 responses and 1,703 burden hours.

A.13 Estimates of the Cost Burden to Respondents

There are no annualized capital/startup or ongoing operation and maintenance costs associated with collecting the information. In addition to their time, which is estimated in Exhibit 2, there are no other direct monetary costs to respondents.


A.14 Estimates of Annualized Government Costs

The total cost to the Federal government for the National Comprehensive Technical Assistance Centers Evaluation is $6,630,085, and the annual cost is $887,228 in FY 2007, $1,962,373 in FY 2008, $1,827,046 in FY 2009 and $1,953,439 in FY 2010. Of that total, approximately $505,132 will be used for the data collection activities for which clearance is currently being requested, an annual cost to the government of $168,377.


A.15 Changes in Hour Burden

The total annual data collection burden presented in the original OMB Clearance Request submitted for this project was 1,071 hours. There is a program change of 632 burden hours per year due to the addition of three new instruments to the data collection. This increases the total annual data collection burden for each year of the study by an estimated 632 hours for 1,914 respondents, an average of approximately 0.33 hours per respondent.
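For reference, the sketch below reproduces this burden arithmetic from the figures in Exhibit 2 (annual response counts and the 0.33-hour average response time); rounding each product to whole hours is our assumption about how the published totals were derived.

```python
# Illustrative check of the annual burden arithmetic, using figures from Exhibit 2.
responses_per_year = {
    "State-Level Project Participants": 1584,
    "RCC-Level Project Participants": 204,
    "Senior State Managers": 126,
}
avg_hours_per_response = 0.33  # roughly 20 minutes per survey

annual_new_responses = sum(responses_per_year.values())            # 1,914
annual_new_hours = sum(round(n * avg_hours_per_response)           # 523 + 67 + 42 = 632
                       for n in responses_per_year.values())

previously_approved_responses = 210    # from the first OMB submission
previously_approved_hours = 1071

print(annual_new_responses, annual_new_hours)                      # 1914 632
print(previously_approved_responses + annual_new_responses,
      previously_approved_hours + annual_new_hours)                # 2124 1703
```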



Exhibit 2: Estimates of Respondent Burden

For each survey and administration, the entries below give the number of responses, the number of rounds, the average time per response (hours), the total respondent burden (hours), the estimated hourly wage (dollars)a, and the estimated cost burden to respondents (dollars).

Survey of State-Level Project Participants4
  Fall 2007: 1,584 responses; 1 round; 0.33 hours per response; 523 burden hours; $37.55 hourly wage; $19,639 cost burden
  Fall 2008: 1,584 responses; 1 round; 0.33 hours per response; 523 burden hours; $37.55 hourly wage; $19,639 cost burden
  Fall 2009: 1,584 responses; 1 round; 0.33 hours per response; 523 burden hours; $37.55 hourly wage; $19,639 cost burden
  Total: 4,752 responses; 1,569 burden hours; $58,917 cost burden

Survey of RCC-Level Project Participants5
  Fall 2007: 204 responses; 1 round; 0.33 hours per response; 67 burden hours; $37.55 hourly wage; $2,516 cost burden
  Fall 2008: 204 responses; 1 round; 0.33 hours per response; 67 burden hours; $37.55 hourly wage; $2,516 cost burden
  Fall 2009: 204 responses; 1 round; 0.33 hours per response; 67 burden hours; $37.55 hourly wage; $2,516 cost burden
  Total: 612 responses; 201 burden hours; $7,548 cost burden

Survey of Senior State Managers6
  Fall 2007: 126 responses; 1 round; 0.33 hours per response; 42 burden hours; $52.79 hourly wage; $2,217 cost burden
  Fall 2008: 126 responses; 1 round; 0.33 hours per response; 42 burden hours; $52.79 hourly wage; $2,217 cost burden
  Fall 2009: 126 responses; 1 round; 0.33 hours per response; 42 burden hours; $52.79 hourly wage; $2,217 cost burden
  Total: 378 responses; 126 burden hours; $6,651 cost burden

Overall Total: 5,742 responses; 1,896 burden hours; $73,116 cost burden

Notes:

a. Assumed salary of GS-13 of $78,111 annually for state-level project participants and RCC staff. Assumed salary of “Senior Executive Service” of $109,808 annually for Senior State Managers.


A.16 Time Schedule, Publication, and Analysis Plan

Time Schedule and Publication of Reports


The schedule shown below in Exhibit 3 displays the sequence of activities required to conduct these information collection activities and includes key dates for activities related to instrument design, data collection, analysis, and reporting.



Exhibit 3

Time Schedule

Instrument Design (Regional and Content Center Staff Site Visit Interview Protocols, Project Inventory Form, Request for Materials for Expert Panel Review)7: Fall 2006

Site visits with Centers and Training on Data Collection Forms: Spring 2007

Instrument Design (Surveys of SEA staff & Regional Center staff): Winter/Spring 2007

Sampling and Analysis Plan (description of the sampling plan for documents and services on the inventory that will be selected for review by independent panels, plans for survey sampling and analysis): Winter 2007

First meeting of Review Panels: Fall 2007

First survey of SEA and Regional Center staff: Fall/Winter 2007-08

First Report: Spring 2008

Second meeting of Review Panels: Fall 2008

Second survey of SEA and Regional Center staff: Fall/Winter 2008-09

Second Report: Spring 2009

Third meeting of Review Panels: Fall 2009

Third survey of SEA and Regional Center staff: Fall/Winter 2009-10

Final Report: Summer 2010



Analysis of Data Collected Through Surveys

Data collected through client surveys will serve two purposes:


  • Descriptive analysis of Comprehensive Center assistance, to be used in reporting to Congress on the extent to which the services offered by the Comprehensive Centers have met the needs of states as they carry out their responsibilities under NCLB


  • Ratings of relevance and usefulness, to be used by the Department of Education for GPRA reporting


Unit of Analysis


For the surveys of project participants (Survey of State-Level Project Participants and Survey of RCC-Level Project Participants), the primary unit of analysis will be the participant. We will report survey data as the percentage of participants, across all projects, responding in specific ways to survey items. For example, we will report the percentage of participants who judge Comprehensive Center projects to be of high relevance and of high usefulness, across all projects in the sample.


In reporting, we will combine responses from both state-level and RCC-level clients to generate findings from the project participant surveys. Because the two survey forms are parallel, this aggregation will be a straightforward process.


We will report all findings from the participant surveys by Center, for the group of 16 RCCs, the group of 5 Content Centers, and for the entire group of 21 Centers.


For the Survey of Senior State Managers, the primary unit of analysis will be the state.8 Some states and other jurisdictions will have more than one respondent for the senior managers survey, and in these cases multiple responses will be averaged together to create a single data point for each state and non-state jurisdiction.


Data from this survey will be reported as the number and percentage of states responding in specific ways to survey items. For example, we will report the percentage of states that judge the Comprehensive Centers’ assistance to be of high relevance and of high usefulness, as well as mean ratings of relevance and of usefulness across states. We will also report the percentage of states reporting that the assistance they received from the Comprehensive Centers has expanded their capacity to carry out various responsibilities related to NCLB.


However, in order to preserve the confidentiality of senior state managers, these tabulations will be reported in the aggregate only. The evaluation will not report which states responded in a specific way to each survey item, only the number and percentage that did so. (For example, a report finding might read, “Forty of 50 states reported that building or managing a statewide system of support was a major priority when requesting technical assistance from outside sources.”)

Because the RCCs serve a large number of jurisdictions other than states (12 of the 63 entities served, or nearly 20 percent), and because other jurisdictions may face different challenges in implementing NCLB and in expanding their capacity to serve low-performing districts and schools, in most cases we will report findings from the senior managers survey separately for (a) the 50 states and the District of Columbia and (b) the other jurisdictions.


Findings from the Survey of Senior State Managers will be reported at the level of the Comprehensive Center system only, because of the small size of the sample used in reporting, the need to maintain respondent confidentiality, and because senior managers will be providing feedback on all of the assistance received by their state from the Comprehensive Centers as a system (both the RCC that serves their state and any Content Centers with which they have worked).


Findings from client surveys (along with expert panel ratings) will be reported for each of three program years: 2006-07, 2007-08 and 2008-09.

Research Questions


Exhibit 5 maps the items on each of the three client surveys to the research questions they will be used to address, and notes the unit of analysis for each.


Exhibit 5

Research Questions, Survey Items, and Unit of Analysis

Relevance and Usefulness

What is the relevance and usefulness of Center projects, and are there differences across Centers?
  Survey of State-Level Project Participants: Items 4-5 (participants; responses apply to a single project)
  Survey of RCC-Level Project Participants: Items 5-6 (participants; responses apply to a single project)
  Survey of Senior State Managers: Items 8-9 (states; responses apply to the Comprehensive Center system)

Other Sources of Assistance

To what extent have SEAs relied on other sources of technical assistance besides the Centers? Which other sources? How does the usefulness of Center projects compare with the usefulness of projects from other sources?
  Survey of Senior State Managers: Items 2-4 (states)

Client Priorities

How do Center clients (SEAs or Regional Centers) define their needs and priorities?
  Survey of State-Level Project Participants: Item 7 (participants)
  Survey of RCC-Level Project Participants: Item 8 (participants)
  Survey of Senior State Managers: Item 1 (states)

Expanding Capacity

To what extent have Center projects expanded SEA or Regional Center capacity to address underlying needs and priorities and meet the goals of NCLB?
  Survey of State-Level Project Participants: Items 8-9 (participants; responses apply to a single project)
  Survey of RCC-Level Project Participants: Items 9-10 (participants; responses apply to a single project)
  Survey of Senior State Managers: Items 10-11 (states; responses apply to the Comprehensive Center system)

Program Improvement

Has the performance of Centers in addressing underlying needs and priorities changed over time?
  Survey of State-Level Project Participants: Items 7-9 (participants)
  Survey of RCC-Level Project Participants: Items 8-10 (participants)
  Survey of Senior State Managers: Items 10-11 (states)

Has the relevance or usefulness of each Center’s projects changed over time?
  Survey of State-Level Project Participants: Items 4-5 (participants)
  Survey of RCC-Level Project Participants: Items 5-6 (participants)
  Survey of Senior State Managers: Items 8-9 (states)



Ratings of Relevance and Usefulness


Data on relevance and usefulness collected through the project participant surveys will be reported in two metrics: the percentage of participants rating sampled projects as “high” in relevance and usefulness, and mean relevance and usefulness ratings across projects.


Items on the relevance and usefulness of Comprehensive Center assistance appear on both the project participant and senior manager surveys. The definitions and indicators of relevance and usefulness developed for the evaluation, in consultation with IES, include three dimensions under relevance (addressed key priorities, applied to local contexts, and actionable) and three dimensions under usefulness (ease of use, new action, and capacity for ongoing improvement). Exhibit 6 shows which survey items are intended to assess each of these dimensions. There are 8 survey items designed to measure relevance (items 4a-4h on the project participant surveys in Appendix A and Appendix B), and 11 survey items designed to measure usefulness (items 5a-5k on the project participant surveys in Appendix A and Appendix B).


Survey items on relevance and usefulness ask respondents to rate Center projects on a scale of 1 to 5, where “1” means the statement is true “to a very low degree” and “5” means the statement is true “to a very high degree.”


To combine responses to items on relevance and items on usefulness into a single rating from each respondent, we will take a simple average of scores (out of 5) on each of the three dimensions under relevance and each of the three dimensions under usefulness. In computing average dimension-level ratings of relevance and usefulness for each respondent, we will weight each individual item equally. Then, we will take an average of the three dimension-level ratings to arrive at an overall rating of relevance and an overall rating of usefulness for each respondent, where each of the three dimension-level ratings contributes equally to the overall rating.


Points 4 and 5 on the scale used in the items designed to measure relevance and usefulness are labeled “high” and “very high,” to match the language of the GPRA indicator. Participants with a mean rating of 4 or greater will be deemed to have judged projects in which they have participated to be of “high relevance” or of “high usefulness” for the purposes of GPRA reporting. We will also report mean ratings of relevance and usefulness across participants.
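The sketch below illustrates this computation for a single respondent’s relevance rating. The item-to-dimension grouping shown (4a-4c, 4d-4f, 4g-4h), the variable names, and the data layout are illustrative assumptions; the treatment of the “not able to judge” code (99) as missing follows the rule described later in this section.

```python
# Illustrative sketch: computing one respondent's overall relevance rating.
# The item-to-dimension grouping (4a-4c, 4d-4f, 4g-4h) is an assumption for illustration.
RELEVANCE_DIMENSIONS = {
    "addressed_key_priorities": ["4a", "4b", "4c"],
    "applied_to_local_context": ["4d", "4e", "4f"],
    "actionable": ["4g", "4h"],
}
NOT_ABLE_TO_JUDGE = 99  # coded 99 on the survey; treated as missing data

def dimension_mean(responses, items):
    """Average the 1-5 ratings for one dimension, ignoring missing codes."""
    valid = [responses[i] for i in items
             if i in responses and responses[i] != NOT_ABLE_TO_JUDGE]
    return sum(valid) / len(valid) if valid else None

def overall_rating(responses, dimensions):
    """Average the dimension-level means so each dimension counts equally."""
    means = [m for m in (dimension_mean(responses, items) for items in dimensions.values())
             if m is not None]
    return sum(means) / len(means) if means else None

# Hypothetical respondent: one item answered "not able to judge" (99).
respondent = {"4a": 5, "4b": 4, "4c": 4, "4d": 3, "4e": 99, "4f": 4, "4g": 5, "4h": 4}
rating = overall_rating(respondent, RELEVANCE_DIMENSIONS)
high_relevance = rating is not None and rating >= 4  # "high relevance" for GPRA reporting
print(round(rating, 2), high_relevance)  # 4.11 True
```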


These percentages and mean ratings will be weighted so that each respondent contributes equally to project-level ratings, each project contributes equally to respective Center-level ratings and each Center contributes equally to system-level ratings.
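One simple way to implement this equal-weighting scheme is as a sequence of unweighted means, so that respondents within a project, projects within a Center, and Centers within the system each carry equal weight. The sketch below uses hypothetical Center, project, and rating values.

```python
# Illustrative sketch of the equal-weighting scheme:
# respondent -> project -> Center -> system. All values are hypothetical.
def mean(values):
    values = [v for v in values if v is not None]
    return sum(values) / len(values) if values else None

# Overall relevance ratings (one per respondent), grouped by Center and project.
ratings = {
    "Center A": {"project 1": [4.1, 3.8, 4.6], "project 2": [4.4, 4.9]},
    "Center B": {"project 3": [3.2, 3.9, 4.0, 4.1]},
}

project_level = {c: {p: mean(r) for p, r in projects.items()}
                 for c, projects in ratings.items()}                         # respondents weighted equally
center_level = {c: mean(project_level[c].values()) for c in project_level}   # projects weighted equally
system_level = mean(center_level.values())                                   # Centers weighted equally

print({c: round(v, 2) for c, v in center_level.items()})  # {'Center A': 4.41, 'Center B': 3.8}
print(round(system_level, 2))                              # 4.1
```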


We will compute ratings of relevance and usefulness from the senior state managers’ survey in the same way: we will take an average of ratings on items under each dimension of relevance and of usefulness, so that each item contributes the same amount to each dimension-level rating. Then, we will take an average of the three dimension-level scores to compute an overall mean score of relevance and of usefulness for each respondent. We will then average together responses within a state or a jurisdiction to generate a state- or jurisdiction-level rating, where appropriate. We will report on the number of states (plus the District of Columbia) and the number of other jurisdictions that have judged Center assistance to be of “high relevance” or of “high usefulness” as defined above and mean ratings for both indicators as well.


The response “Not able to judge,” coded “99” on the survey instruments, will be treated as missing data when computing ratings of relevance and usefulness. That is, responses of “not able to judge” will be set aside in computing average ratings of relevance or usefulness across survey items and across respondents. Where the proportion of respondents reporting that they are not able to judge the relevance and usefulness of Center projects is high (for example, greater than 5 percent of all responses), we will report that frequency, in addition to other reporting on these survey items.


We will report results on relevance and usefulness from project participants and senior managers separately, discussing any differences in ratings in light of the different roles and perspectives of these two groups of respondents.



Exhibit 6

Dimensions of Relevance and Usefulness, with Associated Survey Items



Relevance

Addressed key priorities
  • Addressed a need or problem that my organization faces
  • Addressed an important priority of my organization
  • Addressed a challenge that my organization faces related to the implementation of NCLB

Applied to your context
  • Provided information, advice, and/or resources that could be directly applied to my organization’s work
  • Addressed our particular state context
  • Addressed my organization’s specific challenges (e.g., policy environment, leadership capacity, budget pressures, local politics)

Actionable
  • Provided information, advice, and/or resources that could be used to guide decisions about policies, programs, and practices
  • Highlighted the implications of research findings (or information about best practice) for policies, programs, or practices

Usefulness

Ease of use
  • Provided resources that were easy to understand and easy to use
  • Employed an appropriate format (e.g., a work group, a conference, individual consultation, written products)
  • Provided adequate opportunity to learn from colleagues in other states
  • Included adequate follow-up to support the use of new information and resources
  • Were timely

New action at the organizational level
  • Helped my organization solve a problem
  • Helped my organization maintain or change a policy or practice
  • Helped my organization take the next step in a longer-term improvement effort

Capacity for ongoing improvement
  • Provided my organization with information or resources that we will use again
  • Helped my organization develop a shared expertise or knowledge-base
  • Helped individuals in my organization to develop skills that they will use again




Other Sources of Assistance, Client Priorities, and Expanding Capacity


To address research questions on other sources of assistance, client priorities, and expanding capacity, we will report basic frequencies (percent of participants or number of states) from the items shown in Exhibit 5.


Like the ratings of relevance and usefulness, all basic frequencies from the project participant surveys will be weighted, so that each participant contributes equally toward the rating for each project, each project contributes equally to Center-level ratings, and each Center contributes equally to system-level ratings.

As with ratings of relevance and usefulness, we will report basic frequencies from project participants and senior managers separately, discussing any differences in ratings in light of the different roles and perspectives of these two groups.


The responses “Not applicable,” “Does not apply,” “Don’t know,” and “Too early to tell,” coded as “88” and “99” on the survey instruments, will be treated as missing data when computing basic survey frequencies. That is, these respondents will be removed from the sample when computing the percentage of participants responding positively to each survey item. Where the proportion of respondents reporting that an item is not applicable, that they don’t know, or that it is too early to tell whether a Center project has expanded state capacity is high (for example, more than 5 percent of all responses), we will report that frequency, in addition to other reporting on these survey items.
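As a concrete illustration, the sketch below applies this rule to a single hypothetical yes/no item, excluding the missing codes (88 and 99) from the denominator and flagging the item when more than 5 percent of all responses carry a missing code. The item coding and function names are assumptions for illustration.

```python
# Illustrative sketch: percent responding positively, excluding missing codes 88 and 99.
MISSING_CODES = {88, 99}

def positive_frequency(responses, positive_values):
    """Share of valid responses that are positive, plus the share coded as missing."""
    valid = [r for r in responses if r not in MISSING_CODES]
    missing_share = (len(responses) - len(valid)) / len(responses) if responses else 0.0
    positive_share = (sum(1 for r in valid if r in positive_values) / len(valid)) if valid else None
    return positive_share, missing_share

# Hypothetical item coded 1 = yes, 0 = no, 88/99 = not applicable / don't know.
responses = [1, 1, 0, 1, 99, 1, 88, 0, 1, 1]
positive, missing = positive_frequency(responses, positive_values={1})
print(round(positive, 2), round(missing, 2))  # 0.75 0.2
if missing > 0.05:  # report the missing share separately when it exceeds 5 percent
    print("Report the frequency of 'not applicable'/'don't know' responses as well.")
```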


Extent to Which Each Center Has Met the Objectives in Its Own Plan


The legislative mandate for this evaluation requires an assessment of the extent to which each Center has met the objectives of its respective plan. Through data collected in the site visits to each of the 21 Comprehensive Centers and the evaluation team’s systematic review of each Center’s annual work plans, we will be able to describe each Center’s objectives. Using information collected through the site visit interview protocols and a review of each Center’s Project Inventory Forms, we will assess the extent to which the nature and content of Center work has met Center objectives.


Program Improvement


To address research questions on program improvement, we will compare survey responses from project participants and from senior state managers over time.


We will repeat all of the analyses described here with each year of survey data, comparing average ratings of relevance and usefulness and survey frequencies in year 2 with those in year 1, and those in year 3 with those in years 1 and 2. Where it is possible to report survey data by Center, we will report over-time comparisons at the Center level as well as the program level.

Analysis of Open-Ended Items


All three of the surveys planned for the evaluation include an open-ended item that asks respondents how Comprehensive Center services could be made more relevant and more useful in the future.


During the initial review of narrative responses to open-ended items, we will remove all identifying information from each narrative response, including names of states, state agencies, Comprehensive Centers, individuals, and other information that could potentially be used to identify respondents or the states that respondents represent, such as unique or unusual job titles, program names, or other details about local context. Identifying information will be replaced with the appropriate generic term(s), such as “[state]” or “[Comprehensive Center],” set off in brackets so that the nature of the substitution by the research team is clear.
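Narrative responses will be reviewed and redacted by the study team; as a purely illustrative aid, the sketch below shows how the bracketed-substitution convention could be applied to an example sentence. The specific names and patterns are hypothetical and are not drawn from study data.

```python
# Illustrative sketch of the bracketed-substitution convention for redacting narratives.
# Patterns and names below are hypothetical; actual redaction is performed by reviewers.
import re

SUBSTITUTIONS = {
    r"\bExample Regional Comprehensive Center\b": "[Comprehensive Center]",
    r"\bState X\b": "[state]",
}

def redact(text):
    """Replace identifying terms with generic bracketed placeholders."""
    for pattern, generic in SUBSTITUTIONS.items():
        text = re.sub(pattern, generic, text)
    return text

print(redact("State X asked the Example Regional Comprehensive Center to facilitate the work group."))
# -> "[state] asked the [Comprehensive Center] to facilitate the work group."
```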


We will compile the narrative responses to these open-ended items, edited to remove identifying information, in an Excel spreadsheet or a Word table. We will organize responses under broad topic headings to facilitate review of responses. We will provide all of the narrative responses to IES to share with the Comprehensive Centers program office to inform program improvement efforts, with all information that could reveal the identity of the respondent removed, consistent with the pledge of respondent confidentiality outlined above.



A.17 Display of Expiration Date for OMB Approval

The Institute of Education Sciences is not requesting a waiver for the display of the OMB approval number and expiration date on the data collection instruments. All data collection instruments will display the expiration date for OMB approval.


A.18 Exceptions to Certification Statement

This submission does not require an exception to the certification statement for Paperwork Reduction Act submissions (5 CFR 1320.9).

1 The evaluation’s first OMB clearance request specified a sample of 6-10 projects per Center, for a total of 168 projects across the 21 Centers. After consulting with the evaluation’s Technical Work Group (TWG) and with IES staff, we have reduced the size of the project sample to 127, or 4-8 projects per center, in order to ensure that we will be able to secure an adequate number of highly-qualified expert panel members to rate the technical quality of all 127 projects.

2 See Appendix D for the rubrics and rating sheets to be used for quality ratings.

3 As defined in the first OMB clearance request submitted for this evaluation, a Comprehensive Center project is a set of activities and deliverables having a common intended outcome and, usually, addressing a single topic. Each Center is currently working with the evaluation team to itemize the projects that constitute its program of work, under procedures previously approved by OMB.

4 The number of responses assumes that we will administer surveys to a sample of participants in each of the 127 projects. Based on interviews with Comprehensive Center directors, participant lists collected during the piloting of survey instruments, and counts of project participants collected during site visits to the Centers, we estimate that the projects likely to be included in our sample will have fewer than 100 participants each, with a median of 10-20 participants. Following sampling procedures described in Part B, and assuming an 85 percent response rate, we estimate 1,584 completed responses each year.

5 The number of responses assumes that all RCC staff have participated in at least one Comprehensive Center project. According to our review of Comprehensive Center web sites, there are about 240 staff members providing technical assistance to states at the RCCs. Assuming that all 240 of these staff members will be included in the sampling frame each year, and assuming an 85 percent response rate, we estimate 204 completed responses from RCC staff each year (85 percent of 240 participants).

6 The number of responses represents an estimated maximum of 2 respondents for each of the 63 jurisdictions (the 50 states, the District of Columbia, and 12 other jurisdictions) served by the Comprehensive Centers.

7 These data collection instruments have already been cleared by OMB (OMB No. 1850-0823).

8 The Comprehensive Centers serve each of the 50 states, the District of Columbia (DC), and 12 other jurisdictions (commonwealths, territories, and freely associated states). These other jurisdictions include Puerto Rico, the Virgin Islands, American Samoa, Chuuk, Commonwealth of Northern Marianas, Federated States of Micronesia, Guam, Kosrae, Pohnpei, Republic of the Marshall Islands, Republic of Palau, and Yap. The RCCs have negotiated separate scopes of work with senior managers in each of these 63 entities (50 states, DC, and 12 other jurisdictions).


