ED Response to OMB Questions 2

National Evaluation of the Comprehensive Technical Assistance Centers

OMB: 1850-0823

Responses submitted by Paul Strasberg, IES, to Questions Received from Rachel Potter on April 4, 2007 regarding the Comprehensive Technical Assistance Center Information Collection Package

Question 1: It looks like the second phase of the evaluation is a client survey. To me, the most important part of the evaluation has to do with usefulness to States, the main clients I believe, so I want to make sure that part is really thought out. Do you know if States, or clients in general, will be surveyed about the same projects chosen for the first portion of the evaluation? If so, I hope States will be able to address other issues as well. If not, shouldn't they have some input into that process? In general, how do the two (or more) phases of the evaluation relate to each other?

Response 1: While there are two submissions to OMB for this evaluation, we do not see this evaluation as occurring in multiple phases.

Within the first year of data collection, we anticipate collecting data using the instruments included in this submission [site visit protocols, project inventory forms], convening expert panels to rate a sample of CTAC work for quality, and conducting client surveys to rate relevance and usefulness. [As noted in Part B of the current submission, the request for approval of the Client Survey Form will be included in a second OMB submission that we will submit in July 2007.]

Within the second and third years of the evaluation, we anticipate using the same data collection techniques, with the exception of the site visits, which will be conducted only once.

These methods, in addition to other data collection techniques [e.g., review of Center management plans submitted to ED], will collectively allow us to meet the goals of the evaluation.

We intend to survey SEA and intermediate education agency staff about the same projects that will be rated by expert panelists for quality. These survey protocols will be designed to obtain client responses regarding the relevance and usefulness of CTAC technical assistance. We will tailor the survey protocols to elicit opinions on the relevance and usefulness of the same set of projects sampled for quality.

In addition to surveying state and intermediate service agency staff about sampled projects, we will also survey senior state staff about other issues, among them the extent to which assistance from the Centers has met state needs or has enhanced state capacity. We are currently developing these protocols and will provide them in our July 2007 submission.

Question 2: It seems like a lot of the evaluation is focused on products, or essentially the outputs, rather than on outcomes (the impact of the products or services). For example, it looks like the expert panels will review a lot of materials (meeting agendas, briefing books) rather than looking for evidence of how products, technical assistance, etc., are used. a) Is there anything the expert review panels can review to assess the impact of the materials and TA? and b) How do the centers, themselves, assess whether their work is effective?
 
Response 2: This evaluation is designed both (1) to meet the congressional mandate for an independent evaluation of the CTACs as provided by Section 204 of the Education Technical Assistance Act;1 and (2) to provide information to the Program Office that will be helpful in supporting program improvement. Rigorously determining the impact of TA on a range of client-level outcomes would require random assignment of key policy variables [e.g., appropriations, methods of TA], and we do not see such a study as being feasible at this time.

The primary goals of our study are:
 
1) to document and describe the operations of the Centers (what are the extent and array of services provided; what are the goals of the Centers and how do they match up with issues of key importance to their clients; etc.); and
 
2) to rate CTAC work for quality, relevance, and usefulness.
 
Each of the Centers must conduct an evaluation of its own work, as defined by Section 203 of the Education Technical Assistance Act. This center-conducted evaluation is designed to describe and summarize Center activities on an annual basis, and may be used to support program improvement.

As part of our document collection while on site, we expect to gather data about how these evaluations are conducted and used.

Question 3: Do the centers provide much in the way of direct training?  If so, will that be observed and evaluated?  If not, is there an assessment of the most effective methods of delivering the information, i.e., products vs. services, written materials vs. teleconferences, etc?

Response 3: We believe that the Centers utilize a wide range of methods to provide technical assistance. This may include direct training of client staff. We anticipate that the client surveys [to be included in our July 2007 submission to OMB] will ask questions of clients who have been the targets of CTAC technical assistance, which may include direct training. More generally, we are designing our survey instruments to be flexible enough to be used by Center clients, regardless of the particular mode(s) of TA delivery they may have participated in.

We would expect client responses to serve as a very good proxy for the evaluative information that could, in principle, be gained by directly observing Center TA as it is provided. It is not feasible for us to observe or evaluate the provision of TA by the Centers in real time, given that we will be reviewing Center Projects retrospectively [i.e., we will be reviewing and rating Center Projects from the 2006-07 program year after that program year is complete]. We will develop a sampling frame of Center clients who have already participated in sampled Projects for the purpose of rating relevance and usefulness.

Question 4: How is “high quality” defined?


Response 4: The foundation of our quality rating system will be the expertise in the relevant scientific evidence that panel members bring to bear in reviewing and rating each Project. The expert review panel process, currently under development, will have the following major components:


  1. Identification, recruitment and selection of experts. We will identify, recruit and select experts who are well suited to bring a deep and thorough knowledge of the scientific evidence in each field addressed by sampled Projects. Experts will be selected based on (a) their expertise in the scientific evidence in the content area of the Project under review; (b) their publication record in relevant research; (c) their knowledge of relevant practice and policy, as appropriate; and (d) their ability to conduct an objective and independent assessment of Center materials with no actual conflict of interest or appearance of one.

  2. Scoring rubric for quality. We are developing a rubric that defines key dimensions of quality suitable to the range of Center Projects we will review. Each project will be rated along several dimensions. Each dimension will be scored on a 1-to-5 scale in which each score point is clearly defined, and a rating of 4 or 5 on any particular dimension will meet the standard for high quality. Each project will be rated independently by a panel of three reviewers.

  3. Reviewer training. We will conduct an intensive, in-person training session in which reviewers learn to apply the quality rubric in a manner that ensures reliability and validity. We will include ample time in the training to ensure a high level of inter-rater reliability.

  4. Provide Project materials to expert panelists. For each selected Project from each Center, we will request full documentation, including a project cover sheet. We will provide all relevant materials to panelists for their review and rating.

  5. Defining high quality in our reports. Each project will receive a quality rating that reflects its average score across the quality dimensions. Consistent with our GPRA reporting requirement for each Center, we will report (a) the percentage of reviewed Projects rated 4 or higher as being of “high quality” and (b) the average quality score for each Center. A brief illustration of this scoring arithmetic follows this list.
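To make the scoring arithmetic in items 2 and 5 concrete, the following sketch in Python shows how a project's quality rating could be computed as the average of reviewer scores across dimensions, and how the two reported figures (the percentage of reviewed Projects rated high quality and the average quality score for each Center) could be derived. The dimension names, reviewer scores, and 4.0 threshold are hypothetical stand-ins for illustration only; this is not the evaluation's actual scoring implementation.

   # Illustrative sketch only: dimension names, reviewer scores, and the 4.0
   # threshold below are hypothetical, following the rubric description
   # (1-5 scale, three independent reviewers, "high quality" = 4 or higher).
   from statistics import mean

   HIGH_QUALITY_THRESHOLD = 4.0  # assumed cutoff per item 5

   # Each project: {dimension: [scores from the three independent reviewers]}
   projects = {
       ("Center A", "Project 1"): {
           "technical accuracy": [5, 4, 4],
           "clarity":            [4, 4, 3],
           "use of evidence":    [5, 5, 4],
       },
       ("Center A", "Project 2"): {
           "technical accuracy": [3, 3, 4],
           "clarity":            [2, 3, 3],
           "use of evidence":    [3, 4, 3],
       },
   }

   def project_rating(dimension_scores):
       # Average the three reviewers within each dimension, then average
       # across dimensions to obtain the project's overall quality rating.
       return mean(mean(scores) for scores in dimension_scores.values())

   ratings = {key: project_rating(dims) for key, dims in projects.items()}

   # GPRA-style figures for each Center
   for center in sorted({c for c, _ in ratings}):
       center_ratings = [r for (c, _), r in ratings.items() if c == center]
       pct_high = 100 * sum(r >= HIGH_QUALITY_THRESHOLD for r in center_ratings) / len(center_ratings)
       print(f"{center}: average quality score = {mean(center_ratings):.2f}; "
             f"{pct_high:.0f}% of reviewed Projects rated high quality")

Averaging reviewer scores within each dimension before averaging across dimensions weights every dimension equally; other weighting choices are possible and would be specified in the final rubric.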

1 The Educational Technical Assistance Act provides the following language defining the congressional interest in an independent evaluation by NCEE:

SEC. 204. EVALUATIONS.

The Secretary shall provide for ongoing independent evaluations by the National Center for Education Evaluation and Regional Assistance of the comprehensive centers receiving assistance under this title, the results of which shall be transmitted to the appropriate congressional committees and the Director of the Institute of Education Sciences. Such evaluations shall include an analysis of the services provided under this title, the extent to which each of the comprehensive centers meets the objectives of its respective plan, and whether such services meet the educational needs of State educational agencies, local educational agencies, and schools in the region.

