
National Evaluation of the Comprehensive Technical Assistance Centers



Statement for Paperwork Reduction Act Submission


Part B: Collection of Information Employing Statistical Methods



Contract ED-04-CO-0028







September 19, 2007





Prepared for

Institute of Education Sciences

U.S. Department of Education


Prepared by

Branch Associates, Inc.

Policy Studies Associates, Inc.

Decision Information Resources, Inc.


Contents

Part B: Collection of Information Employing Statistical Methods

B.1 Respondent Universe and Sampling Methods
B.2 Information Collection Procedures
B.3 Methods to Maximize Response Rates
B.4 Test of Procedures
B.5 Individuals Consulted on Statistical Aspects of Design



Part B: Collection of Information Employing Statistical Methods


This is the second of two clearance requests submitted to OMB for the National Evaluation of the Comprehensive Technical Assistance Centers (“Centers”). OMB has approved a procedure, described in the evaluation’s first clearance request (OMB No. 1850-0823, dated January 22, 2007),1 for sampling 127 projects from an inventory of all projects undertaken by the Comprehensive Centers in the preceding program year. This approach to selecting a sample of projects will be repeated for each of three program years: 2006-07, 2007-08, and 2008-09.


This second clearance request describes the evaluation’s proposed strategy for identifying and selecting:


  • Participants in Comprehensive Center projects (including both state-level staff and, in the case of the Content Centers, RCC staff)

  • Senior state managers, who negotiate the Centers’ scope of work in each state and supervise state-level staff participating in projects, but who may not themselves participate in Comprehensive Center projects


B.1 Respondent Universe and Sampling Methods

Participants in Comprehensive Center Projects


Assembling a Sampling Frame


Client surveys will be conducted among participants in the same set of 127 projects that are selected for expert panel review in each year (2007, 2008 and 2009) of the study. In this way, expert panel ratings of quality and client ratings of relevance and usefulness will be presented on the same set of Comprehensive Center projects.


To achieve this goal, the evaluation team will draw a sample of 127 projects in each of the three years, using methods approved by OMB, from inventories of all the projects undertaken by each Comprehensive Center during the previous program year (July 1-June 30). A sample of Comprehensive Center clients will be asked to rate each of these 127 projects for relevance and usefulness and to provide additional feedback on Comprehensive Center assistance.


To assemble a sampling frame that includes the universe of participants in these 127 projects, we will ask the Comprehensive Centers to provide the evaluation team with participant lists (including name, affiliation, and contact information) as part of the materials they will be providing on the sampled projects for expert panel review.2



For the purposes of this evaluation, state-level project participants who may be participating in either Regional Center or Content Center projects may include the following:


  • State education agency (SEA) employees


  • Employees of intermediate education agencies who provide assistance to schools on behalf of the SEA3


  • Local educators (school district administrators, principals, and teachers) who are serving on school support teams assembled and supervised by the SEA as part of a statewide system of school support for schools identified for improvement under NCLB, and


  • Local educators or employees of intermediate education agencies who are serving on a state-level work group or task force convened by the SEA.


In addition to state-level project participants, the project participant lists collected from the Content Centers will also include all RCC staff who have participated in each project. Some Content Center project lists may include no RCC staff, some may include both RCC and state-level staff, and some may be made up entirely of RCC staff, depending on the nature of the project. We will combine state-level and RCC staff in the sample frame for each Content Center project for the purposes of project-level sampling.


Size of the Respondent Universe


Based on a review of the 127 projects selected for the evaluation’s first-year sample, and on a review of data collected during Comprehensive Center site visits by evaluation team staff members, we estimate that the projects in the evaluation sample have at most 100 participants each, with a median of between 10 and 20 participants. Exhibit 8 displays our estimate that there are 3,950 participants across the 127 sampled projects. For purposes of sampling, we have defined three mutually exclusive project size categories: projects with 1 to 12 participants, projects with 13 to 25 participants, and projects with 26 to 100 participants.


Exhibit 8

Number of Participants (both State-Level and RCC) in Respondent Universe and Survey Samples



                                                       Project Size Category (Number of Participants)
                                                       1-12      13-25     26-100      Total

Number of projects                                       29         30         68        127  projects

Mean number of participants per project                   7         18         47

Size of respondent universe
(number of projects * mean number of participants)      203        540      3,207      3,950  participants in universe

Mean number of participants sampled                       7         12         23

Size of survey sample
(number of projects * mean number of
participants sampled)                                   203        360      1,540      2,103  participants in sample

Completed surveys
(assuming an 85 percent response rate)                  173        306      1,309      1,788  responses



Sampling Procedures and Size of the Survey Sample for Project Participants


Given limited evaluation resources and potential burden on respondents, it is not possible to include all project participants in the survey sample. Therefore, we propose here a sampling strategy that will yield a broadly representative pool of participants to rate Comprehensive Center projects for relevance and usefulness and to provide other feedback on the nature of the Comprehensive Centers’ work.


Balancing our concerns for burden and representativeness, we propose sampling rules based on the number of participants identified by Centers in each sampled project:


  • For projects with 12 or fewer participants, we will select all participants.

  • For projects with between 13 and 25 participants, we will select a simple random sample of 12 participants.

  • For projects with between 26 and 100 participants, we will select a simple random sample of 48 percent of participants.


We estimate (see Exhibit 8) that the project-level surveys will yield a total of 1,788 completed responses from the 2,103 sampled participants, assuming an 85 percent response rate.
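
For readers who wish to trace the arithmetic, the sketch below restates the sampling rules listed above and the Exhibit 8 calculations in Python. It is an illustration only, not part of the approved study procedures; the function and variable names are ours, the per-category figures are taken directly from Exhibit 8, and small differences from the exhibit’s published totals reflect rounding in the category-level figures.

    import random

    def sample_project_participants(participants):
        # Project-level sampling rules described above (illustrative only):
        #   1-12 participants   -> include all participants
        #   13-25 participants  -> simple random sample of 12
        #   26-100 participants -> simple random sample of 48 percent
        n = len(participants)
        if n <= 12:
            return list(participants)
        if n <= 25:
            return random.sample(participants, 12)
        return random.sample(participants, round(0.48 * n))

    # Per-category figures taken from Exhibit 8.
    RESPONSE_RATE = 0.85
    projects = {"1-12": 29, "13-25": 30, "26-100": 68}      # sampled projects
    universe = {"1-12": 203, "13-25": 540, "26-100": 3207}  # participants in universe

    expected_sample = {
        "1-12": universe["1-12"],                    # census of all 203 participants
        "13-25": 12 * projects["13-25"],             # 12 per project = 360
        "26-100": round(0.48 * universe["26-100"]),  # about 1,539 (Exhibit 8 shows 1,540)
    }

    total_sample = sum(expected_sample.values())              # about 2,102 (Exhibit 8: 2,103)
    expected_completes = round(RESPONSE_RATE * total_sample)  # about 1,787 (Exhibit 8: 1,788)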


Our review of Comprehensive Center web sites suggests that there are about 240 staff members providing technical assistance to states at the Regional Comprehensive Centers. Assuming that all 240 of these staff members will be included in the sampling frame, and assuming an 85 percent response rate among RCC staff, we estimate that 204 of the 1,788 completed responses shown in Exhibit 8 will come from RCC staff. The remaining 1,584 responses will come from state-level staff.


Projects Sampled to Represent More Than One Comprehensive Center


The evaluation team will select a sample of projects independently for each Comprehensive Center, using the project inventory form that each Center has prepared and the sampling criteria developed for the evaluation. Where two or more Comprehensive Centers have collaborated on a project (e.g., a Content Center and a Regional Center, two Regional Centers, etc.), that project will appear on more than one inventory form. There is some probability that the same project will be selected for more than one Center.


For each project sampled, whether for one or more than one of the Centers, we will select one set of project participants. The cover of each survey will be tailored to reference all of the Comprehensive Centers collaborating on the project. Survey data, including ratings of relevance and usefulness, will then be used in reporting on all Comprehensive Centers for which the project has been sampled.
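
As a hypothetical illustration of this approach, the sketch below shows how a single set of survey responses for a jointly sampled project would feed the reporting for each collaborating Center. The Center names, project identifiers, and the survey_participants placeholder are invented for the example and do not refer to actual Centers or projects.

    def survey_participants(project_id):
        # Placeholder standing in for fielding the participant survey for one
        # project; the returned structure is hypothetical.
        return {"project": project_id, "relevance": [], "usefulness": []}

    # Hypothetical example: project "P-022" was conducted jointly by a Regional
    # Center and a Content Center and was sampled from both Centers' inventories.
    sampled_projects = {
        "Regional Center A": ["P-014", "P-022"],
        "Content Center B": ["P-022", "P-031"],
    }

    # One set of participants is surveyed per distinct project, regardless of
    # how many Centers the project was sampled for.
    distinct_projects = {pid for pids in sampled_projects.values() for pid in pids}
    responses = {pid: survey_participants(pid) for pid in distinct_projects}

    # Each Center's report then draws on the shared responses for "P-022".
    reporting = {center: [responses[pid] for pid in pids]
                 for center, pids in sampled_projects.items()}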




Survey of Senior State Managers


To assemble a sampling frame for this survey, the evaluation team has collected the names of the senior State Education Agency managers during site visits to the Comprehensive Centers. Data collection procedures and burden estimates associated with this information request have been described in our January 22nd submission and have already been reviewed and approved by OMB. The site visit protocol already approved by OMB asks Center directors to list their primary point(s) of contact in each state (question 9). Using this protocol, the evaluation team has collected an average of two names per state or other jurisdiction.


To finalize the survey sampling frame, we will contact the individuals named by the Centers and ensure that they are comfortable speaking for the overall needs and technical assistance resources of their agency. In some cases they may refer us to a more senior individual in the agency.


To represent the views of all State Education Agencies and the education agencies of the other jurisdictions served by the Comprehensive Centers, we will include senior managers from all states and other jurisdictions in the survey sample. We will survey at least 1 and up to 5 respondents per state, for a total sample size of 126 (an average of 2 respondents for each of the 63 jurisdictions served by the Comprehensive Centers).


B.2 Information Collection Procedures

We will begin survey administration by mailing a letter, signed by an ED official, explaining the evaluation and its importance to ED, and requesting a prompt response to the surveys. We will e-mail an individualized link to the web-based survey to each respondent, with a cover letter explaining the study and asking for their prompt response. A week before the survey is due, we will mail out a reminder postcard to all respondents, with instructions for requesting another link or paper survey, if needed. A week after the initial due date, we will send out another individualized link to all non-respondents, with instructions for requesting a paper survey. A week after the due date for this second mailing, we will mail paper copies of the survey to all non-respondents. At the same time, we will begin telephone follow-up. Each respondent will receive up to three phone calls, asking him or her to complete the survey and offering to send replacement links or paper forms.
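
The sequence above can be read as a simple administration timeline. The sketch below is illustrative only: the launch date and the lengths of the response windows are assumptions, since the actual due dates will be set when each survey is fielded.

    from datetime import date, timedelta

    # Hypothetical timeline; the launch date and response windows are assumed.
    launch = date(2008, 1, 7)                          # ED letter mailed; links e-mailed
    initial_due = launch + timedelta(weeks=4)          # assumed initial due date
    second_link = initial_due + timedelta(weeks=1)     # second link to non-respondents
    second_due = second_link + timedelta(weeks=2)      # assumed due date, second mailing
    paper_and_phone = second_due + timedelta(weeks=1)  # paper surveys; phone follow-up begins

    schedule = [
        (launch, "Mail ED letter; e-mail individualized survey links"),
        (initial_due - timedelta(weeks=1), "Mail reminder postcard to all respondents"),
        (second_link, "E-mail second individualized link to non-respondents"),
        (paper_and_phone, "Mail paper surveys; begin up to three follow-up calls"),
    ]

    for when, step in schedule:
        print(when.isoformat(), step)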


B.3 Methods to Maximize Response Rates

To maximize response rates, we will create individualized links to the survey. A week before the survey is due, we will mail a reminder postcard to all respondents, with instructions for requesting another link or a paper survey, if needed. A week after the initial due date, we will e-mail a second individualized link to all non-respondents. A week after the due date for this second mailing, we will mail paper copies of the survey to all non-respondents. At the same time, we will begin telephone follow-up. Each respondent will receive up to three telephone calls, asking him or her to complete the survey and offering to send replacement links or paper forms. We anticipate that these procedures will achieve response rates of at least 85 percent on all three surveys.


B.4 Test of Procedures

We piloted the three survey forms included with this clearance package with 9 state-level project participants, 8 RCC project participants, and 6 senior state managers. We sampled 11 Comprehensive Center projects and 6 states to include in the pilot. For each sampled project, we asked the Comprehensive Centers to provide us with a complete list of participants, in order to select one or two state-level or RCC respondents at random. For each sampled state, we asked the Comprehensive Center to provide us with the names of their main points of contact. The piloting of the surveys did not involve more than 9 individuals for each survey.


Based on the results of the pilot tests, we made some changes to improve the clarity and relevance of survey items on both the participant surveys and the senior manager surveys. For example, based on feedback collected during debriefing interviews with senior managers and project participants, we added new rows to the items on relevance and usefulness. We also simplified the skip patterns used in the participant surveys, streamlined the text included on the cover of the participant surveys, and strengthened the presentation of information on the Comprehensive Center projects that are the subject of the participant surveys. Pilot testing confirmed that our burden estimates for individual respondents (an average of 20 minutes per response) are accurate. Pilot testing did not lead to significant changes in the length or content of instruments.






B.5 Individuals Consulted on Statistical Aspects of Design

These data collection plans were developed by Branch Associates, Inc., Decision Information Resources, Inc. and Policy Studies Associates. The research team is led by Alvia Branch, Project Director. Other members of the evaluation team who worked on the design include: Brenda Turnbull (PSA), Kate Laguarda (PSA), Russell Jackson (DIR), Carol Pistorino (DIR), Cynthia Sipe (Branch Associates), and Barbara Fink (Branch Associates). Contact information for these individuals is provided below.


Paul J. Strasberg

Department of Education

(202) 219-3400


Alvia Branch

Branch Associates, Inc.

215-731-9980


Cynthia Sipe

Branch Associates, Inc.

215-731-9980


Barbara Fink

Branch Associates, Inc.

215-731-9980


Brenda Turnbull

Policy Studies Associates

(202) 939-5324


Kate Laguarda

Policy Studies Associates

(202) 939-5321


Russell Jackson

Decision Information Resources, Inc.

(713) 650-1425


Carol Pistorino

Decision Information Resources, Inc.

(650) 473-1564


1 The evaluation’s first OMB clearance request specified a sample of 6-10 projects per Center, for a total of 168 projects across the 21 Centers. After consulting with the evaluation’s Technical Work Group (TWG) and with IES staff, we have reduced the size of the project sample to 127, or 4-8 projects per center.

2 The burden associated with the request for participant lists falls within the scope of the data collection plan and burden estimates described in the evaluation’s first OMB submission and approved by OMB. Exhibit 2 of Part A of that submission anticipated that each Center would spend 4 person-hours per sampled project assembling associated documents and other materials, including project participant lists for the purpose of the client surveys.

3 There are 620 intermediate education agencies located in 42 states. Intermediate education agencies are usually established by state statute, but their governance structures and funding sources vary from state to state. Depending on the state, they are known as Area Education Agencies (AEA), Boards of Cooperative Educational Services (BOCES), Cooperative Education Service Agencies (CESA), County Offices of Education (COE), Education Service Centers/Cooperatives (ESC), Education Service Districts (ESD), Regional Education Service Agency (RESA), or Regional Education Service Centers (RESC). Association of Educational Service Agencies, “Questions Asked About Educational Service Agencies,” downloaded from http://www.aesa.us/Q&ABro04.pdf on July 6, 2007.
