ED-OMB QandA 12-11-07


National Evaluation of the Comprehensive Technical Assistance Centers


OMB: 1850-0823


NCEE Responses to OMB Questions on the 2nd Submission of the

Evaluation of the Comprehensive Technical Assistance Centers

Received November 30, 2007

 

  1. Would you please explain how you selected a three year/annual timeframe for this study?


Response: The evaluation reports on the Centers over a three-year period of their performance, from July 1, 2006 – June 30, 2009, corresponding to the 2nd, 3rd, and 4th years of the Centers’ five-year cooperative agreements with ED. Planning for the evaluation began when the Centers were awarded in 2005, and the evaluation was designed to cover as much of the performance of these Centers as possible so that the final report could be released before the end of the Centers’ cooperative agreements in mid-2010. Because the Centers organize their work in program years running from July 1 to June 30, we decided to organize the evaluation in an annual, parallel manner.


  2. Confidentiality:

    a. Part A-10: OMB and IES (Amy Feldman) developed an ESRA confidentiality pledge for use in IES studies.  Will you please modify the confidentiality aspects of this ICR to be in line with the ESRA confidentiality pledge?


Response: Yes. We have added the language received from Amy Feldman to section A-10 of Part A, to the covers of each of the three surveys, and to the letter to be sent to all study participants.


    b. In Part A-10 and A-16, you describe your efforts to roll survey data up into aggregates that will preserve the confidentiality of respondents.  Would you please describe your disclosure avoidance plans with regard to the narrative responses?  How will you handle disclosure avoidance for the Survey of Senior State Managers given that the primary unit of analysis will be the state?


Response: We have added text to sections A-10 and A-16 to address disclosure avoidance plans with regard to narrative responses.


As noted at the bottom of page 13, Part A, findings from the Survey of Senior State Managers will be reported at the level of the Comprehensive Center system only (rather than center by center) because of the small sample size and the need to maintain respondent confidentiality.


Data from this survey will be reported in the aggregate only, as the number and percentage of states responding in specific ways to survey items (for example, “Forty of 50 states reported that building or managing a statewide system of support was a major priority when requesting technical assistance from outside sources”). We will not report which states responded in a specific way to each survey item, only the number and percentage that did so.


We have added text to page 13, Part A to make this point clear.
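As a concrete illustration of this kind of aggregate reporting (this sketch is not part of the survey materials, and the field names and data are hypothetical), the Python example below tabulates only counts and percentages; the state identifier is used internally and never appears in what is reported.

```python
# Illustrative sketch only; field names and example data are hypothetical.
# Aggregate reporting: counts and percentages by survey item, with no
# state identifiers retained in the reported output.

def aggregate(responses, item):
    """Return the number and percentage of responding states answering 'yes' to an item."""
    n_total = len(responses)
    n_yes = sum(1 for r in responses if r[item])
    return {"item": item, "number": n_yes, "percent": round(100.0 * n_yes / n_total, 1)}

# Hypothetical records: one per responding state. The 'state' field is
# used only for internal tracking and is never reported.
responses = [
    {"state": "State A", "q_system_of_support_major_priority": True},
    {"state": "State B", "q_system_of_support_major_priority": True},
    {"state": "State C", "q_system_of_support_major_priority": False},
]

print(aggregate(responses, "q_system_of_support_major_priority"))
# -> {'item': 'q_system_of_support_major_priority', 'number': 2, 'percent': 66.7}
```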


  3. Naming of entities on correspondence:


    a. In Appendix F, IES is a named entity, but in the lead-in language of the surveys themselves IES is not mentioned.  We are hoping to prevent any confusion by being as consistent as possible with the naming of entities related to this evaluation.


Response: We have added a reference to IES to the lead-in language on each of the surveys.


    b. Similarly, is it necessary to name each of the 3 contractors in Appendix F and the surveys?  To limit confusion, if survey respondents will be dealing with only one of the 3 contractors, would you consider editing the letter to remove the other contractors’ names?


Response: State-level survey respondents and senior state managers will be dealing only with Policy Studies Associates, so we have deleted references to Branch Associates and DIR on the Senior State Managers survey and the Project Participant Survey for State-Level staff.


RCC staff are familiar with Branch Associates and DIR from their interactions with these firms on other parts of the evaluation (site visits, collection of project inventories, and collection of project materials for expert panel review). If RCC staff received a survey saying that the national evaluation of the Comprehensive Centers is being conducted for ED by Policy Studies Associates alone, they might be confused or think they had received the survey in connection with another study. On the RCC survey, we therefore recommend retaining the names of all three contractors.



    c. Who will sign the letter in Appendix F?


Response: Paul J. Strasberg, NCEE Project Officer, will sign the letter.


  4. Sampling


    a. How did IES arrive at the sample sizes of 12 participants and 48%, as described in Part B-1?


Response: Data gathered from the Comprehensive Centers indicate a very wide range in the number of participants in Center projects. In light of this variation, we have proposed an approach that has face validity as a fair representation of the underlying population, while remaining mindful of respondent burden for projects with very large numbers of participants.


We felt that for projects with 12 or fewer participants, it was best to include all participants in our sample; it would not be prudent to exclude project participants when the population is so small. For projects with between 13 and 25 participants, we felt that a sample of 12 participants would be sufficient without being unnecessarily burdensome. For projects with 26 to 100 potential respondents, we felt that sampling the same percentage of potential respondents as in projects with 25 participants (12/25 = 48 percent) would strike a balance between burden and an appropriate sample size. For each sampled project, we have assumed an 85 percent response rate among sampled participants.
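To make the rule concrete, the minimal sketch below (Python; the function names and the treatment of the 85 percent response-rate assumption are illustrative, not part of the approved protocol) applies the thresholds described above. Projects with more than 100 participants are not addressed in the description above, so the sketch does not cover them.

```python
import math
import random

def sample_participants(participants, seed=None):
    """Sketch of the participant sampling rule described above (hypothetical helper).

    - 12 or fewer participants: include everyone.
    - 13 to 25 participants: draw a simple random sample of 12.
    - 26 to 100 participants: sample 48 percent (12/25), rounded up.
    """
    rng = random.Random(seed)
    n = len(participants)
    if n <= 12:
        return list(participants)
    if n <= 25:
        return rng.sample(participants, 12)
    if n <= 100:
        return rng.sample(participants, math.ceil(0.48 * n))
    raise ValueError("Projects with more than 100 participants fall outside this sketch.")

# Expected number of completed surveys under the assumed 85 percent response rate.
def expected_completes(sample_size, response_rate=0.85):
    return sample_size * response_rate
```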


    b. How will sampling be done to select the projects to be evaluated?


Response: We will sample projects in each of the three years of this evaluation using methods approved by OMB in our first submission for this evaluation [approved April 17, 2007 as OMB Control #1850-0823]. Given limited resources, it is not possible for the evaluation team to submit every Center project to an independent review panel to rate quality; therefore, we have adopted a sampling strategy that will provide a broadly representative set of projects for expert panel review from each Center. We will draw samples of between four and eight projects from each Center, with the sample size roughly proportionate to the number of projects undertaken by each Center and its annual appropriation.


To select projects each year from each Center’s portfolio, we will:


  • Request that each Center complete a Project Inventory Form [approved by OMB in our 1st submission] after each program year. The Project Inventory Form provides a complete list of Center projects[1] carried out in each program year. We provided technical assistance, as requested, to each Center to ensure that its work was accurately reflected in its Project Inventory Form for the 2006-07 program year, and expect to do so again in the two subsequent program years.


Each year:


  • One-third of the projects in our sample for each Center (1-2 projects) are self-nominated by Center staff as best representing the work they have undertaken each year. Center staff indicate which projects they are nominating for review on the Project Inventory Form.


  • The Project Inventory Form includes Centers’ own estimates of the level of effort (major, moderate, minor) involved in each project within each program year. We select two-thirds of our sample from non-self-nominated projects for each Center, first exhausting all major projects and then, if necessary, selecting randomly among moderate projects (see the sketch after this list).
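The sketch below (Python, with hypothetical field and function names; a simplified illustration rather than the approved selection protocol) shows how these rules could be applied to one Center’s annual project inventory. Where the description above is silent, the sketch assumes that major-effort projects are sampled at random if there are more than needed, and that enough moderate-effort projects remain to complete the sample.

```python
import random

def select_projects(inventory, total_sample_size, seed=None):
    """Sketch of the annual project selection for one Center (hypothetical keys).

    Each entry in `inventory` is a dict with keys 'name', 'self_nominated'
    (bool), and 'effort' ('major', 'moderate', or 'minor').
    """
    rng = random.Random(seed)
    nominated = [p for p in inventory if p["self_nominated"]]
    others = [p for p in inventory if not p["self_nominated"]]

    # Roughly one-third of the sample (1-2 projects) comes from self-nominations.
    selected = nominated[: max(1, total_sample_size // 3)]

    # Fill the remainder from non-self-nominated projects: take major-effort
    # projects first, then sample at random among moderate-effort projects.
    n_remaining = total_sample_size - len(selected)
    major = [p for p in others if p["effort"] == "major"]
    moderate = [p for p in others if p["effort"] == "moderate"]

    if len(major) >= n_remaining:
        selected += rng.sample(major, n_remaining)
    else:
        # Assumes enough moderate-effort projects remain to complete the sample.
        selected += major + rng.sample(moderate, n_remaining - len(major))

    return selected
```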




  5. Survey question 7 on the participant surveys is dense.  Column A and Column B appear to be unrelated cognitive tasks; could they be broken into two questions?


Response: We agree, and have broken question 7 into two on both the state-level participant and RCC participant surveys. We have revised item numbers in the matrices in Section A-16 accordingly.


  6. Survey question 10 on the participant surveys is inconsistent across the two surveys.  One is a “select all” item (RCC Staff), and the other is a “select one” item (State Staff).  Is this intentional, and if so, why?


Response: As these items were originally drafted, this difference was intentional. Many RCC staff divide their time between the RCC and other technical assistance projects operated by the organization running the RCC; we initially asked RCC staff to “select all” with the goal of identifying all types of TA projects on which respondents may work. State-level staff, however, typically do not divide their time between agencies, so in their case we used a “select one” format for the question.


Upon further review, however, we conclude that, for the purposes of the evaluation, this question on the RCC staff survey is only needed to confirm that the respondent worked for an RCC during the period of the project addressed in the survey. We have rewritten this item and moved it to the beginning of the RCC staff survey (Q2), and we have revised item numbers in the matrices in Section A-16 accordingly.


Question 10 (now Question 11) on the state-level staff survey remains unchanged.


  7. Survey question 11 on the State Staff survey asks the respondent to select “the” response, but the parenthetical indicates that he or she should “select all that apply.”  Please clarify this question.


Response: We have changed the instructions to read “Select one.”


  8. What is the utility of participant survey question 12 (% of time spent)?  Should the question contain a timeframe (today, last week, next week, last month, last year, etc.)?


Response: We agree that the question will be easier for some respondents if we specify a time frame. We have added the phrase, “From July 2006 to June 2007,” to the beginning of the item stem.


Data collected from this item will permit us to describe the extent to which respondents are involved in NCLB issues—and therefore in need of Comprehensive Center technical assistance that is targeted on NCLB. The data will also allow us to compare responses from state-level participants who spend all or almost all of their time on NCLB implementation (i.e., the Comprehensive Centers’ target audience) with responses from state-level participants who spend less time on NCLB implementation.



Note on additional revision to surveys:


We have added a row to Q2 on the senior state managers survey asking about the Doing What Works web site (Q2b). Because the Doing What Works web site was launched in the fall of 2007, and because the period covered by Q2 in the first year of survey administration will be July 2006 to June 2007, we will not include this item on the survey in the first year of administration. It will, however, be included on the surveys in the second and third years of administration. The version of the senior state managers survey included with this response reflects the revised Q2.


[1] For the purposes of the evaluation, a project is defined as a group of closely related activities and/or deliverables designed to achieve a specific outcome for a specific audience.
