Responses to OMB Comments

Study of Strategies for Improving the Quality of Local Grantee Program Evaluation

OMB: 1875-0270

Date:    December 4, 2012
To:      Katrina Ingalls
From:    Noah Mann
Subject: Responses to OMB Comments





Below we present our responses to the OMB comments on ICR 201208-1875-002, Study of Strategies for Improving the Quality of Local Grantee Program Evaluation.

  1. Seemingly absent from this ICR is the statement/idea that the “findings” and impressions from the interviews are based on a purposive sample, and so cannot be generalized to a broader population. Likewise, the results of the purposive interviews can be combined with information from the document review and other evidence to speak to the various research questions and subquestions but, being based on a purposive sample, cannot and should not be represented as definitively answering any of the questions. It would be good for a statement about the limitations of a purposive sample to be placed at the end of A.1 (introducing the research questions), in A.2.2 (introducing the interview component), in B.1, and in B.2.2.



Response:

We have updated Sections A.1, A.2.2, B.1, and B.2.2 with the relevant level of detail for each section, based on the following complete statement about the limitations of the purposive sample:



The use of a purposive sample to select grantees for interviews will ensure balance across the two programs and across key grantee characteristics (such as type of evaluation and performance measures, and timing of grant).



However, the resulting sample is not necessarily representative of all grantees, so the findings generated from the interviews are not generalizable to all CSP and VPSC grantees. Nevertheless, the interview selection process will ensure that the sample includes a diverse set of grantees from both programs. Further, similar topics will be addressed across interviews with different types of respondents, including grantee project directors, grantee evaluators, technical assistance providers, grantee monitors, and federal program office staff. The findings from these interviews will be triangulated to provide a complete account of grantees’ experiences with performance reporting and evaluation, as well as the technical assistance provided in these areas.

  2. The Supporting Statement is vague in describing how ED will decide which 15-20 grantees to select for the phone interviews, saying only (most thoroughly in Section B.1) that they will be “chosen based on the distribution of grantee characteristics with respect to performance reporting and evaluation strengths and challenges.” Clearly, the broad umbrella of “purposive” sampling covers a lot of ground, and as long as the analysis is clearly conveyed as being a purposive sample, and any reports spell out and justify the thought processes in making the purposive selections, there’s nothing inherently wrong. But, for purposes of the Supporting Statement, it would be good to at least sketch out a rough idea of the attributes ED might look for in picking a particular grantee, and, at the very least, to state whether it plans to “stratify” or balance interviews between the CSP SEA and VPSC grantee lists.



Response:

We have updated Section B.1 with the following statement about the sample selection:



Grantees will be purposively selected for interviews by stratifying on:

  1. Program (CSP and VPSC);

  2. Type of evaluation and performance measures (reporting on inputs or outputs only, descriptive design, one-group pre-post design, and quasi-experimental or experimental design); and

  3. Timing of grant (current grant versus completed grant).

Grantees whose Project Directors have been involved since the beginning of the grant will also be targeted, because these Project Directors are more likely to be familiar with the application process and with any early technical assistance and resulting changes. This selection process will result in a diverse sample of grantees.
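As an illustration of this stratification logic, the sketch below shows one way such a purposive draw could be implemented in Python. It is only a hypothetical example: the field names (program, evaluation_type, grant_status, original_pd), the round-robin allocation across strata, and the target of 20 grantees are illustrative assumptions, not the study's actual selection procedure.

    # Hypothetical sketch of the stratified purposive selection described
    # above; field names and the target sample size are assumptions made
    # for illustration, not the study's actual implementation.
    import random
    from collections import defaultdict

    def select_grantees(grantees, target=20, seed=0):
        """Group grantees by the three stratification criteria, then draw
        from each stratum in turn until the target is reached, preferring
        grantees whose Project Director has served since the award."""
        rng = random.Random(seed)
        strata = defaultdict(list)
        for g in grantees:
            key = (g["program"],          # CSP or VPSC
                   g["evaluation_type"],  # e.g., descriptive, pre-post, QED/RCT
                   g["grant_status"])     # current vs. completed grant
            strata[key].append(g)

        sample = []
        # Round-robin across strata so every combination is represented.
        while len(sample) < target and any(strata.values()):
            for key in list(strata):
                pool = strata[key]
                if not pool:
                    continue
                # Prefer grantees whose Project Director has been involved
                # since the beginning of the grant.
                preferred = [g for g in pool if g.get("original_pd")]
                pick = rng.choice(preferred or pool)
                pool.remove(pick)
                sample.append(pick)
                if len(sample) == target:
                    break
        return sample

In practice, the choice within each stratum would rest on researcher judgment about grantee characteristics; the random draw here simply stands in for that judgment to keep the sketch self-contained.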



  3. Given that grantee program directors have already apparently been contacted to provide a core dump of documents for the first part of the study, it might be desirable to impose some limits, in both the introductory letter (Appendix F) and in the introduction of the interview protocol, to help the grantee/recipient better understand the magnitude of what’s being asked. To wit, though the Supporting Statement indicates that the expected interview time is 30-60 minutes, that information doesn’t appear up front in either the letter or the script. [The OMB control number paragraph in the letter does suggest an “average 1 hour” time commitment, all things considered.] Would it be useful to say, in both places, that the interview will take no longer than one hour (cap the time commitment), and then to think about interviewer instructions and training to highlight parts of the interview protocol that are absolutely essential?



Response:

We have not had to contact individual grantees to collect documentation. All grantee documents were provided to us by the CSP and VPSC program offices.



We have added a statement capping the interview at one hour to both the introductory letter and the introduction of the interview protocol.



Additional OMB Comments:

4. Will the study attempt to determine under what conditions a grantee would choose to do an impact evaluation versus a non-impact evaluation? It would be interesting to see if there are ways ED can encourage more impact evaluations.


Response:

This study will address how, if at all, ED provided technical assistance related to grantees’ ability to design and conduct impact and non-impact evaluations. However, determining the conditions under which grantees choose to conduct impact evaluations versus non-impact evaluations is outside the scope of this study. For the grantees selected for interviews, the study findings will discuss how and why those grantees selected their particular data collection methods and evaluation designs.

5. Will the study attempt to determine to what extent grantees revisit and revise their performance measures and why?

Response:

Yes, under RQ2d the study will assess the extent to which grantees revisit and revise their performance measures and why they do so. This question will be answered using findings from the grantee and technical assistance provider interviews.


6. How will this study be used by program offices?


Response:

Program offices will use the study to assess the usefulness of technical assistance designed to improve local grantee evaluations and performance reporting, and to gain a greater understanding of how performance measures and local grantee evaluations are used for program improvement and federal policymaking. The results will inform program office decisions on the design of future evaluation-related technical assistance.








