Program Performance Data Audits Project

OMB: 1875-0264


Statement for Paperwork Reduction Act Submission


Part B: Collection of Information Using Statistical Methods


Program Performance Data Audits Project


Contract ED-04-CO-0049


Office of Planning, Evaluation, and

Policy Development



Introduction


This clearance request is submitted to OMB for the Office of Planning, Evaluation, and Policy Development’s (OPEPD’s) audit of grant program procedures for collecting, analyzing, and reporting performance and evaluation data. This request is necessary because OPEPD within the U.S. Department of Education (ED) has contracted with Decision Information Resources, Inc. (DIR) and Mathematica Policy Research, Inc. (Mathematica) to assess the procedures for collecting and reporting program performance and evaluation data for eleven ED grant programs.


These audits and assessments will provide ED with insight into (1) whether the programs’ performance data are of high quality and the methods used to aggregate and report those data are sound; and (2) whether the local evaluations conducted by grantees (or their local evaluators) are of high quality and yield information that can be used to improve education programs. In addition, these audits will examine the strengths, weaknesses, and areas for improvement of the performance and evaluation data reporting systems used by budget and program offices and grantees.


This OMB submission requests approval for the use of interview protocols for collecting information from program grantees and their local evaluators and program officer contractors to address the major research questions associated with this project. All other data used to address the audit’s research questions will come from sources (e.g., grantee annual performance reports, local evaluation reports, guidance documentation provided by the program office, and additional interview data from federal government staff) that will not require OMB approval.


Federal legislation authorizing activities to improve the quality of elementary and secondary education programs (see Appendix A) recognizes the value of collecting and reporting high-quality performance measurement data and evaluation findings to help inform decisions. The U.S. Government Accountability Office (GAO) recognizes that both program performance measures and program evaluation play key roles in the Program Assessment Rating Tool (PART), which the Office of Management and Budget (OMB) uses to examine federal programs. Additionally, the Government Performance and Results Act of 1993 (GPRA) recognizes and encourages a complementary role for performance measurement and program evaluation.1


The audits to be conducted through this contract will provide ED with insights into the quality, strengths, and weaknesses of the data collection, guidance, and reporting systems, along with recommendations for improving data quality and usefulness. The premise for this work is that accurately collected, reported, and aggregated data on grant program performance—that is, data that measure the appropriate performance dimensions of program goals—are necessary for ED to evaluate the quality and outcomes of its programs. The program performance data audits to be conducted for a subset of ED programs will specifically address the quality of the data collection, analysis, aggregation, and reporting systems that budget and program offices and grantees use for their performance and evaluation data.


Additionally, high-quality local evaluations of those programs, conducted by grantees or their designees, could provide evidence of what is or is not working to meet program objectives and identify opportunities for improvement or further replication of successful outcomes. However, little is known about the data that these local evaluations produce, including what information is being produced beyond existing program performance measures and whether evaluation data and reports are of sufficiently high quality to help ED in program planning and replication.


The lack of information regarding the quality of data from program performance measurements and local grantee evaluations has given rise to ED’s interest in exploring the following research questions:


Research question 1. Are the data upon which programs measure performance of high quality and are the methods in use to aggregate and report on those data sound?


Subquestion 1a: What are the most common patterns, opportunities, and challenges associated with the flow, sequence, and processes for collecting and reporting high quality and useful program performance data?


Subquestion 1b: What are the key variations—across programs—in the flow, sequence, and processes for collecting and reporting high quality and useful program performance data?

Research question 2. Are the local evaluations conducted by grantees (or their local evaluators) yielding useful information?


Subquestion 2a: What are the most common patterns, opportunities, and challenges associated with the flow, sequence, and processes for collecting and reporting high quality and useful evaluation data?


Subquestion 2b: What are the key variations—across programs—in the flow, sequence, and processes for collecting and reporting high quality and useful evaluation data?


To address these research questions, the Program Performance Data Audit (PPDA) project will conduct three types of audits for the programs that ED selected for review.


  1. Data-Entry Audit. Grantees may submit their data to the program office electronically or on hard copy. However grantees submit them, the data must be transferred into the program’s or department’s aggregation system.2 The data-entry audit will assess the accuracy of the data in the aggregation system by comparing each grantee’s reported data to the data in the aggregation system. In addition, we will perform “face validity” verification, the simplest type of data verification, as part of the data-entry audit. This type of verification uses the reviewer’s judgment and knowledge about the data being reviewed to assess their reasonableness. The two most basic face validity tests ask (1) whether the data exceed reasonable upper or lower bounds and (2) whether data are missing. For example, the size of the participant population in a program or knowledge of past results can provide an upper bound when determining the face validity of data. If a grantee serves 1,000 participants, the reviewer can be certain that any subgroup of participants should number 1,000 or fewer; any subgroup with more than 1,000 participants would fail the face validity test. The missing-value test is straightforward: if a grantee submits no data for a required data element, that element fails the test. (A brief illustrative sketch of these checks follows this list.)

  2. Data-Aggregation Audit. After all the grantees’ reported data are in the aggregation system, the data are aggregated to calculate program-level performance. To determine whether the aggregation system aggregates the data accurately, the data-aggregation audit will compare the results from the aggregation system to results calculated independently by the contractor conducting this audit. The contractor will use the same input data and the same methods3 as those in the aggregation system. (This check is also illustrated in the sketch following this list.)

  3. Evaluation Audit. As part of the grants, program offices often include a requirement for grantees to conduct local evaluations. The evaluation audit will seek to determine whether grantees conducted evaluations and, if they did, whether those evaluations produced high quality reports. We will obtain completed evaluation reports when available. The presence or absence of formal evaluation reports will constitute one finding from this audit.
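To illustrate the checks described in items 1 and 2 above, the following sketch shows, in Python, a basic face-validity test (bounds and missing values), a record-level comparison of grantee-reported values against the aggregation system, and an independent re-aggregation check. The field names and the flat data layout are hypothetical; the actual layouts and methods will come from each program’s reporting and specification documents.

# Illustrative sketch only: field names (participants_served, subgroup_completers)
# and the flat dictionary layout are hypothetical stand-ins for each program's
# actual reporting specification.

def face_validity_checks(record, upper_bound):
    """Return face-validity failures (missing values or out-of-bounds counts)."""
    failures = []
    for element, value in record.items():
        if value is None:                              # missing-value test
            failures.append(f"{element}: missing")
        elif value < 0 or value > upper_bound:         # bounds test
            failures.append(f"{element}: {value} outside 0-{upper_bound}")
    return failures

def data_entry_audit(reported, aggregation_system):
    """List elements whose value in the aggregation system differs from the grantee report."""
    return {element: (value, aggregation_system.get(element))
            for element, value in reported.items()
            if aggregation_system.get(element) != value}

def data_aggregation_audit(grantee_totals, system_program_total):
    """Recompute the program-level total independently and compare it to the system's figure."""
    independent_total = sum(grantee_totals)
    return independent_total == system_program_total, independent_total

# Example: a grantee serving 1,000 participants, so no subgroup should exceed 1,000.
record = {"participants_served": 1000, "subgroup_completers": 1200, "subgroup_female": None}
print(face_validity_checks(record, upper_bound=1000))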

In addition, we have identified and adapted an evaluation quality checklist, the “Evaluation Report Checklist,” for reviewing grantees’ evaluation reports. The checklist will be used to assess whether grantees included key components of high quality evaluations in their reports. It was developed by the Western Michigan University Evaluation Center (WMUEC) and draws upon the Program Evaluation Standards (Joint Committee on Standards for Educational Evaluation, 1994) and the What Works Clearinghouse standards.4 We have adapted the checklist to include assessments of the “appropriateness” of some of the evaluation items it lists.


We chose to adapt this checklist because it has been rigorously developed, peer reviewed (http://www.wmich.edu/evalctr/checklists/editorial-board/), and tailored specifically to the assessment of evaluation reports. Our search for checklists included a literature review and online searches, including the websites of the What Works Clearinghouse and WMUEC. We selected the “Evaluation Report” checklist from the 30 or more checklists that WMUEC has developed (http://www.wmich.edu/evalctr/checklists/) as part of a project it began in 2001.


B.1. Respondent Universe and Sampling Methods

This section explains how the programs from which grantees will be sampled were identified and describes the sampling approach for each of those programs. A sampling approach is not needed for interviewing contractors and, therefore, is not addressed here.


Identifying Programs from Which to Sample Grantees


The originally proposed sample sizes for data collection for the data entry audits and the evaluation audits were based on requirements specified in the statement of work. They were intended primarily to provide a sufficient pool of local evaluation reports to allow for an assessment of the quality and rigor of these documents. The revised strategy for data entry and evaluation audits, which focuses on measuring error rates in data entry and the extent to which evaluations are conducted and used, will use a statistically based sampling strategy.


We propose a sampling strategy based on (1) a 95 percent confidence interval around a 5 percent error rate for the data entry audits and (2) a reasonable margin of error for responses to interviews with grantees (and local evaluators, if applicable), assuming a 50 percent proportion.


Table 1 provides the grantee (and local evaluator) sample sizes needed to yield estimates with a margin of error of +/- 5 percentage points, based on interview responses with a 50 percent proportion. Achieving that margin of error requires selecting all grantees in 9 of the 11 programs chosen for the Program Performance Data Audit. Because all grantees in those 9 programs would be selected, the data entry error rate for those programs would carry a zero margin of error (see the fourth column of Table 1). Accordingly, and given the likelihood of a less than 100 percent response rate for grantee interviews, we propose to perform the data entry audits and conduct interviews with all grantees in these programs:


  • Equity Assistance Centers (EAC)

  • Literacy through School Libraries (LSL)

  • English Language Acquisition (ELA) State Grants (ELA-State)

  • Voluntary Public School Choice (VPSC)

  • ELA Native American/Alaska Native Children in School Program (ELA-NA)

  • Perkins Title I – Basic State Grant (Career Tech-States)

  • Perkins Title II – Tech Prep (Tech Prep)

  • OSERS Part B – State Grants (Sp Ed-States)

  • OSERS Part C – Infants and Toddlers (Sp Ed-Infants)


For the remaining two programs (Title III National Professional Development Grants and GEAR UP), a sample of grantees will be selected. The sampling procedures for these two programs are discussed in the section that follows.


OMB clearance is also being requested for interviews with program office contractors who may have been responsible for specific tasks associated with the collection, aggregation, and reporting of grantee data. Because the number of contractor respondents may exceed nine, OMB approval is required. We anticipate one contractor for each program, with approximately two individuals per contractor having been involved with a given task. We will not interview these individuals separately; instead, we will conduct the interviews in a group setting similar to the grantee interviews. We therefore estimate that no more than eleven contractors, and no more than 22 individuals, will be interviewed.

We will, however, select this group with certainty based on information provided by the 11 program offices.

Table 1. Sample Sizes for Program Performance Data Audit Grantees

Program | Total Grantees | Sample Size (+/- 5 Percentage Points) | Margin of Error (+/- Percentage Points) for 5% Data Entry Error Rate at Sample Size
Title III National Professional Development Grants | 138 | 102 | 2.2
Literacy Through School Libraries | 57 | 57 | 0
English Language Acquisition (ELA) Title III State Grants | 52 | 52 | 0
Voluntary Public School Choice | 14 | 14 | 0
Equity Assistance Centers | 10 | 10 | 0
ELA Native American/Alaska Native Children in School Program | 9 | 9 | 0
Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) | 209 | 136 | 2.2
OSERS Part B – State Grants | 59 | 59 | 0
OSERS Part C – Infants and Toddlers | 59 | 59 | 0
Perkins Title I – Basic State Grant | 53 | 53 | 0
Perkins Title II – Tech Prep | 53 | 53 | 0
Total Sample | 713 | 604 |

NOTE: Margin of error is based on a 95% confidence interval.
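For reference, the sample sizes and margins of error shown in Table 1 can be reproduced with the standard formulas for a proportion, applying a finite population correction. The short Python sketch below illustrates the calculation; the 1.96 critical value, the ceiling rounding of sample sizes, and the treatment of a census (sample equal to population) as having zero margin of error are our assumptions about the conventions used.

import math

Z = 1.96  # two-sided 95% confidence

def required_sample_size(N, p=0.5, moe=0.05):
    """Sample size for a proportion p with margin of error moe,
    applying the finite population correction for population size N."""
    n0 = (Z ** 2) * p * (1 - p) / moe ** 2    # infinite-population size (about 384)
    n = n0 / (1 + (n0 - 1) / N)               # finite population correction
    return math.ceil(n)

def margin_of_error(N, n, p=0.05):
    """Margin of error for an estimated rate p (here a 5% data entry error rate)
    at sample size n drawn from a population of N, with the finite population correction."""
    if n >= N:                                # census: no sampling error
        return 0.0
    se = math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))
    return Z * se

# Reproduces the Title III NPD and GEAR UP rows of Table 1 (138 -> 102, 209 -> 136,
# each with a margin of error of about 2.2 percentage points for a 5% error rate).
for N in (138, 209):
    n = required_sample_size(N)
    print(N, n, round(100 * margin_of_error(N, n), 1))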


Strategy for Two Programs to be Sampled


As indicated in Table 1, 74 percent (102 of 138) of National Professional Development (NPD) grantees and 65 percent (136 of 209) of GEAR UP grantees will be selected for the PPDA sample. The same sample will be used for the data entry audit and the grantee interviews. Our sampling strategy for both programs includes a substantial majority of the grantees that received awards during the 2008–2009 program year. We believe random sampling of GEAR UP grantees will yield a representative sample suitable for the current study purposes. However, we believe simple random sampling is not appropriate for the NPD program. The NPD program provides grants to teacher education and training programs in postsecondary institutions, which means that an institution can receive more than one grant. In addition, the NPD GPRA measures address three categories of participants: in-service teachers, pre-service teachers, and paraprofessionals. Grantees can report on none of the GPRA measures, any one category of GPRA measures, or any combination of the GPRA measures. To account for variation across institutions as well as GPRA measures, we will stratify the grantees by GPRA measure category and institution. Stratifying by GPRA measure will ensure that all of the measures are evaluated; stratifying the grantees in each measure category by institution type reduces the probability that we will sample the same institution multiple times. Within those strata, sampling will be random.
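The sketch below illustrates one way the stratified selection described above could be implemented in Python. The grantee fields (measure_category, institution) and the proportional allocation of the sample across strata are illustrative assumptions; the text above specifies only the stratification variables and random selection within strata.

import random
from collections import defaultdict

def stratified_sample(grantees, total_n, seed=0):
    """Group grantee records into strata and draw a random sample within each stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for g in grantees:
        strata[(g["measure_category"], g["institution"])].append(g)

    sample = []
    for members in strata.values():
        # Proportional allocation with at least one grantee per stratum
        # (an illustrative rule, not the study's documented allocation).
        k = max(1, round(total_n * len(members) / len(grantees)))
        sample.extend(rng.sample(members, min(k, len(members))))
    return sample

# Example: draw the 102-grantee NPD sample from a hypothetical list of 138 grantee records.
# npd_sample = stratified_sample(npd_grantees, total_n=102)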


B.2. Information Collection Procedures

For the grantee interviews, we will schedule the grantee project director, a person knowledgeable about the project’s performance data, and the local evaluator (if applicable) for a 60-minute telephone interview. For the contractor interviews, we will schedule a 45-minute telephone interview. Before contacting the appropriate respondents, the contractor will draft a letter from ED (see Appendix B) to each grantee and program office contractor indicating the purpose of the call and the type of information to be collected as part of the audits.


Because the interviews (see Appendix C for the grantee, contractor, and local evaluator protocol discussion guide) will be conducted with recipients of ED grants and with ED data process contractors, and the interviewers will be contacting potential respondents on behalf of ED, we expect complete cooperation with the data collection interviews. The interviewing staff will all be experienced researchers who are familiar with the audit project and with qualitative interviewing. They will be instructed to schedule interviews at the convenience of the respondents. The three months allowed for this activity provide sufficient time to accommodate the schedules of grantee staff and complete the data collection. To help monitor the data-collection activities, the contractor will develop a master database that includes all potential interviewees, relevant contact information, and the disposition of interview attempts. This database will be used to maintain internal quality assurance. The contractor conducting the interviews will provide ED with data-collection reports every two weeks. All interviews will be audio recorded with the respondents’ permission. A consent form granting the interview staff permission to record the interview will be obtained from the grantee prior to conducting the interview (see Appendix D for the consent form).


For the interviews with grantees, no data that identifies the respondent will be shared with the program office or any other ED staff. Respondents will be randomly assigned a 3-digit ID for completion of the interview. Data will be shared outside of the immediate study team only through a separate restricted-use data file constructed by DIR. Only the randomly assigned ID will remain in the restricted-use data file; all other grantee identifiers, such as the ED grant number, will be removed in the construction of the file.
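The sketch below illustrates one way the de-identification step could be carried out: each respondent is assigned a randomly generated 3-digit ID, and direct identifiers such as the ED grant number are dropped before the restricted-use file is written. The column names and CSV output format are hypothetical.

import csv
import random

# Hypothetical direct identifiers to be removed before the restricted-use file is written.
IDENTIFIERS = {"grantee_name", "ed_grant_number", "contact_email"}

def build_restricted_use_file(interview_rows, out_path, seed=None):
    """Write interview responses to a restricted-use CSV, replacing identifiers with random 3-digit IDs."""
    rng = random.Random(seed)
    ids = rng.sample(range(100, 1000), len(interview_rows))   # unique 3-digit IDs
    with open(out_path, "w", newline="") as f:
        writer = None
        for study_id, row in zip(ids, interview_rows):
            clean = {k: v for k, v in row.items() if k not in IDENTIFIERS}
            clean["study_id"] = study_id
            if writer is None:
                writer = csv.DictWriter(f, fieldnames=list(clean))
                writer.writeheader()
            writer.writerow(clean)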


After grantee interviews are completed, the contractor will store all files containing identifying information separately from those containing interview responses. Only selected members of the study team will have access to files containing information that could be used to link interview data with identifiable respondents. Project staff will adhere to the regulations and laws regarding the confidentiality of individually identifiable information.


All analyses of grantee information will be aggregated to the program level. No results specific to an individual grantee within a given program will be provided outside of the immediate study team.

After the information has been analyzed and final reports developed, all data files will become the property of ED. Data will subsequently be destroyed in accordance with the rules and regulations specified by OMB.


B.3. Methods to Maximize Response Rates

Because all of the respondents are recipients of ED grants and contracts and this data collection is part of audits of the selected ED programs, we expect full cooperation with data-collection efforts and, as a result, response rates above 90 percent. The letter introducing the project to the grantees and data process contractors will be sent by ED to emphasize the importance of the study to ED and ED’s support for these audits. The contractor will also schedule telephone interviews at the convenience of the respondents in an effort to secure their cooperation.

B.4. Test of Procedures

We will pilot-test the data-collection process with a small number (fewer than 9) of grantees. We expect the pilot test to be conducted during the first two weeks of December 2011. Based on this test, we will adjust the data-collection processes and instruments to minimize respondent burden and to ensure that the information collected is sufficient to address the study’s research questions.


B.5. Individuals Consulted on Statistical Aspects of Design

The interview data-collection plans were developed by Decision Information Resources, Inc. (DIR) and Mathematica Policy Research, Inc. (Mathematica). The research team is led by Russell Jackson, project director. Other members of the evaluation team who worked on the design of the grantee and local evaluator interview protocol include Ken Jackson (DIR), Scott Peecksen (DIR), Alexander Millar (Mathematica), Jonathan Ladinsky (Mathematica), and William Borden (Mathematica). John Hall of Mathematica developed the sampling plan. Contact information for these individuals is provided below.


Decision Information Resources, Inc.


Russell Jackson

(832) 485-3701


Scott Peecksen

(832) 485-3724


Ken Jackson

(832) 485-3704


Mathematica Policy Research, Inc.


Alexander Millar

(202) 250-3510


Jonathan Ladinsky

(609) 275-2250


William Borden

(609) 275-2321


John Hall

(609) 275-2357

Appendix A. Legislation Authorizing Program Performance Audit

The Elementary and Secondary Education Act, as reauthorized by the No Child Left Behind Act (NCLBA) of 2001, Title V, Part D, section 5411(a) authorizes support for activities to improve the quality of elementary and secondary education programs. The Program Performance Data Audits contract, which is being conducted to ensure that high-quality data are available to inform decisions about programs designed to improve the quality of elementary and secondary education, is an allowable activity according to that section. Work conducted under this contract will be monitored by the Office of Planning, Evaluation and Policy Development.

In addition to the NCLBA, the Government Performance and Results Act of 1993 (GPRA) requires that all agencies develop performance plans for every program in an agency’s budget (section 1115). These plans must express performance goals in an objective and measurable form. The GPRA also requires performance reports for every program in each agency’s budget (section 1116). These reports must cover three years of performance results by the year 2002 and report on whether a program has met the performance goals in its performance plan. In addition, each goal that a program does not meet must be accompanied by an explanation and a plan for improvement in the program performance report.


Appendix B. Grantee Participation Request Letter


Greetings [],


The Office of Planning, Evaluation, and Policy Development (OPEPD) in the U.S. Department of Education (ED) has contracted with Decision Information Resources and Mathematica Policy Research to audit eleven program offices. In particular, our audit will assess the quality of these eleven programs’ procedures for collecting and reporting program performance and evaluation data.


The first step in our audit was to initiate communication with U.S. Department of Education program offices, including [INSERT NAME OF PROGRAM]. In the fall of 2009, we contacted ED staff at [INSERT NAME OF PROGRAM] to ask questions and receive documentation about their


  • performance reporting system

  • reporting schedules

  • procedures for aggregating grantee performance data

  • procedures for conducting and reporting evaluation results

The reviews of these documents provided a basic understanding of the activities program offices and grantees engage in to collect, analyze, and report program performance and evaluation data. However, in order to provide ED with a better and more in-depth understanding of the quality and utility of program performance and evaluation data, we recommended to ED that we conduct interviews with [INSERT NAME OF GRANTEE OR CONTRACTOR]. The detailed information gained from these interviews will supplement the program office and grantee documents we already collected and will allow us to address fully the following research questions.


Research question 1. Are the data upon which programs measure performance of high quality and are the methods in use to aggregate and report on those data sound?


Research question 2. Are the local evaluations conducted by grantees (or their local evaluators) yielding useful information?


After consulting with U.S. Department of Education [INSERT NAME OF PROGRAM] program, Budget, and Policy and Program Studies Service staff, we are now prepared to formally request your participation in these interviews. Although participation is not mandatory, as a grantee you agreed to participate in audits. Your participation will be invaluable in helping ED more effectively identify, collect, and report program performance measures that appropriately capture what your program was designed to accomplish.


We would like to emphasize that all data collected during the interview will be used only for the purpose of the audit and all efforts will be made to maintain confidentiality to the extent provided by law. For example, no data that identifies the respondent will be shared with the program office or any other ED staff. Data will be shared outside of the immediate study team only through a restricted-use data file where all grantee identifiers (i.e., respondent name, grantee number, etc.) have been removed from the file. Additionally, all results reported to ED will be reported in aggregate form for the entire program. No results will be provided for specific grantees within a program.


The interview will take approximately [INSERT 45 MINUTES FOR CONTRACTORS OR 1 HOUR FOR GRANTEES] to complete. Accordingly, we would like to schedule a brief phone call with you over the next two weeks to discuss the details of the interview, any questions you have, and the best time to schedule the interview. We will schedule the interview around your availability and respectfully request that the following three people participate: [INSERT APPROPRIATE RESPONDENTS FOR GRANTEES AND PROGRAM OFFICE CONTRACTORS]


Thank you in advance for your assistance with the data audits. I look forward to hearing from you by telephone or email.


With much appreciation,

Scott Peecksen

[email protected]

(832) 485-3724


Paperwork Reduction Act Statement:

 

According to the Paperwork Reduction Act of 1995, no persons are required to respond to a collection of information unless such collection displays a valid OMB control number.  Public reporting burden for this collection of information is estimated to average 1 hour per response for grantees and 45 minutes for program office contractors, including time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. The obligation to respond to this collection is voluntary. Send comments regarding the burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to the U.S. Department of Education, 400 Maryland Ave., SW, Washington, DC 20210-4537 or email [email protected] and reference the OMB Control Number XXXX-XXXX.



Appendix C. Grantee, Contractor, and Local Evaluator Protocol—Discussion Guide


This appendix contains a description of how the protocol will be administered and the areas of assessment that will be covered.


Administration and Areas of Assessment for the Grantee, Contractor, and Local Evaluator Protocol

Assurances of Confidentiality and Voluntary Participation

All data collected during the interview will be used only for the purpose of the audit and all efforts will be made to maintain confidentiality to the extent provided by law. No data that identifies respondents will be shared with the program office or any other ED staff. Data will be shared outside of the immediate study team only through a restricted-use data file where all grantee identifiers (i.e., respondent name, grantee number, etc.) have been removed from the file. All results will be reported in aggregate form; no results will be provided for specific grantees within a program.


Participation in this project is voluntary.

Paperwork Reduction Act Statement:

 

According to the Paperwork Reduction Act of 1995, no persons are required to respond to a collection of information unless such collection displays a valid OMB control number.  Public reporting burden for this collection of information is estimated to average 1 hour per response for grantees and 45 minutes for program office contractors, including time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. The obligation to respond to this collection is voluntary. Send comments regarding the burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to the U.S. Department of Education, 400 Maryland Ave., SW, Washington, DC 20210-4537 or email [email protected] and reference the OMB Control Number XXXX-XXXX.

Administration and Areas of Assessment for the Grantee, Contractor, and Local Evaluator Protocol


Part 1 will identify your knowledge about and/or the role(s) you have played in eight tasks (listed below) associated with the data flow sequence of the 2008–2009 GPRA and non-GPRA performance measures, including evaluation data.


A. Derivation, analysis, and reporting of GPRA and non-GPRA data

B. Provision of guidance regarding submission of performance data

C. Provision of training

D. Provision of technical assistance

E. Grantee collection and submission of data

F. Data quality checks and validation of grantee data

G. Aggregation of grantee data

H. Dissemination and use of program performance results


First, we will define each of the eight tasks and ask if you or a contractor have had any knowledge about or role in these eight tasks. If you indicate that you did have knowledge about or a role in a particular task(s), we will ask you questions in a specific module(s) for that task(s) in Part 2. If you indicate that you did not have knowledge about or a role in a particular task(s), we will ask you: 1) who we should talk to in order to gain insight into how this task(s) was performed and 2) how we can contact this person.


In addition, we will ask if you had knowledge about or were involved with any other tasks not included in tasks A through H. If you indicate “yes”, we will ask you to describe this other task in detail so that we can ensure that it is not already covered by questions in one of the eight modules in Part 2. If the task you describe is genuinely different from the tasks currently listed, we will ask you to share your knowledge of and role in this unlisted task.


Task Definitions and Areas of Assessment Regarding Task Roles and Knowledge


Below are the definitions of the eight tasks associated with the data flow sequence of the 2008–2009 performance measures. Following these definitions are the topic areas that we will ask about if it is determined that you had a role in or knowledge about a given task.

Task A. Derivation, Analysis, and Reporting of GPRA and Non-GPRA Data

Derivation, analysis, and reporting of GPRA and non-GPRA data, including evaluation data, are defined as the processes by which the data were selected, analyzed, and reported during the 2008–2009 reporting cycle. We would like to discuss:


A1. Description of each GPRA and non-GPRA measure your organization collected


A2. Process for determining what GPRA and non-GPRA performance data your organization collected


A3. Evaluation plan for GPRA and non-GPRA data


A4. Analysis of GPRA and non-GPRA data


A5. Use of GPRA and non-GPRA results


A6. Development of written report and who received it


A7. Changes in GPRA and non-GPRA data that your organization collected since the 2008–2009 reporting cycle

Task B. Provision of Guidance Regarding the Submission of Performance Data


The provision of guidance regarding submission of GPRA and non-GPRA performance measures, including evaluation measures, is defined as the process by which instructions, policy information, and definitions were provided to grantees on how to submit performance data for the 2008–2009 reporting cycle. We will discuss:


B1. Description of guidance

B2. Methods by which guidance was provided


B3. People within your organization who received the guidance


B4. When guidance was provided during the 2008–2009 program cycle


B5. Problems interpreting the 2008–2009 performance measure guidance provided by the program office

B6. Usefulness of guidance provided by program office

B7. Changes in the types of guidance provided by the program office or the methods by which they were provided since the 2008–2009 reporting cycle

Task C. Provision of Training


Provision of training is defined as training events, classes, or materials that were designed to help grantees with their reporting of GPRA and non-GPRA program performance data, including evaluation data, for the 2008–2009 reporting cycle. We will discuss:


C1. Description of trainings provided


C2. Primary mode used to provide the training (conferences, webinars, etc.)


C3. When trainings occurred during the 2008–2009 data reporting cycle


C4. Who within the grantee organizations received the training


C5. Usefulness of training


C6. Changes in the type of training(s) provided or the procedures by which training(s) were implemented or provided since the 2008–2009 reporting cycle

Task D. Provision of Technical Assistance


Provision of technical assistance (TA) is defined as any materials or activities that were used to assist grantees in their efforts to report GPRA and non-GPRA program performance data, including evaluation data, during the 2008–2009 reporting cycle. TA differs from training in that it occurs when grantees seek assistance with specific program reporting issues they may be experiencing and in that it is usually provided to one grantee at a time. We will discuss:


D1. Description of TA provided by the program office


D2. Primary mode used to provide the TA


D3. When the TA occurred during the 2008–2009 data reporting cycle


D4. Who within the grantee organization received the TA


D5. Usefulness of TA provided by the program office


D6. Changes in the type of TA provided or the procedures by which TA was implemented or provided since the 2008–2009 data reporting cycle

Task E. Grantee Collection and Submission of Data


Grantee collection and submission of data is defined as the processes that grantees used to collect and report GPRA and non-GPRA program performance data, including evaluation data, to the program office or its contractor during the 2008–2009 data reporting cycle. We will discuss:


E1. Processes grantees used to collect data to be used for the calculation and submission of 2008–2009 performance data


E2. Processes grantees used to submit performance data to the program office

E3. Availability—prior to report deadline—of data your organization needed to report on 2008–2009 performance

E4. Extent of time it took to review 2008–2009 performance results and to correct any problems before submitting them

E5. Whether your organization submitted performance results on time


E6. Problems encountered in collecting data used for the performance reports for the 2008–2009 reporting cycle


E7. Problems encountered in calculating data used for the performance reports for the 2008–2009 reporting cycle


E8. Problems encountered in reporting accurate performance results for the 2008–2009 reporting cycle


E9. Feedback provided by the program office with regard to 2008–2009 collection and submission process


E10. Changes in the procedures to collect and submit data since the 2008–2009 reporting cycle


Task F. Data Quality Checks and Validation of Grantee Data


Data quality checks and validation of grantee data are defined as any procedures—during the 2008–2009 reporting cycle—that the program office or its contractor had in place to check that the data it received (1) were of sufficient quality to calculate all the performance measures and (2) accurately captured the program performance measures required of grantees. We will discuss:


F1. Who assessed the quality and consistency of your 2008–2009 performance data

F2. Challenges with regard to data edits, data cleaning, or any other automated processes used to assess the accuracy of your 2008–2009 performance data

F3. Challenges with regard to the validation of participant level data that was used to calculate and report 2008–2009 performance data

F4. Changes in the procedures or activities used to assess the reliability or validity of the performance data since the 2008–2009 reporting cycle

Task G. Aggregation of Grantee Data


Aggregation of grantee data is defined as any procedures the program office or its contractor used during the 2008–2009 reporting cycle to combine performance results, including evaluation results, from all grantees into a single result for the entire program. We will discuss the following topic areas:


G1. Description of procedures used to aggregate grantee data to the program level

G2. The extent to which the aggregated performance results provided the information that the program office needed to assess 2008–2009 program performance


G3. Description of how aggregation procedures were verified or tested for the 2008–2009 results


G4. When aggregation processes occurred


G5. Problems the program office experienced with regard to the aggregation of grantees’ 2008–2009 data

G6. Changes to the aggregation procedures (or timing and testing of them) since the 2008–2009 reporting cycle

Task H. Dissemination and Use of Program Performance Results

Dissemination and use of program performance results are defined as the process by which the program office or its contractor distributed the performance results of the program for the 2008–2009 reporting cycle. This includes the policy context and background information that were provided along with program results. We will discuss:


H1. Program office contacts regarding your reported 2008–2009 performance results

H2. Receipt of aggregated 2008–2009 performance reports for your program

H3. Use of the analysis of the 2008–2009 performance data

H4. Changes in the use of the program office’s or your organization’s performance results since the 2008–2009 reporting cycle

H5. Changes in the procedures or activities associated with generating performance results since the 2008–2009 reporting cycle

Task I. Other Tasks Associated with the Data Flow Sequence of the 2008–2009 Performance Measures


Other tasks are defined as tasks—associated with the 2008–2009 performance measures—that involved activities and processes not described in the eight data tasks we previously defined. We will discuss:


I1. Person(s) responsible for conducting task


I2. Description of activities to conduct this task


I3. When this task occurred during the 2008–2009 data reporting cycle


I4. Problems experienced with regard to conducting this task for the 2008–2009 performance reporting cycle


I5. Changes associated with conducting this task since the 2008–2009 performance reporting cycle

Wrap Up – Closing


1. Do you have any questions that you would like to ask us?


2. Do you have any other suggestions or ideas to add that may inform us or lead to improvements in the flow sequence of performance measures and evaluation data?

Appendix D. Consent Form



Release/Consent Form Granting Permission to Audio Tape Interview



I, _____________________________________, hereby grant permission to Decision Information Resources, Inc. and Mathematica Policy Research, Inc. to audio tape the interview being conducted for the Program Performance Data Audit Study sponsored by the Department of Education.


I understand that all recorded information will be used only for the purpose of the audit and all efforts will be made to maintain confidentiality to the extent provided by law. No information generated as a result of this recording that identifies respondents will be shared with anyone outside of the immediate study team, including the program office or any other ED staff.



_____________________________

Full Name (Please print)



_____________________________

Full Name of Grantee Organization (Please print)



_____________________________

Signature



_____________________________

Date

1 Performance Measurement and Evaluation: Definitions and Relationships. GAO-05-739SP, U.S. Government Accountability Office. May 2005.

2 A data aggregation system can come in many forms, ranging from one or more spreadsheets to a fully automated reporting system that integrates reporting, aggregation, and data analysis for management.

3 These methods will come from data-element and report specification documents or from interviews with the program office staff and, where applicable, their contractors.

4 http://ies.ed.gov/ncee/wwc/pdf/wwc_version1_standards.pdf
