U.S. Department of Education
Mathematics and Science Partnerships Program
Office of Management and Budget
Clearance Package Supporting Statement
And Data Collection Instrument
August 31, 2006
TABLE OF CONTENTS
A. JUSTIFICATION
A.1 Circumstances Making the Collection of Information Necessary
A.2 Purposes and Uses of the Data
A.3 Use of Technology to Reduce Burden
A.4 Efforts to Identify Duplication
A.5 Methods to Minimize Burden on Small Entities
A.6 Consequences of Not Collecting Data
A.7 Special Circumstances
A.8 Federal Register Comments and Persons Consulted Outside the Agency
A.9 Payments or Gifts
A.10 Assurances of Confidentiality
A.11 Justification of Sensitive Questions
A.12 Estimates of Hour Burden
A.13 Estimates of Cost Burden to Respondents
A.14 Estimate of Annual Cost to the Federal Government
A.15 Program Changes or Adjustments
A.16 Plans for Tabulation and Publication of Results
A.17 Approval to Not Display the OMB Expiration Date
A.18 Explanation of Exceptions
APPENDIX A: AUTHORIZING LEGISLATION
APPENDIX B: DATA COLLECTION INSTRUMENT
A. JUSTIFICATION
A.1 Circumstances Making the Collection of Information Necessary
Each funded Mathematics and Science Partnerships (MSP) project is required to develop an evaluation and accountability plan that includes objectives that measure the impact of funded activities. Plans must include measurable objectives to increase teacher content knowledge and student achievement. Other measurable objectives may include increasing the number of mathematics and science teachers who participate in content-based professional development and increasing student participation in advanced mathematics and science courses. Although MSP is a formula grant to the States, the statute (Title II, Part B of NCLB) requires projects to report annually to the Department, documenting their progress toward their stated goals and objectives.
Currently, the MSP’s annual reporting requirement calls for state-funded projects to complete the OMB-approved Project Profile, forward a narrative describing the project’s impact on student achievement, and forward any third-party project evaluation reports. Additionally, to ensure that the Department has an accurate record of state-funded projects, State coordinators are asked annually to forward a listing of currently funded projects, forward a listing of discontinued projects, and verify project information. With the increases in annual funding and the resulting increase in the number of projects, tracking the projects and monitoring the States’ implementation of the program have become cumbersome tasks.
The 350 projects funded in the first year were extremely varied. Some States funded projects that focused on improving teacher content knowledge at the district level, while others funded projects that focused on the individual classroom level. Because of this variation, many respondents reported that they found it difficult to use the Project Profile to accurately document their project’s impact on improving teacher content knowledge and student achievement. In short, the OMB-approved Project Profile, the data collection tool used by projects, failed to adequately measure projects’ impact on student achievement and teacher content knowledge.
Therefore, through careful analysis of current reports and in consultation with State MSP coordinators and sub-award grantees, it was determined that the current annual reporting process should be revised. It was suggested that the various reporting requirements be consolidated into a single report and that additional reporting space be provided so that funded projects can accurately report achievement data.
The revised APR streamlines the annual reporting process. The revision allows projects to upload data and third-party reports, provides additional space for projects to describe their impact on student achievement and teacher content knowledge, and requires State coordinators to review reports and verify project information. By structuring the reporting so that all MSPs are required to provide standardized data, the Department will be better able to examine outcomes across funded projects and effectively monitor the expansive program.
A.2 Purposes and Uses of the Data
This information will be collected annually from approximately 600 MSPs in the third year of data collection. If an MSP is funded for multiple years (up to three), it will provide data for each year it receives funding. The statute requires all locally funded projects to report annually to the Department, documenting each project’s progress toward accomplishing its goals and objectives. Additionally, the Department will be better able to examine outcomes across funded projects. See Appendix B for a copy of the proposed data collection instrument.
A.3 Use of Technology to Reduce Burden
We will use a variety of advanced information technologies to maximize the efficiency and completeness of the information gathered for this evaluation and to minimize the burden the data collection places on the MSPs. First, we will use an Internet-based data collection system to collect all data elements. This system will allow the MSPs to complete the forms at a time that is convenient to them. It will also help project staff and State MSP coordinators track the data submissions as the MSPs fill in the forms. Second, we will pre-populate the Internet-based forms with any available information from the winning partnership proposals. For example, all of the contact information is available from this source. When the users log onto the system, they will be allowed to update this information but will not need to provide it as part of their submission.
Third, to calculate the number of teachers who showed significant gains in content knowledge (section D) in a statistically valid and comparable way that also reduces burden on grantees, the MSP federal program office will provide grantees with an Excel spreadsheet with embedded formulas. Grantees will enter into the spreadsheet the pretest scores and posttest scores for the teachers they test (using one spreadsheet per test). The spreadsheet will calculate the needed statistics (a dependent or paired-samples t-test) and produce a report for grantees showing the total number of teachers and the number who showed significant gains. Grantees will report this information in section D of the APR. The MSP program office will then aggregate this information and use it for GPRA reporting.
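To illustrate the calculation the spreadsheet’s embedded formulas are intended to perform, the sketch below runs a dependent (paired-samples) t-test over a set of hypothetical pretest and posttest scores. It is an illustration only, written in Python with SciPy rather than Excel; the scores and variable names are placeholders, and the count of individual teachers showing significant gains follows the two-step procedure described under A.16.

```python
# Illustrative sketch only: the dependent (paired-samples) t-test the
# spreadsheet's embedded formulas are intended to compute, written here
# in Python with SciPy. Scores and names are hypothetical placeholders.
from scipy import stats

# Hypothetical pretest/posttest scores for teachers on one content test;
# grantees would enter their actual scores into the spreadsheet instead.
pretest = [42, 51, 38, 60, 47, 55, 49, 44]
posttest = [50, 58, 41, 66, 52, 61, 57, 46]

# Dependent (paired-samples) t-test on the matched pre/post scores.
t_stat, p_value = stats.ttest_rel(posttest, pretest)

print(f"Teachers tested: {len(pretest)}")
print(f"t-statistic: {t_stat:.2f}, p-value: {p_value:.3f}")
```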
A.4 Efforts to Identify Duplication
There are no other instruments collecting the same data. The legislation that authorizes MSP requires each funded project to report annually to ED documenting the partnership’s progress in meeting its goals and objectives. The current reporting method requires state-funded projects to complete the OMB-approved Project Profile, write a narrative describing the program’s impact, and forward any third-party evaluation reports. Each component provides federal and state program officers with a comprehensive overview of the funded project and documents each project’s impact on student achievement. The revised APR consolidates the three components into one, thus eliminating duplication and reducing the burden on individual respondents.
A.5 Methods to Minimize Burden on Small Entities
Small entities are not affected by this program. The lead agency for each MSP is generally a local school system.
A.6 Consequences of Not Collecting Data
This data collection is designed with a twofold purpose. First, in providing this information, the MSPs satisfy most of the reporting requirements they accepted as part of their project requirements. Second, this data collection standardizes the required reporting across all MSPs, which will greatly enhance the quality and comparability of the resulting data.
A.7 Special Circumstances
None of the special circumstances listed apply to this data collection.
A.8 Federal Register Comments and Persons Consulted Outside the Agency
We have worked closely with state coordinators for the Mathematics and Science Partnerships program to develop a data collection instrument that meets the needs of the Department but does not put undue burden on the MSPs. To this end, we convened a meeting with state coordinators of the Math and Science Partnerships program in June 2006. One purpose of this meeting was to review the proposed data collection instrument and get feedback from the state coordinators. Based on the discussions at this meeting, we made substantial revisions to the proposed data collection instrument.
In Spring 2006, we held several regional meetings with project directors and evaluators. At each of these meetings we circulated the current document to determine whether (1) MSPs would be able to provide the data we were requesting and (2) the instrument adequately measures impact on students.
We plan to pilot test the data collection instrument with several MSPs in the Winter of 2006. Based on this pilot test we will revise the burden estimates.
Additionally, we published a 60-day Federal Register notice on ________________, and received no comments. We also published a 30-day Federal Register notice on ________________.
A.9 Payments or Gifts
No payments or gifts will be made to respondents.
A.10 Assurances of Confidentiality
There is no assurance of confidentiality.
A.11 Justification of Sensitive Questions
There are no questions of a sensitive nature.
A.12 Estimates of Hour Burden
Annually, all funded MSPs will be asked to complete the data collection instrument. Because the number of funded projects has increased, the overall burden of the program has increased. However, since the reporting requirements have been consolidated, the burden on each individual respondent has been reduced.
We estimate that the form will take an average of 14 hours to complete. For the purpose of this discussion, we have assumed that approximately 600 partnerships will be awarded, for a total of 8,400 burden hours. This represents an overall burden increase of 1,750 hours, which is due to the increase of approximately 250 respondents. The cost to respondents is estimated at $30 per hour, for a total cost to 600 respondents of approximately $252,000 for each year of data collection. This hourly rate was estimated by ED based on previous experience.
NOTE: Based on OMB feedback, we are revising the estimate of the burden for completing the APR to include the proposed reporting revisions. Staff for the Data Quality Initiative (DQI) have “field tested” this data tool and estimate that it will take each project an additional 15 minutes to enter the necessary information into the tool. We expect about 600 projects to submit APRs; the overall burden should therefore increase by 150 hours per year, to 8,550 hours, or 14.25 hours per respondent.
A.13 Estimates of Cost Burden to Respondents
There are no additional respondent costs associated with this data collection.
A.14 Estimate of Annual Cost to the Federal Government
The estimated total cost to the government for tasks related to the online data collection instrument (including system development and maintenance, data collection, data analysis, and reporting) is $524,213 for the two Option Years, for an annualized cost of $262,107. The deliverables will include an online data collection system and an annual report of aggregate analyses of APR data.
Task | Labor Hours | Labor Cost | Direct Costs | Total Cost
2 Web-based system development | 2,176 | $171,492 | $1,735 | $173,227
2 Web-based system maintenance | 1,569 | $128,987 | $2,280 | $131,267
3 Collect online APR data | 961 | $45,546 | $4,379 | $49,925
4 Analyze APR data and prepare report | 2,231 | $159,905 | $9,889 | $169,794
Total estimated costs for Options 1 and 2 | | | | $524,213
Annualized estimated cost | | | | $262,107
A.15 Program Changes or Adjustments
This request is for a revised data collection. The current Annual Performance Report does not adequately measure projects’ impact on student achievement. In addition, several monitoring tools have been incorporated into the APR, thus reducing the burden on respondents.
A.16 Plans for Tabulation and Publication of Results
There are no plans to formally publish the results of this data collection. Rather, the data obtained through this data collection will be used by the program office to monitor the funded MSPs and inform the Department’s GPRA indicators.
The data from Section D: Government Performance and Results Act (GPRA) Reporting of the Annual Performance Report will be used by the MSP program office to report annually on its teacher content knowledge and student achievement GPRA indicators. Through questions 8 and 9, grantees will report at the project level on these indicators for the previous year. The MSP program office will aggregate these numbers to report on GPRA.
Teacher Content Knowledge
The teacher content knowledge GPRA measure is the percentage of MSP teachers who significantly increase their content knowledge, as reflected in project-level pre- and post-assessments. Percentages will be calculated separately for mathematics and science teachers. To calculate these percentages, the MSP program office will sum, across all grantees, the number of mathematics teachers reported with significant gains from question 8c, and will divide that number by the sum, across all grantees, of the number of mathematics teachers with both pretests and posttests in mathematics content knowledge from question 8b. They will do the same for science teachers using the responses to questions 8f and 8e.
To gauge data completeness, the MSP program office will divide (separately for mathematics and science) the sum, across all grantees, of the number of teachers with both pretests and posttests in the subject (questions 8b and 8e) by the sum, across all grantees, of the total number of teachers receiving MSP professional development (questions 8a and 8d). The resulting percentage will tell the MSP program office what proportion of all MSP teachers are included in the GPRA indicators for teacher content knowledge.
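As a concrete illustration of this aggregation, the hedged sketch below sums hypothetical grantee responses to questions 8a through 8c for mathematics and computes both the GPRA percentage and the completeness ratio; the field names and counts are placeholders, not actual APR data.

```python
# Illustrative aggregation of grantee APR responses for the teacher content
# knowledge GPRA measure (mathematics shown; science is analogous with
# questions 8d-8f). Field names and counts are hypothetical placeholders.
grantee_reports = [
    # 8a: teachers in MSP professional development,
    # 8b: teachers with both pretests and posttests,
    # 8c: teachers showing significant gains
    {"q8a": 120, "q8b": 95, "q8c": 60},
    {"q8a": 80, "q8b": 70, "q8c": 41},
    {"q8a": 45, "q8b": 30, "q8c": 22},
]

total_in_pd = sum(r["q8a"] for r in grantee_reports)
total_tested = sum(r["q8b"] for r in grantee_reports)
total_gains = sum(r["q8c"] for r in grantee_reports)

gpra_percentage = 100 * total_gains / total_tested  # GPRA indicator
completeness = 100 * total_tested / total_in_pd     # data completeness check

print(f"Teachers with significant gains: {gpra_percentage:.1f}% of tested teachers")
print(f"Data completeness: {completeness:.1f}% of MSP teachers were tested")
```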
As mentioned in A3 (Use of Technology), the MSP program office will provide grantees with software to use to determine the number of teachers who have made significant gains in content knowledge. The software uses a statistical test called a dependent t-test or paired-samples t-test to calculate, with 85 percent certainty, the number of teachers who showed significant gains on content knowledge tests. With grantee inputs of pretest and posttest scores, the software will produce a report that grantees can use to respond to Annual Performance Report questions 8b, 8c, 8e, and 8f. Procedurally, when either the pretest or posttest scores are missing, the observation will be discarded by the software and not used in the calculations.
Specifically, the software will use two steps to determine which teachers made significant gains. In step 1, the software will run a t-test to calculate a benchmark that is used in step 2 for comparison to the gains of the individual teachers in the tested group. The t-statistic in step 1 is computed as the difference between the average posttest score and the average pretest score, divided by the standard error of the difference. For the t-statistic, we will accept a p-value of .15 or less to conclude that the observed average increase in scores is statistically significant and can be used as the benchmark for step 2. This relatively high p-value is appropriate for this purpose, as it provides a level of confidence of 85 percent, which is suitable for GPRA reporting. Since we have virtually the universe (and not a sample) of MSP teachers, statistical significance here represents substantive importance. Our goal was to select a p-value that would include teachers who made substantively important gains and exclude teachers whose gains, while measurable, were clearly less important. We believe that the standard p-value of .05 is unnecessarily restrictive and would exclude many teachers who had made substantively important gains. The p-value of .15, while a somewhat subjective selection, is in our judgment most likely to meet our goals of inclusion and exclusion.
Note that if the calculated p-value for a test is greater than .15, the benchmark cannot be used for comparison purposes, and the conclusion will be that there is no significant improvement for the tested group as a whole or for the individual teachers. All of the teachers in tested groups where the p-value of the t-statistic is greater than .15 belong in the category of “no significant gains.”
If the benchmark (t-statistic) is statistically significant at the .15 level, the software will move on to step 2, determining the number of teachers in the tested group who made significant gains in content knowledge. The software will calculate individual gain scores for teachers with both pretest and posttest scores and will standardize each gain score by subtracting the mean of the gain scores and dividing by the standard deviation of the gain scores. The software will then compare each teacher’s gain score to the benchmark gain score from step 1. Teachers whose gain scores are equal to or greater than the benchmark will be counted as having made significant gains. Teachers whose gain scores are less than the benchmark will be counted as having made no significant gains.
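The sketch below illustrates this two-step procedure in Python rather than in the Excel spreadsheet grantees will actually receive. It assumes, per the description above, that the benchmark used in step 2 is the step 1 t-statistic and that teachers missing either score are dropped; the spreadsheet’s actual embedded formulas may differ in detail, and the function and variable names are placeholders.

```python
# Sketch of the two-step "significant gains" calculation described above,
# written in Python rather than Excel. It assumes the step 2 benchmark is
# the step 1 t-statistic and that teachers missing either score are dropped;
# the spreadsheet's actual formulas may differ in detail.
import statistics
from scipy import stats

def count_significant_gains(pretest, posttest, alpha=0.15):
    # Discard teachers missing either the pretest or the posttest score.
    pairs = [(pre, post) for pre, post in zip(pretest, posttest)
             if pre is not None and post is not None]
    pre_scores = [pre for pre, _ in pairs]
    post_scores = [post for _, post in pairs]

    # Step 1: dependent (paired-samples) t-test; the resulting t-statistic
    # serves as the benchmark for step 2.
    t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
    if p_value > alpha or t_stat <= 0:
        # No significant average increase: every teacher in the tested
        # group falls in the "no significant gains" category.
        return {"tested": len(pairs), "significant_gains": 0}

    # Step 2: standardize each teacher's gain score and compare it to the
    # benchmark; gains at or above the benchmark count as significant.
    gains = [post - pre for pre, post in pairs]
    mean_gain = statistics.mean(gains)
    sd_gain = statistics.stdev(gains)
    standardized = [(g - mean_gain) / sd_gain for g in gains]
    significant = sum(1 for z in standardized if z >= t_stat)

    return {"tested": len(pairs), "significant_gains": significant}
```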
Once step 2 is complete, the software will produce a report for the grantee with the counts of teachers in each group (significant gains, no significant gains). Grantees who administer more than one content knowledge test will need to complete a separate spreadsheet for each test used and then aggregate the results to respond to Annual Performance Report questions 8b, 8c, 8e, and 8f.
Student Achievement
The student achievement GPRA measures are: 1) The percentage of students in classrooms of MSP teachers who score at the basic level or above in State assessments of mathematics or science, and 2) The percentage of students in classrooms of MSP teachers who score at the proficient level or above in State assessments of mathematics or science. Grantees will report separately for mathematics and science.
To calculate the first percentage (basic or above) in mathematics, the MSP program office will sum, across all grantees, the number of students who scored at basic or above in mathematics from question 9c, and will divide that number by the sum, across all grantees, of the number of students with student assessment data in mathematics from question 9b. They will do the same for science using the responses to questions 9g and 9f. To calculate the second percentage (proficient or above) for mathematics, the MSP program office will sum, across all grantees, the number of students who scored at proficient or above in mathematics from question 9d, and will divide that number by the sum, across all grantees, of the number of students with assessment data in mathematics from question 9b. They will do the same for science using the responses to questions 9h and 9f.
To gauge data completeness, the MSP program office will divide (separately for mathematics and science) the sum, across all grantees, of the number of students with assessments in the subject (questions 9b and 9f) by the sum, across all grantees, of the total number of students taught by MSP teachers (questions 9a and 9e). The resulting percentage will tell the MSP program office what proportion of all MSP students are included in the GPRA indicators for student achievement.
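A parallel sketch for the student achievement measures is shown below, again with hypothetical numbers and placeholder field names; it computes both GPRA percentages and the completeness ratio for mathematics (science is handled the same way using questions 9e through 9h).

```python
# Illustrative aggregation of grantee APR responses for the two student
# achievement GPRA measures (mathematics shown; science is analogous with
# questions 9e-9h). Field names and counts are hypothetical placeholders.
grantee_reports = [
    # 9a: students taught by MSP teachers, 9b: students with assessment data,
    # 9c: scored basic or above, 9d: scored proficient or above
    {"q9a": 2400, "q9b": 2100, "q9c": 1700, "q9d": 1150},
    {"q9a": 1500, "q9b": 1200, "q9c": 1000, "q9d": 640},
]

total_students = sum(r["q9a"] for r in grantee_reports)
total_assessed = sum(r["q9b"] for r in grantee_reports)
total_basic = sum(r["q9c"] for r in grantee_reports)
total_proficient = sum(r["q9d"] for r in grantee_reports)

pct_basic = 100 * total_basic / total_assessed            # GPRA measure 1
pct_proficient = 100 * total_proficient / total_assessed  # GPRA measure 2
completeness = 100 * total_assessed / total_students      # data completeness check

print(f"Basic or above: {pct_basic:.1f}%; proficient or above: {pct_proficient:.1f}%")
print(f"Data completeness: {completeness:.1f}% of MSP students were assessed")
```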
Grantees in the seven states with assessment systems that have only one level below proficient are instructed to report “not applicable” or NA for questions 9c and 9g, since their reporting would be 100 percent of the students tested.
A.17 Approval to Not Display the OMB Expiration Date
All data collection instruments will include the OMB expiration date.
A.18 Explanation of Exceptions
No exceptions are requested.
This data collection applies to the universe of MSPs and therefore does not employ any statistical methods.