ED response to OMB's Follow-up Comments


Evaluation of the Personnel Development to Improve Services and Results for Children with Disabilities Program


OMB: 1850-0869


U.S. DEPARTMENT OF EDUCATION

INSTITUTE OF EDUCATION SCIENCES

NATIONAL CENTER FOR EDUCATION EVALUATION AND REGIONAL ASSISTANCE


TO: Office of Management and Budget

FROM: John Rice, IES, ED

THROUGH: Kathy Axt, Office of Management, ED

CC: Jonathan Jacobson, IES, ED

SUBJECT: IES Responses to OMB Follow-up Questions on the Personnel Development Program (PDP) Evaluation Clearance Package

DATE: September 22, 2009


This memo contains each of OMB's follow-up comments on the Personnel Development Program (PDP) Evaluation Clearance Package, followed by IES's responses.


OMB Comment A1:

Some of the PDP measures look at quality, relevance, and utility using an independent expert panel. Please share the rubrics IES will use in determining quality, relevance, and utility and how this methodology compares to the one OSEP uses.

IES Response to Comment A1:

The contractor on this study is in the process of drafting the rubrics. IES will be happy to share these with OMB once the drafts are ready, which is projected to be by November 2009.


There are two important distinctions between the IES and OSEP expert panel reviews of IHE courses-of-study, which make the IES review non-duplicative. First, there is a distinction in what each expert panel will assess. The OSEP expert panel rates the quality of all classes within a course-of-study, regardless of whether the classes were created or modified using PDP funds. The IES expert panel will rate the quality of only the additions or modifications made to the courses-of-study since the current PDP grant award. This is because current PDP grant funding can be assumed to have been used for additions and modifications made since the current grant was awarded, and not for course-of-study content that existed prior to the current grant award.


Second, the IES evaluation will conduct a more in-depth review of materials to assess quality than the OSEP review. The IES evaluation will examine not only the quality of classes but also faculty credentials, qualifications, and experience, as well as the quality of the practicum and the testing requirements students must meet to complete the course-of-study. Also, the criteria used to define quality for the OSEP expert panel are ten broad indicators, whereas the criteria for the IES review are based on the research literature in each course-of-study's specific content area. Finally, the measures of quality in the IES evaluation will be continuous, whereas the OSEP panel measure will be categorical (i.e., yes/no). The continuous measure proposed by IES will better capture variation in quality across courses-of-study.




OMB Comment A2:

Given the current limitations on the SOTS data, what kinds of analyses can be done through this evaluation on how fellows fare once they exit the program? If too much data will be missing, when does the Department think a robust analysis can be done?

IES Response to Comment A2:

The limitations of using the SOTS data for the current evaluation are that, by the time IES would need the data for the evaluation, (1) data will have been collected for only a very small percentage of the scholars receiving stipends; and (2) the data collected will not necessarily represent the typical scholar receiving a stipend. This means that the SOTS data, at least within the timeframe of the current PDP evaluation, would not adequately address the question of where stipend recipients are employed after program completion. Unfortunately, no other data collections currently exist that systematically track scholars once they complete their courses-of-study, and logistical, time, and cost considerations prevent the current evaluation from collecting these data independently.

However, an analysis using SOTS data that contain a sufficiently large and representative sample of stipend recipients will likely be possible by 2012. This is the earliest feasible date, given that the majority of stipend recipients require four years to complete their courses-of-study and then have a five-year grace period before they are required to begin their obligation. The possible use of the SOTS data in future evaluations will be part of ongoing discussions between IES and OSEP.

OMB Comment A3:

Is [IES] essentially saying that everyone (both currently funded and currently non-funded) in the study has been funded at one point or another? What are the implications of this analytically?

ED Response to Comment A3:

Based on a preliminary review of grant applications, the evaluation contractor estimates that approximately 60 percent of applicants not funded through the FY06 and FY07 competitions had their courses-of-study funded by a PDP grant at least once in the prior seven years. This has no implications for the analyses in the current evaluation because comparing funded and non-funded applicants is not the main focus of the study. The study's expert panel has advised us against making numerous comparisons between the unfunded and funded groups because, with so many of the unfunded applicants having been funded in recent years, the validity of such findings would be suspect.

The focus of the main analyses will be on describing the characteristics of the applicants funded in FY06 and FY07 and the scholars trained through the funded applicants' courses-of-study, and on determining the quality of course-of-study additions or modifications funded through the current PDP grant. Data will be collected from non-funded applicants primarily to describe their characteristics, to determine what becomes of their courses-of-study after they are not funded (e.g., whether they cease to exist or continue to train scholars), and to determine why students drop out of the unfunded courses. This information on the non-funded courses-of-study will provide valuable information about special education personnel preparation writ large.

OMB Comment A4:

What program changes would need to be made to be able to measure program impact? For example, would we need to require grantees to collect student outcome data for the students taught by fellows? We’re primarily interested in changes that can be made at the administrative level.

ED Response to Comment A4:

Examining the impacts of the PDP specifically on students receiving special education services is not feasible given the number of personnel preparation grants awarded by the program. According to our expert panel, the likelihood of finding statistically significant results on student academic outcomes is small, given the multitude of variables that affect the performance of children (e.g., special education teachers have relatively little contact time with students, especially compared to general education teachers). Therefore, an extremely large sample size would be required to meaningfully examine the effect of the PDP program on the achievement of special education students. In fact, it is likely that the PDP could not award an adequate number of grants (the grantee being the unit that would be randomly assigned in an impact study, and thus the unit of analysis used to estimate impacts) for such small effect sizes to be detected.
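To make the sample-size point concrete, the sketch below is a hypothetical illustration, not a power analysis performed for this evaluation: it uses a standard two-group power calculation to show how many grantees per arm would be needed to detect a small standardized effect, where the effect size (0.10), alpha, and power values are illustrative assumptions.

    # Hypothetical illustration only (not the evaluation's actual power analysis):
    # grantees needed per arm to detect a small effect on student outcomes,
    # assuming a simple two-group comparison with grantees as the unit of analysis.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.10, alpha=0.05, power=0.80)
    print(f"Grantees needed per group: {n_per_group:.0f}")  # roughly 1,570

Even this simplified calculation, which ignores the clustering of students within grantees, implies a number of grantees per group well beyond what the discussion above suggests the program could award.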


Short of an impact study, linking data between special education providers trained through the PDP and students who receive special education services would be an important next step toward providing comprehensive descriptive data on the PDP that would address questions beyond the current evaluation, such as those concerning student outcomes. Through the SOTS, OSERS has already committed to collecting data that link scholars to the schools in which they provide special education services. The next logical step, linking these special education providers to the special education students in those schools, would require access to states' longitudinal data on student academic outcomes.


OMB Comment A5:


But does the data have a school identifier? If so, could we use that to determine if the school’s a high-need one?


IES Response to Comment A5:


Yes, the data will have a school identifier, which could be used to determine whether a school meets a definition of high-need. However, the larger issues regarding the SOTS data already discussed (see IES Response to Comment A2) preclude the use of these data in the current evaluation.


OMB Comment A6:

Do we know if all fellows have taken a teacher certification test? For example, I thought OSEP found that not all teachers had taken the PRAXIS.

ED Response to Comment A6:

Based on the evaluation contractor's review of the proposed study sample, approximately 70 percent of the funded courses-of-study require some type of national or state-level certification test upon completion of the course-of-study; approximately 20 percent of these require the PRAXIS II test. For this evaluation, we propose to collect data on all available certification test scores and convert the scores to a common metric.
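The memo does not specify the conversion method; one common approach, sketched below purely as an illustration, is to standardize scores within each test type so that results from different certification tests are expressed on a comparable (z-score) scale. The function name and the sample scores are hypothetical.

    # Hypothetical sketch of converting certification scores to a common metric
    # by standardizing within each test type (z-scores).
    from statistics import mean, stdev

    def standardize_within_test(records):
        """records: list of (test_name, raw_score) tuples."""
        by_test = {}
        for test, score in records:
            by_test.setdefault(test, []).append(score)
        # Compute mean and standard deviation per test type (needs >= 2 scores).
        stats = {t: (mean(s), stdev(s)) for t, s in by_test.items() if len(s) > 1}
        return [
            (test, (score - stats[test][0]) / stats[test][1])
            for test, score in records
            if test in stats and stats[test][1] > 0
        ]

    scores = [("PRAXIS II", 172), ("PRAXIS II", 181), ("State exam", 240), ("State exam", 255)]
    print(standardize_within_test(scores))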


OMB Comment A7:

Can we see [the rubrics that will be used by the expert panel]?


ED Response to Comment A7:


The contractor on this study is in the process of drafting the rubrics. IES will be happy to share them once the drafts are ready in November 2009.


OMB Comment A8:


The SOTS should have data on the reasons non-completers did not finish – inadequate resources, poor grades, moved, etc. Is it that this data is incomplete?

ED Response to Comment A8:


The APRs (not the SOTS) include data on the reasons why non-completers did not finish their courses-of-study. However, IES proposes to collect these data from unfunded applicants as well as from funded applicants. This is one outcome for which differences between the unfunded and funded groups will be examined. If we used only the APR data for the latter group, there would be a lack of parity in data sources between the two groups, which would raise questions about the validity of the comparison. Thus, IES proposes collecting these data from both the funded and non-funded applicants through the PDP evaluation's IHE survey only.


OMB Comment A9:

This is concerning. The Data Quality Initiative, a contract run by Westat, is supposed to be working on improving the data quality for this program (and a few others). Would it make sense to wait on this part of the evaluation until OSEP can implement the recommendations of the DQI?

ED Response to Comment A9:


IES's initial response to this question was based on conversations with OSEP that took place several years ago, while the PDP evaluation was still in the planning stages. Since that time, IES has learned that OSEP has put considerable effort into improving the quality of data from grantees. Through the DQI, IES is providing technical assistance to OSEP on options for improving procedures for gathering performance data on IDEA Part D grantees. Because OSEP may continue to refine its process for gathering GPRA data indefinitely, we do not think the evaluation's descriptive data collection, or its use of any relevant APR data submitted by grantees, should be delayed. Any relevant and documented changes in OSEP data collection procedures and/or instructions for the different years included in the IES study will be noted in appendices to the IES evaluation report.

OMB Comment A10:

Would IES consider comparing the results of their grantee analyses with the grantee self-evaluations? It would be helpful to see how the two are similar/different.

ED Response to Comment A10:


Because the final grantee self-evaluations will be completed after the scheduled release of the PDP evaluation report, they cannot be included in the current evaluation. However, the possibility of comparing the final grantee self-evaluations with the PDP evaluation results will be considered as part of OSEP and IES's discussions concerning future evaluations of the PDP program. Because the grantee self-evaluations may not be limited to the additions and changes to courses-of-study, these comparisons will need to be performed carefully to avoid misinterpretation of any findings.

OMB Comment A11:


We do not consider response rates for surveys of parents a relevant comparison. We also do not consider a pilot test of n=9 particularly useful in estimating a response rate.


IES Response to Comment A11:

We believe that obtaining a response rate of at least 80 percent from the voluntary, non-funded respondents to the IHE survey is realistic. Westat has extensive experience successfully obtaining high response rates from individuals, including professionals at all levels. An example of Westat's success with voluntary respondents is the Study of State and Local Implementation of IDEA, conducted for OSEP in 2005, in which Westat led survey data collection for a sample of 4,343 school principals and obtained a response rate of over 80 percent. What Westat has learned from these experiences is that the details of the data collection process matter when recruiting non-funded entities. These details include the use of senior project staff to conduct recruitment phone calls with each non-funded project director, the personalization of all contacts (e.g., letters sent from the Department), the use of a customized web survey with features that encourage successful survey completion, and the provision of individualized survey technical assistance through a toll-free telephone line answered by a senior staff member.

The literature also shows that monetary incentives appropriate to the desired respondents can further improve response rates. This literature is discussed in IES's Response to Comment A12.

OMB Comment A12:


The “going rate” across the federal government for a 1-hour interview in a cognitive lab, including travel time, parking expenses, etc., is $40. Therefore, we consider $50 much too high. Further, you indicate that the pilot test shows that participants are eager, so we question whether an incentive is in fact required.


IES Response to Comment A12:

As OMB points out, the “going rate” across the federal government for a 1-hour interview in a cognitive lab, including travel time, parking expenses, etc., is $40. However, in this evaluation we are recruiting the entire population of a specific group of individuals (i.e., non-funded applicants in FY06 and FY07), who are not replaceable as they would be if we were working with a sample (e.g., participants in a cognitive lab). Obtaining an 80 percent response rate from this finite population is critical because a response rate below 80 percent will not meet IES's rigorous research standards, which may prevent release of the report.

Examinations of survey incentives have found that the higher the incentive, the higher the response rate. In a meta-analysis of 38 experimental and quasi-experimental studies that implemented some form of mail survey incentive to increase response rates, Church (1993) found a strong correlation (r = .45) between effect size and the cash value of the incentive. Yu and Cooper (1983), in a comprehensive literature review of techniques used to increase response rates to questionnaires, likewise found a strong positive correlation (r = .61) between incentive value and increases in returns.

Regarding the amount of the proposed incentive for this evaluation, we note for comparison that the hourly salary rate of an average faculty member can be conservatively estimated at $48 (based on an average annual salary of $80,000 for the 9-month academic year). In fact, this estimate may be low because many of the applicants are senior faculty with higher-than-average salaries. Our purpose, of course, is to provide an appropriate monetary incentive that will help us achieve a high response rate and high data quality, not to fully compensate respondents for their time. Nevertheless, an incentive that corresponds somewhat to the level of effort required should be the most effective (Kanuk and Berenson, 1975).
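As a rough check of the $48 figure, the sketch below reproduces the arithmetic under an assumed workload of about 185 working hours per month; the memo does not state the hours assumption, so that value is purely illustrative.

    # Hypothetical back-of-the-envelope check of the $48/hour figure, assuming
    # roughly 185 working hours per month over the 9-month academic year
    # (the hours assumption is not stated in the memo).
    annual_salary = 80_000          # 9-month academic-year salary
    months = 9
    hours_per_month = 185           # assumed value
    hourly_rate = annual_salary / (months * hours_per_month)
    print(f"${hourly_rate:.2f} per hour")  # about $48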

References

Rathbun, P., Boyle, K., Welsh, M., and Laughland, D. (1998), “The Effect of Prepaid Monetary Incentives on Mail Survey Response Rates and Response Quality,” paper presented at the Annual Conference of the American Association of Public Opinion Research, St. Louis, Missouri.

Church, A. H. (1993), “Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis,” Public Opinion Quarterly, 57, pp. 62-79.

Godwin, R. Kenneth (1979), “The Consequences of Large Monetary Incentives in Mail Surveys of Elites,” Public Opinion Quarterly, 43, pp. 378-387.

Kanuk, L. and Berenson, C. (1975), “Mail Surveys and Response Rates: A Literature Review,” Journal of Marketing Research, 12, pp. 440-453.

Singer, E., Groves, R.M., and Corning, A.D. (1999), “Differential Incentives: Beliefs About Practices, Perceptions of Equity, and Effects on Survey Participation,” Public Opinion Quarterly, 63, pp. 251-260.

Smith, D.S., Pion, G., Tyler, N.C., Sindelar, P., and Rosenberg, M. (2001), “The Study of Special Education Leadership Personnel: With Particular Attention to the Professoriate,” Vanderbilt University, Nashville, TN; University of Florida, Gainesville, FL; and Johns Hopkins University, Baltimore, MD. Downloaded from http://www.cgu.edu/PDFFiles/IRIS-WEST/Final%20Report.pdf on September 18, 2009.

VanGeest, Jonathan B., Johnson, Timothy P., and Welch, Verna L. (2007), “Methodologies for Improving Response Rates in Surveys of Physicians,” Evaluation and the Health Professions, 30, 303-321.

Yu, J. and Cooper, H. (1983), “A Quantitative Review of Research Design Effects on Response Rates to Questionnaires,” Journal of Marketing Research, 20, pp. 36-44.
