Evaluation of the Personnel Development to Improve Services and Results for Children with Disabilities Program

ED Response to OMB Comments

OMB: 1850-0869

Responses to OMB Questions on 200903-1850-002


  1. Please explain why the Department decided to focus most of this evaluation on Center activities, when Supporting Statement A acknowledges that most of Personnel Development funds go toward fellowships.



The majority of the Personnel Development Program (PDP) evaluation data collection and analysis activities are focused on the Institutions of Higher Education (IHEs) that receive PDP funds and on the students at those IHEs who receive the stipends. A smaller proportion of data collection and analysis activities are focused on the National Centers.


The PDP evaluation consists of two studies: (1) a study of the 12 National Centers funded by the PDP from 2001 to 2008; and (2) a study of Courses-of-Study funded by the PDP through IHEs. These courses of study are designed to prepare personnel to serve children with disabilities, and 60 percent of the funding received by IHEs is awarded to students in the form of stipends. Both the Centers and the IHEs are being studied because each is intended to make a contribution to the field of special education. The Centers are intended to design, create, and disseminate products and services used by state education agencies, local education agencies, schools, teachers, and special education training programs to increase evidence-based knowledge and practices in special education. The grants to IHEs are intended to increase both the number and the quality of special education teachers and service providers in the field.


For the Center study, ten products or services will be sampled from each of the 12 Centers (i.e., a total of 120 products or services) and reviewed by a panel of experts along the dimensions of quality, relevance, and usefulness.


For the IHE study, it is estimated that, on average, 2.2 changes to courses of study will be sampled (i.e., a total of 550 changes) and reviewed by a panel of experts along the dimensions of quality, relevance, and usefulness. In addition, the IHE study will include a survey with questions pertaining to Course-of-Study characteristics as well as student characteristics and stipends.

With respect to students and stipends, respondents will be asked to provide aggregate data as follows:


  • counts of new candidates in the course of study,

  • counts of full-time and part-time candidates,

  • counts of candidates with and without prior general education certification,

  • counts of candidates earning each of five categories of degree,

  • counts of candidates earning the credential on which the course of study is focused,

  • number of candidates who took and passed required exams and their average scores,

  • monetary support received (including stipends) specifically because of enrollment in the course of study, and

  • number of supported recipients and the mean, minimum, and maximum amounts they receive from the grant and from other sources.



  2. Response 10 of Supporting Statement A addresses any assurance of confidentiality. However, it does not explicitly state who will see the raw results. Will the OSEP Center project directors see the results of the raw surveys? We are concerned that respondents will not be totally forthcoming if they think the results will be shared with their project directors. Some of these Centers are repeat grantees and will not want to jeopardize their chances for getting a future grant. Please be explicit on who will see the results.

The Center Study does not include a survey but rather includes an interview and an inventory of Center products and services created through the grant. Responses to these instruments will be handled by Westat, the prime contractor on the evaluation, and will not be seen by any OSEP staff, including project officers. The inventory will be used as the basis for sampling Center products and services that will be reviewed and rated by a panel of experts along the dimensions of quality, relevance, and usefulness. The aggregated results of these panel reviews, broken out by each of the Centers, will be included in the evaluation report.


With regard to the IHE study, the survey will collect data from funded and non-funded applicants for FY2006 and FY2007. Raw data will be handled confidentially by staff of Westat. We share the concern, as described in our response to the next question, that sharing with the Department of Education any data that might identify an individual respondent, or even the appearance that such data might be shared, would both depress the response rate and inhibit fully candid responses. Thus, no raw data will be shared with OSEP project officers or any other Department staff, and no aggregate data that identifies the grantee or institution will be shared with OSEP project officers.


Survey respondents will be randomly assigned a 3-digit ID for completion of the survey. One item will request information about the respondent’s course of study, including the ED grant number, but no other personal information will be included in the survey or stored in the survey database. Data will be shared outside the immediate evaluation team only through a separate restricted-use data file constructed by Westat. Only the randomly assigned ID will remain in the restricted-use data file; all other applicant identifiers, such as the ED grant number, will have been removed in the construction of the file. Access to this restricted-use file will be available only to ED-authorized users who secure a restricted-use data license agreement with the Department. Westat staff will aggregate data for descriptive analyses, which will of course be shared with the Department and will ultimately be included in the evaluation’s final report.
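
The following is a minimal illustrative sketch of the de-identification step described above; it is not Westat’s actual procedure. It assumes, hypothetically, that the survey responses sit in a pandas DataFrame and that the ED grant number is the only identifier column to be removed:

    import random

    import pandas as pd

    def build_restricted_use_file(survey: pd.DataFrame,
                                  identifier_cols: list[str]) -> pd.DataFrame:
        """Key the file by a unique random 3-digit ID and drop all other
        applicant identifiers, mirroring the restricted-use file described
        above."""
        # Sample 3-digit IDs without replacement so no two respondents
        # receive the same ID.
        ids = random.sample(range(100, 1000), k=len(survey))
        restricted = survey.drop(columns=identifier_cols).copy()
        restricted.insert(0, "respondent_id", ids)
        return restricted

    # Hypothetical usage: remove the ED grant number before release.
    # restricted = build_restricted_use_file(raw, identifier_cols=["ed_grant_number"])
    # restricted.to_csv("pdp_ihe_restricted_use.csv", index=False)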


  3. Section 183 is not the correct authorization for this collection; please remove all mention of section 183 from supporting statements, survey instruments and letters. Also, why does this survey make promises of confidentiality? There is no personally identifiable information to protect – the promise doesn’t seem appropriate. And even if it were, there is very specific legal language that we require for promises of confidentiality, and this has not been used. We recommend removing all language promising confidentiality (but certainly it is fine to include language stating that privacy will be maintained to the greatest extent permitted by law).


The data collection for this evaluation is authorized by the Individuals with Disabilities Education Act (IDEA) under Title I, Part D, Subpart 2, Section 663(c)(9), which allows the Secretary to enter into contracts that assess personnel development for special education teachers and service providers, and Section 664(b)(2)(C), which allows the Secretary to delegate to IES responsibility for assessing the implementation and effectiveness of personnel development activities. All references to Section 183 of ESRA have been removed from supporting statements, survey instruments, and letters and replaced with the correct authorization information (see revised Appendices A, B, D, and E).


No promise of confidentiality will be made to the National Centers because the evaluation requires that the report name individual Centers and pair the names with results from the Center inventory and the results from expert panel review. On the other hand, the survey of IHE applicants offers confidentiality because the data will be aggregated across IHEs in the report.

In response to OMB’s concerns regarding the confidentiality statements in the letters and the data collection instruments, we propose two revised statements. For the Centers Study, the statement, which does not promise confidentiality (because the data for each Center will be reported separately in the evaluation report), will read as follows:


“For this data collection, Westat will report results for each Center. Data about your Center will be available to the Department of Education and may appear in evaluation reports.”


For the IHE Study, we propose using the following statement, which is patterned after a statement agreed upon by OMB and NCEE for use by the ten Regional Education Laboratory contractors:


“Westat will protect the confidentiality of all information collected for the study and will use it for evaluation purposes only. No information that identifies any study participant will be released, except as required by law. Information from participating institutions and respondents will be presented at aggregate levels in reports. All institution-level identifiable information will be kept in secured locations, and identifiers will be destroyed as soon as they are no longer required.”


  4. We ask that the research questions be expanded to include:


  • How do students receiving fellowships fare once they exit their program? Do the students they teach have different outcomes than the students of teachers who haven’t benefited from the Personnel Development program?


We are currently having discussions with OSERS about the possibility of using data from its Service Obligation Tracking System (SOTS). The data would address whether students who received stipends through the PDP work in the special education field upon program completion and for how long. The SOTS does not collect data on teacher performance.


At one point, NCEE considered including research questions in the PDP evaluation pertaining to how the PDP affects achievement of students receiving special education services. However, after the contractor consulted with experts in the fields of study design, measurement, and teacher preparation, we concluded that it would not be feasible to address these questions through the current evaluation.


This is because a rigorous study design that would allow causal inference (i.e., that the PDP affected student achievement) is not possible with this program. For example, random assignment to study conditions is not possible because the grants have already been awarded. An alternative study design (i.e., regression discontinuity) that would allow us to make legitimate causal inferences is also not feasible given that many of the non-funded applicants in FY06 and FY07 either received PDP funding before those years or have received it since.


Even if a rigorous study design were possible to determine the impacts of the program on student achievement, the sample size required to address the question requested by OMB would be prohibitive. According to our expert panel, the likelihood of finding statistically significant results on student achievement outcomes is small, given the multitude of variables that affect the performance of children. Therefore, an extremely large sample size would be required to meaningfully examine the effect of the PDP program on the achievement of special education students. For the current evaluation, recruiting and administering tests to such an enormous sample of special education students would not be feasible either logistically or monetarily.
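
To make the sample-size point concrete, the sketch below applies the standard two-sample formula, n = 2((z_(1-alpha/2) + z_(1-beta)) / d)^2 per group, where d is the standardized effect size. The effect size d = 0.1 is our own illustrative assumption for a small program effect, not a figure supplied by the expert panel:

    import math

    from scipy.stats import norm

    def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
        """Approximate sample size per group for a two-sided, two-sample
        comparison of means at significance `alpha` with the given power."""
        z_alpha = norm.ppf(1 - alpha / 2)  # critical value of the test
        z_beta = norm.ppf(power)           # quantile for the desired power
        return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

    # Even a small effect (d = 0.1) requires roughly 1,570 students per
    # group -- before accounting for the clustering of students within
    # schools, which would inflate the requirement further.
    print(n_per_group(0.1))  # 1570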


  • How do Principals, Administrators, and Parents rate those teachers coming out of Personnel Development programs?


Finding and tracking PDP program graduates and then collecting outcome data through surveys of administrators, principals, or parents would be costly and well beyond the budget of the current evaluation. These costs would be due to the geographic dispersion of the graduates, and the effort and resources required to track graduates over time. Furthermore, we have already proposed to collect data on teacher knowledge through a more cost-effective means, that is, scores from teacher certification tests that have already been administered.


  • Are teachers from Personnel Development programs more likely to teach in high-need schools? How long do they tend to remain in their field?


If we are able to obtain the SOTS data from OSERS, we could address the question regarding how long stipend recipients remain in the special education field up to several years after they complete the PDP-funded program. However, the SOTS data do not indicate whether these individuals work in high-need schools.


  5. We also ask that the Respondents be expanded to include Principals and State Administrators. We want to know whether these administrative personnel feel that teachers coming out of PD programs are well-equipped to meet the needs of special education students and if these teachers are better prepared than teachers not benefiting from the PD program. We also want to know if these Administrators use the materials and technical assistance that the National Centers have created.


Expanding the scope of the study to include surveys of principals and state administrators is not feasible. The cost involved with creating the surveys, developing a sampling frame, and recruiting and surveying an adequate number of principals and administrators (in order to achieve sufficient statistical power) would not be trivial and would be well beyond the funds that are available to conduct this evaluation. Furthermore, we have already proposed to collect data on teacher knowledge and teaching skills through a more cost-effective means, that is, teacher certification test scores.


Finally, comparing administrator ratings of teachers from PDP-funded versus non-PDP-funded programs, as requested by OMB, would not allow us to assess causality because any association between PDP funding and the ratings could be driven by any number of unknown third factors. Thus, this association from a descriptive evaluation could not be used to determine if the program affected teacher quality. We have consulted experts in the field about the possibilities of a study design (i.e., regression discontinuity) that would allow us to make legitimate causal inferences. However, use of this design is not feasible given that most of the non-funded applicants in FY06 and FY07 either received PDP funding before those years or have received it since.


  6. What criteria will the panel use in determining Center products/services quality, relevance, and usefulness? Please explain how the panelists will be selected, their expected qualifications, and the guidance they’ll receive.


To the extent possible, indicators of quality, relevance, and usefulness will be driven by language in the OSEP grant priorities and the state of the research in each Center’s focus area. Westat will select the members of the 12 National Centers Study expert review panels (three members per panel) for their expertise in the content areas of each Center (e.g., autism, early childhood special education, visual impairments). In addition, if a Center relies on a particular medium for distributing its materials (e.g., web-based course modules), we will include a panel member knowledgeable about that means of delivering information. Westat will recruit doctoral-level panelists from inside and outside academia. Panelists will receive in-person training on use of the rubrics and will have an opportunity to meet with the other panelists for the Center or Centers to which they have been assigned. The panels will review the indicators developed by ED and propose additional indicators for their Centers, as needed. Panelists will review products and services independently but will reconvene to discuss discrepancies in scoring.
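
As an illustration of how independent scores might be screened before the reconciliation discussion, the sketch below flags any (product, dimension) pair whose ratings disagree by more than a threshold. The one-point threshold, the 1-5 scale, and the data layout are our assumptions for illustration, not elements of the study design:

    DIMENSIONS = ("quality", "relevance", "usefulness")

    def flag_discrepancies(ratings, max_spread=1):
        """Return (product, dimension) pairs whose panelist scores differ
        by more than `max_spread` points and so need panel discussion."""
        flagged = []
        for product, by_dim in ratings.items():
            for dim in DIMENSIONS:
                scores = list(by_dim[dim].values())
                if max(scores) - min(scores) > max_spread:
                    flagged.append((product, dim))
        return flagged

    # Hypothetical example: three panelists score one sampled product 1-5.
    ratings = {
        "web_module_3": {
            "quality":    {"p1": 4, "p2": 5, "p3": 4},
            "relevance":  {"p1": 2, "p2": 5, "p3": 4},  # 3-point spread
            "usefulness": {"p1": 4, "p2": 4, "p3": 5},
        },
    }
    print(flag_discrepancies(ratings))  # [('web_module_3', 'relevance')]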


  7. Instruments:

    a. Appendix A – can any of the information requested be gleaned from the annual reports each grantee submits? Or from their final self-evaluation? This seems true for some of the basic questions about fellowship amount, new or revised courses, enrollment/completion data, and funding allocation.


NCEE has considered using extant data for the evaluation of the PDP in an attempt both to keep the costs of the study to a minimum and to reduce the burden placed on respondents. As discussed in Section A.4 of the justification submitted to OMB, we will attempt to use data concerning student stipends (e.g., number of students funded by such stipends, the stipend amounts) that are being collected by OSERS through its PDP Student Data Report. We are currently having discussions with OSERS staff about the possibility of using these data in IES’s PDP evaluation. If we are able to use these data, the burden of the IHE survey could be reduced somewhat for the funded respondents.


However, several issues necessitate that we collect the majority of IHE study data from the survey that we submitted as Appendix A. First, the annual reports completed by grantees do not contain the level of detail required to adequately address many of the research questions. For example, annual reports contain basic count data on student completion rates but do not necessarily discuss all the reasons that students did not complete the courses of study. This level of detail, which would be collected through the survey, will provide valuable information for addressing evaluation questions and for program improvement. Second, the data found in grantees’ annual reports are often incomplete and not consistently reported across grantees; thus, the reports would not be a reliable data source and could not be aggregated. Third, because we are collecting data from both program-funded and non-program-funded applicants, it is crucial to use equivalent sources of data to assure that the data are comparable between the two groups. However, annual reports and other extant data exist only for the funded group. Finally, final self-evaluations cannot be used because these evaluations will not have been completed by the time we would need to collect the data.


    b. For question 22 of the funded applicant survey and question 25 of the non-funded applicant survey, please ask whether IHEs collect the outcomes of the students taught by graduates and whether graduates’ supervisors are satisfied with graduates’ preparation.

We will add two response options to the aforementioned questions. They will pertain to whether IHEs collect: (1) any outcome data on the students taught by graduates; and (2) data on the satisfaction of school administrators with graduates’ preparation.

  8. Is the survey mandatory for grantees and voluntary for non-funded institutions? This needs to be clarified throughout. It must be clearly stated when a survey is mandatory and when it is voluntary.


Participation in the IHE Study is mandatory for funded applicants and is voluntary for non-funded applicants. We recognize the need to identify the mandatory or voluntary nature of the data collection more clearly throughout study materials. We will address this issue in four ways.


First, separate recruitment letters will be sent from the Department to funded and non-funded applicants. Letters to funded applicants will include the statement, “All grantees under IDEA are required to participate in this data collection (20 U.S.C. 1221e-3 and 3474).” Letters to non-funded applicants will of course not include this statement, but rather will include the statement, “All funded and non-funded applicants to 2006 and 2007 PDP priorities focused on direct preparation of personnel are being asked to complete the survey. We are asking for your voluntary participation because you applied to at least one of these priorities in one or both of these years.”


Second, to ensure a high response rate, senior project staff from Westat and its subcontractors will place individual recruitment phone calls to each non-funded applicant after the letter is sent from the Department. During these calls, staff will explicitly state that completion of the IHE Survey is voluntary and will answer any questions or concerns that respondents may have about the study.


Third, separate letters also will be sent to funded and non-funded applicants containing survey log-in instructions. Funded applicant letters will include specific statements on the mandatory nature of the data collection as referenced above. Non-funded applicant letters will again provide a specific statement that completion of the survey is voluntary. And fourth, the separate forms of the IHE Survey that are prepared for funded and non-funded applicants will include the appropriate statement regarding mandatory or voluntary participation.


  9. What response rate is expected for the voluntary, non-funded respondents, and what evidence has been used to arrive at this opinion?

Westat expects a response rate of 80 percent for the voluntary, non-funded applicants responding to the IHE Survey. This estimate is based on experience with the pilot test, recent research, and previous similar studies conducted by Westat. In the pilot testing of the survey, we found the non-funded applicants eager to participate. They likely have a vested interest in the program because they have previously been PDP grantees or because they will be applying for funding in the future. In addition, subject to OMB approval, Westat proposes a $50 payment for non-funded respondents to encourage their participation (see response to Question 10 below).


Westat has extensive experience in achieving high response rates with voluntary respondents. For the Head Start Impact Study, Westat randomly assigned approximately 5,000 children to a Head Start group that had access to Head Start program services or to a control group that could enroll in available community non-Head Start services. During the first data collection period, Westat successfully interviewed 81 percent of the children’s parents including about 80 percent of the control group parents.


Westat’s recruitment and data collection procedures for the PDP Evaluation are designed to achieve maximum respondent participation. Senior project staff will conduct recruitment phone calls with each non-funded project director in order to personalize the recruitment request and establish rapport at the highest level. Participants will be informed that we are attempting to attain a census of all applicants to the program in FY2006 and 2007. Additionally, the use of a customized web survey will provide features that enhance response rates and encourage successful survey completion, including multiple browser usability, use of branching and skip patterns to simplify responding, and the provision of individualized survey technical assistance through a toll-free telephone line answered by a senior staff member. Westat also has extensive experience in respondent refusal conversion. While the survey is voluntary for non-funded applicants, if a completed survey is not received, the respondent will be contacted and encouraged to respond.


  10. Please clarify – who exactly gets the $50 payment for non-funded institutions? Is this addressed to the centers at large, or to individuals such as center directors? How was this amount chosen as an appropriate payment?


The topic of payments continues to be frequently examined by survey research practitioners, with the evidence almost always in favor of using a monetary payment as a response inducement. These payments are associated with higher quality data, lower overall survey costs, increased response rates, and improved sample representation (see footnotes 1-4).


Because they are required to participate, project directors of funded projects will not be provided with a payment for completing the IHE survey. However, subject to OMB approval, non-funded applicants will be offered a $50 payment for completing the IHE Survey; the payment will go to the project director or, at the project director’s discretion, to the person who completes the survey.


  11. Regarding exhibit B-2 in the Part B statement – what is a substantive “change” in a course, and how is that measured? How are non-substantive changes filtered out?

Respondents to the IHE Survey are asked to list courses that are “new” or “significantly modified” since the time of their PDP grant application. They are also asked to list other changes and are given examples of substantive changes in the instructions. There will, of course, be some variation in the levels of change respondents choose to report, but the respondents will be the initial judges of what changes are substantive. Once Westat has the data from all respondents, staff will be able to examine the nature and range of responses and make some informed judgments about what constitutes a substantive change in the context of all the changes reported.


Because the quality and magnitude of changes made by the funded applicants will be assessed by the IHE Study review panels, Westat will, if necessary, filter out non-substantive changes both before sampling the changes that respondents will be asked to document and after receiving documentation associated with the sampled changes. This filtering will not be focused on the quality of the change, because a measure of quality is an important result of the review panels’ work. Instead, the filtering will be done to eliminate changes that are not of sufficient magnitude or importance to a project to constitute a substantive change. Such non-substantive changes need not be reviewed because, regardless of their quality, they will be of too little consequence to be relevant to the evaluation’s descriptive analyses.


  12. Where did the 1 hour per response burden estimate come from? Has this survey been tested to see how long it actually takes to complete?

In a January 2009 pilot of the questionnaires conducted by Westat, eight respondents indicated that it took them, on average, one hour to complete the survey.


  13. How is it known that the product and service inventories that are submitted by national centers are complete listings? What quality controls exist to check on this? Don’t the centers have an incentive to leave out poor quality items that might hurt their assessments?


Westat will take a number of steps to ensure that the inventories of products and services that Centers submit are complete. First, project directors for the Centers will be trained by phone before completing their inventories. This training will explain that the evaluation report will include the number of products and the overall cost, in addition to ratings of quality, relevance, and usefulness. If Center directors choose to omit products, doing so will increase their reported cost per product, since all grant funds must be accounted for in the inventory. Second, in many cases (e.g., IRIS-1, IRIS-2, NCIPP), Center web sites provide information about the products and services available through the Center, and those sites will be reviewed prior to processing the inventories. Third, Westat will review each Center’s Annual Performance Reports to determine whether any products reported there were excluded from the inventory.
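
In principle, the third check reduces to a set comparison. The sketch below is illustrative only and assumes product titles have already been extracted from the Annual Performance Reports and the inventory as simple sets of strings; in practice the documents would need to be parsed and titles matched more carefully:

    def missing_from_inventory(apr_products: set[str],
                               inventory_products: set[str]) -> set[str]:
        """Products that appear in the Annual Performance Reports but not
        in the submitted inventory; these would be queried back to the
        Center director."""
        # Normalize casing and whitespace so trivial differences do not
        # produce false positives.
        def norm(items):
            return {p.strip().lower() for p in items}
        return norm(apr_products) - norm(inventory_products)

    # Hypothetical example.
    apr = {"Behavior Module 1", "Parent Guide", "Webinar Series"}
    inventory = {"behavior module 1", "webinar series"}
    print(missing_from_inventory(apr, inventory))  # {'parent guide'}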



1 Singer, E., Groves, R.M., and Corning, A.D. (1999), “Differential Incentives: Beliefs About Practices, Perceptions of Equity, and Effects on Survey Participation,” Public Opinion Quarterly, 63, pp. 251-260.

2 Baumgartner, R., Rathbun, P., Boyle, K., Welsh, M., and Laughland, D. (1998), “The Effect of Prepaid Monetary Incentives on Mail Survey Response Rates and Response Quality,” paper presented at the Annual Conference of the American Association of Public Opinion Research, St. Louis, Missouri.

3 Church, A. H. (1993), “Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis,” Public Opinion Quarterly, 57, pp. 62-79.

4 Yu, J. and Cooper, H. (1983), “A Quantitative Review of Research Design Effects on Response Rates to Questionnaires,” Journal of Marketing Research, 20, pp. 36-44.




