Supporting Statement Part A

National Study on Alternate Assessments (NSAA) Teacher Survey

OMB: 1850-0860



Supporting Statement for Paperwork Reduction Act Submission

Part A: Justification

1. Circumstances Making Collection of Information Necessary

The National Study on Alternate Assessments (NSAA), funded by the National Center for Special Education Research in the Institute of Education Sciences at the U.S. Department of Education, responds to a congressional mandate in Section 664(c) of the 2004 reauthorization of the Individuals with Disabilities Education Act (IDEA), which calls for a “study on ensuring accountability for students who are held to alternative achievement standards.” More specifically, this legislation requires “a national study or studies to examine (1) the criteria that States use to determine (A) eligibility for alternate assessments; and (B) the number and type of children who take those assessments and are held accountable to alternative achievement standards; (2) the validity and reliability of alternate assessment instruments and procedures; (3) the alignment of alternate assessments and alternative achievement standards to State academic content standards in reading, mathematics, and science; and (4) the use and effectiveness of alternate assessments in appropriately measuring student progress and outcomes specific to individualized instructional need.” (See IDEA Section 664(c) included with this submission.) NSAA is being designed and conducted by SRI International in partnership with the University of Minnesota’s National Center on Educational Outcomes (NCEO) and Policy Studies Associates (PSA) and in consultation with Dr. Margaret McLaughlin from the University of Maryland. The NSAA addressed the first three topics through previously approved OMB data collection activities that included analysis of state documents and a national telephone interview survey with state alternate assessment personnel to produce state and national profiles of state alternate assessments based on alternate achievement standards (AA-AAS).

The NSAA Teacher Survey data collection activity described in this OMB submission will primarily address the fourth topic, “the use and effectiveness of the alternate assessments in appropriately measuring student progress and outcomes specific to individualized instructional need.” This language does not call for an impact study that establishes a causal link between the use of alternate assessments and gains in student performance. Instead, “use and effectiveness” are to be studied in relation to “appropriately measuring student progress and outcomes.” There are several issues in designing this study, including how to define “effectiveness” in this context, how to determine whether alternate assessments are measuring progress and outcomes “appropriately,” and how to study “individualized instructional need” in relation to “student progress and outcomes.” We have addressed these issues by structuring the data collection according to a logic model of standards-based reform (SBR), as discussed below, which provides a framework for interpreting the connections between standards, assessments, accountability, and student outcomes. The No Child Left Behind Act of 2001 (NCLB), under which alternate achievement standards were established, has the goal of all students achieving proficiency on academic standards, so this study will focus on the academic standards states have developed under NCLB.

The Individuals with Disabilities Education Act Amendments of 1997 (IDEA ’97) first directed states to develop, and to implement by 2000, alternate assessments as an option for students with disabilities who cannot participate in regular assessments, even with accommodations. In response, states developed a variety of approaches to the design and implementation of such assessments, including portfolios, rating scales, and performance events (Thompson & Thurlow, 2001). However, it was not always clear how such assessments would link to state academic content standards, meet standards for technical adequacy, or be incorporated into accountability reporting (Quenemoen, Rigney, & Thurlow, 2002). Subsequently, the No Child Left Behind Act of 2001 (NCLB) required states to implement statewide accountability systems for all public schools that are based on challenging state standards in reading, mathematics, and science and on annual testing of students. States must establish three levels of performance (basic, proficient, and advanced) on the grade-level assessments and set annual performance targets against which to measure adequate yearly progress (AYP) to ensure that all groups of students remain on a trajectory toward proficiency by 2014. AYP targets must be determined, met, and reported for specific subgroups of students, including those with disabilities and those who participate in alternate assessment systems. On December 9, 2003, the U.S. Department of Education published a regulation permitting states to develop alternate assessments based on alternate achievement standards for students with the most significant cognitive disabilities and to use these alternate assessments and standards to make adequate yearly progress (AYP) decisions within limits set by the regulation. Alternate achievement standards must be aligned with the State's academic content standards for the grades in which the students are enrolled (or the grade levels commensurate with student ages in un-graded programs).

State alternate assessment systems have varied in approach, evidence collected, eligibility criteria, and technical characteristics, and they have continued to be revised and improved over time. Previous NSAA data collection has documented the characteristics of alternate assessment systems at the state level. More in-depth information is needed about the factors that affect the implementation and effectiveness of alternate assessments at the local level for students with significant cognitive disabilities. The state survey activities for this submission—design, data collection, and analyses—are described in the following sections.

Conceptual Framework

The theoretical framework guiding this study is grounded in standards-based reform (SBR) and influenced by the findings of the Commission on Behavioral and Social Sciences and Education (Elmore & Rothman, 1999). SBR is a series of interrelated education reform initiatives designed to bring about changes in the basic operations of the public school system by affecting all components of the educational process (Resnick & Zurawsky, 2005). Many of the components of SBR can be found in NCLB, in its legislative antecedents (e.g., the Improving America's Schools Act of 1994), and in the succession of federal policies that have extended the provisions of NCLB to include students with disabilities. A basic premise of SBR is that central authorities such as the state or the federal government require accountability while allowing curricular and pedagogical flexibility, and thus motivate schools to improve to meet the standards. By placing greater emphasis on academic achievement and accountability, SBR has shifted attention from the process of education to its outcomes (Geenan, Thurlow, & Ysseldyke, 1995; Goertz, 2001; McLaughlin & Thurlow, 2003).

The essential components of SBR are clear and challenging state academic content and achievement standards, state assessments aligned with these standards, and accountability for the academic achievement of all students; together, these components create clear expectations and motivation to improve teaching and learning (Elmore & Rothman, 1999). State content and achievement standards establish goals for the educational system by describing what all students should know and be able to do. Districts and schools then use these goals to inform curriculum and instruction and to guide professional development and other school activities, while teachers are expected to use the standards to inform classroom instruction. In this way, the system is coordinated in its efforts to promote student proficiency in valued content and skills. Results on state assessments are used to gauge school success. Thus, the accountability system leads to changes in classroom practice, which directly and positively influence student achievement.

During the past few years, the field of educational research has responded to the need for research in the area of SBR and students with significant cognitive disabilities. For example, work by NCEO, the National Alternate Assessment Center, and the New Hampshire Enhanced Assessment Initiative has greatly increased our knowledge base concerning the technical quality of alternate assessments and the learning characteristics and instructional needs of these students. Research by Browder, Fallin, Davis, and Karvonen (2003) identified six factors reflective of SBR, which they hypothesized may influence performance on alternate assessments: (1) technical quality of alternate assessments, (2) student characteristics, (3) resources the teacher has for doing the alternate assessment, (4) access to the general curriculum, (5) use of data collection systems, and (6) instructional effectiveness. Subsequent research by Karvonen, Flowers, Browder, Wakeman, and Algozzine (2006) generally supported the conceptual model proposed by Browder and colleagues, positing the following seven general influences: (1) resources, (2) teacher characteristics, (3) curriculum, (4) instructional effectiveness, (5) quality of student data, (6) student characteristics, and (7) characteristics of the state’s assessment and accountability system.

The extent to which the tenets of SBR theory are being effectively applied for students with significant cognitive disabilities is a compelling question and one that is guiding the NSAA state survey. Below, we present a working theory of SBR to provide a framework for the NSAA state survey. This theory of action was developed from the one articulated by the Commission on Behavioral and Social Sciences and Education, a review of the literature regarding SBR, and the recent work on applying SBR to students with significant cognitive disabilities (Browder et al., 2003; Karvonen et al., 2006; Marion & Pellegrino, 2006; Quenemoen et al., 2002).

Exhibit 1 provides a framework for understanding how the major components of SBR may apply to students with significant cognitive disabilities. The Matrix of Standards-Based Reform and Key Concepts (see Key Concepts Matrix included with this submission) further articulates the links between the state survey items and the subcomponents of the model for these students and guided the development of the NSAA state survey. Inherent in the model is the need for each level of the education system—state, district, school, and classroom—to respond appropriately.

Exhibit 1

A Theory of Action: SBR and Students With Significant Cognitive Disabilities



Standards, assessments, accountability (Box 1)

For the purposes of this study, the first box of the SBR model will be assumed. That is, the study will only include states in which alternate assessments have been stable for at least three years and state assessment systems have received approval under the NCLB Peer Review Process. The major reason for this decision is to prevent the results of the study from simply reflecting the turbulence associated with changing and implementing new alternate assessments, as many states are currently doing.

Clear expectations and motivation (Box 2)

Stakeholder understanding and support. For SBR to have positive effects on students and teachers, all stakeholders, especially district and school leaders, must respond in supportive and constructive ways. For example, the instructional leadership of the school principal (Borman, Hewes, Overman, & Brown, 2003; Nettles & Herrington, 2007; Waters, Marzano, & McNulty, 2003) and the amount of time allocated to instruction by district and school leadership (Gullatt, 2006; Mirrel, 1994) are associated with student academic achievement. It is essential that district and school leaders understand what the state academic content and alternate academic achievement standards mean for students with significant cognitive disabilities and their teachers, and that they understand the challenges facing front-line special educators and the nature of the instructional change required within schools and classrooms (Karvonen et al., 2006; Wakeman, Browder, Flowers, & Ahlgrim-Delzell, 2006).

The NSAA state survey will gather information from special education teachers regarding their understanding of SBR and their perceptions of stakeholder understanding of and support for the academic content and alternate academic achievement standards.

Guideposts for improvement. Assessments serve a number of functions, such as guiding instructional decisions, monitoring progress, and holding schools and districts accountable (Elmore & Rothman, 1999). When an assessment is well aligned, covering the length and breadth of the state’s academic content standards, it can provide valuable information at the school and district levels and also can be used in conjunction with classroom assessments to provide information about individual students’ strengths and weaknesses to guide instruction.

The NSAA state survey will gather information on the extent to which teachers perceive results from the alternate assessment to be a true reflection of student knowledge and skills in academic content and whether they use results from the alternate assessment to make changes in classroom instruction in academic content.

Consequences: Rewards, interventions, and sanctions. SBR requires that schools and local districts be accountable for results and for ensuring that student learning improves. According to the Commission on Behavioral and Social Sciences and Education (Elmore & Rothman, 1999), accountability creates an incentive for teachers and administrators to improve student results because there are consequences for schools and districts based on student performance. Under NCLB, school systems are rewarded when all students and all significant subgroups meet or exceed annual measurable objectives in reading/English language arts and mathematics. Conversely, if they do not meet or exceed annual measurable objectives, a series of interventions and sanctions are implemented. Karvonen et al. (2006) and Flowers, Ahlgrim-Delzell, Browder, and Spooner (2005) reported that teachers in states that had included alternate assessment scores in their accountability systems before being required to do so by NCLB were more invested in the process and identified more benefits for students in terms of progress and access to the curriculum than did teachers in states that did not include alternate assessment scores in their accountability systems.

The NSAA state survey will gather information from teachers on their perceptions of the consequences to themselves or the school and district arising from alternate assessment results and on their perceptions of the usefulness of including alternate assessment results in school and district accountability systems.

Professional capacity and resources (Box 3)

Teacher capacity to teach academic content standards. Elmore and Rothman (1999) questioned the premise that holding schools accountable for results would in and of itself provide the motivation to improve results. The Commission articulated two concerns. First, it questioned the premise that the field of education understands how to educate all children to meet high academic standards. Second, it questioned whether teachers indeed had access to high-quality professional development focused on enhancing their capability to teach the state's academic content standards. In recognition of these concerns, the Commission developed an expanded theory that placed the focus on teaching and learning and highlighted, in particular, the need to build the capacity of teachers to deliver high-quality instruction in the state's academic content standards. For the teachers of students with significant cognitive disabilities, this is one of the most critical issues.

The NSAA state survey will gather information from special education teachers on their perceptions of the quality and quantity of training they receive to instruct and assess students with significant cognitive disabilities in academic content areas of reading/English language arts, mathematics, and science.

Provision of instructional materials. The educational experience of students with significant cognitive disabilities traditionally has been very individualized and based on functional or practical curricula; in many instances, teacher resources and materials are geared to the individual student. It is therefore important for teachers of students with significant cognitive disabilities to have access to instructional materials, textbooks, equipment, and other resources related to the academic content specified in standards that they are expected to teach and that students are expected to learn (Browder et al., 2003).

The NSAA state survey will gather information from special education teachers on their perceptions of the availability of standards-based instructional materials for their use.

Opportunity to learn academic content (Box 4)

A key component of the standards-based theory of action is the creation of equity across schools and classrooms by providing all students, regardless of where they reside or the schools they attend, with access to the same content standards as other students (Resnick & Zurawsky, 2005). Inherent in this is the expectation that classroom practice will change in an SBR system because districts and schools will create their own curriculum and instructional programs to provide all their students with an opportunity to learn the state's content standards.

The NSAA state survey will gather information on teachers' perceptions of the opportunity provided for students with significant cognitive disabilities to learn academic content.

Improvement in student performance (Box 5)

The theory of action behind SBR holds that when its components work together, the result will be higher levels of learning for all students (Elmore & Rothman, 1999). As Marion and Pellegrino (2006) and others (Quenemoen et al., 2002) point out, both NCLB and IDEA have moved alternate assessments—and therefore students with significant cognitive disabilities—firmly into the world of standards-based accountability. In addition, evidence is growing that students with significant cognitive disabilities are benefiting from this move and that they not only are learning but also are learning at higher levels than previously thought possible (Marion & Pellegrino, 2006).

The NSAA will not directly analyze trends in student performance on alternate assessments for several reasons. First, no data on student performance before the introduction of the alternate assessments are available—the alternate assessments are both the intervention and the source of data. Second, alternate assessments in the states under study will have been stable for approximately three years, which is too brief an interval to allow confidence that any effects would be discernible in student performance data. Finally, it is methodologically impossible to distinguish the effects of alternate assessments from other factors that might influence student performance.

To date, the NSAA is the only in-depth and comprehensive national study of alternate assessments. Members of Congress, Department of Education program and evaluation staff, state and local policymakers, researchers, and practitioners need the information that will be compiled in this study to help ensure that this and future federal programs have the intended effect of supporting students with disabilities, including those with significant cognitive disabilities, to have the same opportunity to achieve high standards and be held to the same high expectations as all other students in each state.

2. Use of Information

The data collected for the state survey will be used to gain a better understanding of how implementing AA-AAS for students with significant cognitive disabilities affects the preparation of teachers to instruct and assess these students and affects the students' instructional experiences. The state survey will examine teacher perceptions of alternate assessments; for example: Do stakeholders understand and support the alternate assessment system? Are teachers equipped to teach academic content standards? Do results provide guidance for classroom instruction and resources? Are instructional materials aligned with and focused on academic content standards? Are the results of the alternate assessment transparent? Are there consequences and interventions based on alternate assessment results? Do teachers perceive that all students are provided the opportunity to learn academic content? Are improvements in student performance observed?

The results of the NSAA state survey will be used by

  • U.S. Department of Education (ED) staff to disseminate findings of the study to federal, state, and local policymakers and to researchers and educators;

  • Congress (the Health, Education, Labor, and Pensions Committee of the Senate and the Committee on Education and the Workforce of the House of Representatives) to inform future legislation for promoting access and accountability for students with significant cognitive disabilities; and

  • Researchers, who may use the data to inform future studies on alternate
    assessments—their technical adequacy, alignment, accountability, and implementation.

3. Use of Information Technology

During the state survey period, a toll-free telephone number and an e-mail address will be available to permit respondents to contact the contractor with questions or requests for assistance. The telephone number and e-mail address will be included on all data collection instruments, both in hard copy and in the online version. In addition, a data tracking system will be developed and hosted by IES to ensure data security. The system will be a real-time web-based tool and will serve as the primary communication tool among NSAA study staff and as the host for the online version of the survey. The data tracking system will be able to generate response rate reports.

4. Efforts to Identify Duplication

This data collection activity is one of ED’s primary efforts to evaluate AA-AAS, including programmatic outcomes associated with the implementation of these assessments and the quality of standards and accountability systems. The contractor is working to minimize the potential burden on participating states by working with ED to collect from states only data that are not available from secondary sources or are not being collected by other research studies supported by the federal government. The NSAA has established collaborative relationships with other research and technical assistance projects to avoid duplication of efforts and maximize the shared use of data.

5. Methods to Minimize Burden on Small Entities

No small businesses or entities will be involved as respondents.

6. Consequences If Information Is Not Collected or Is Collected Less Frequently

Failure to collect this information will prevent Congress and ED from evaluating important aspects of the quality of alternate assessments, as mandated under IDEA. The study will be collecting information that has not been systematically acquired and analyzed by other data collection efforts for alternate assessments. This information will be collected only once.

7. Special Circumstances

None of the special circumstances listed apply to this data collection.

8. Federal Register Comments and Persons Consulted Outside the Agency

A notice was published in the Federal Register for a 60-day public comment period, and no comments were received. In addition, throughout this study the contractor has drawn on the experience and expertise of a technical working group (TWG) that provides a diverse range of experience and perspectives, including representatives from the school district and state levels, as well as researchers with expertise in relevant methodological and content areas. The members of this group and their affiliations are listed in exhibit 2. The full TWG met on April 23, 2007, to discuss the design of the state survey. A subset of the full membership (indicated with *) was involved in the refinement of the state survey activities and met most recently as a group on January 7, 2008, to provide feedback on the NSAA state survey; that subset of the membership also provided ongoing consultation throughout March and April 2008.

Exhibit 2
Technical Working Group Membership: April 23, 2007


Member (Affiliation): Interest and Expertise

Patricia Almond (University of Oregon): Sampling methodology, large-scale assessments, quantitative methodologies, alternate assessment methodologies, students with significant cognitive disabilities, special education policy research

*Diane Browder (University of North Carolina): Large-scale assessments, alternate assessment methodologies, students with significant cognitive disabilities, special education policy research

Stephen Elliott (Vanderbilt University): Large-scale assessments, alternate assessment methodologies, psychometrics, special education policy research

Janet Filbin (Jefferson County School District, Colorado): Educational document analysis, large-scale assessments, alternate assessment methodologies, students with significant cognitive disabilities

Margaret Goertz (University of Pennsylvania): Educational document analysis, case study methodology, special education policy research, accountability

*Jacqui Kearns (University of Kentucky): Case study methodology, large-scale assessments, alternate assessment methodologies, students with significant cognitive disabilities, special education policy research

*Scott Marion (The National Center for the Improvement of Educational Assessment, NCIEA): Psychometrics, accountability, alternate assessment

Kevin McGrew (Institute for Applied Psychometrics): Sampling methodology, large-scale assessments, quantitative methodologies, alternate assessment methodologies, psychometrics, students with significant cognitive disabilities, special education policy research

*Gerald Tindal (University of Oregon): Sampling methodology, large-scale assessments, quantitative methodologies, alternate assessment methodologies, students with significant cognitive disabilities, special education policy research

Elizabeth Towles-Reeves (University of Kentucky, NAAC): Large-scale assessments, alternate assessment methodologies, students with significant cognitive disabilities, special education policy research

Shawnee Wakeman (University of North Carolina): Large-scale assessments, alternate assessment methodologies, students with significant cognitive disabilities, special education policy research

*Dan Wiener (Massachusetts State Department of Education): Large-scale assessments, alternate assessment methodologies, special education policy research, state implementation and accountability

* Indicates a member of the subset of the full membership that was involved in the refinement of the state survey activities and met on January 7, 2008.

9. Respondent Payments or Gifts

Attaining high response rates is extremely important to the validity and representativeness of the data the NSAA will collect with this survey. Therefore, NSAA proposes to attach a $5 bill to the initial mailing as a token incentive and to send a check for $35 to each respondent after receiving his/her completed questionnaire. This approach to incentives is supported by research, and the total incentive of $40 is consistent with guidelines for the use of incentives developed by IES for evaluations conducted by the National Center for Education Evaluation and Regional Assistance (Institute of Education Sciences, 2005).

Considerable research shows that token incentives given with the request to complete a survey consistently improve response rates (Church, 1993; Dillman, 2000; James & Bolstein, 1990, 1992). According to Dillman, “The providing of a tangible incentive, even a token one, is effective because it evokes a sense of obligation which can easily be discharged by returning the completed survey” (p. 16). Research has indicated that sending a token incentive with the request (up front) is more likely to generate a response than promising a greater incentive once the survey is completed and returned (Dillman, 2000).

The success of this incentive plan will be enhanced by the fact that all the respondents are teachers. Unlike principals and other administrators, who often have assistants and secretarial staff who open and process their mail, teachers generally open their own mail. This means that the enclosed incentive money will go directly to them and will have a direct impact on the likelihood of their completing and returning the survey.

A meta-analysis of research on the impact of monetary incentives on response rates concluded that “there seems to be little question that including monetary incentives increases response rates” (Boser & Clark, 1995, p. 9). Much of the literature supports the efficacy of cash incentives over checks, other types of incentives, or no incentives (Collins, Ellickson, Heys, & McCaffrey, 2000; Dillihunt, 1984; Green & Hutchinson, 1996; Hopkins & Gullickson, 1989; Wilde, 1988). For example, a meta-analysis of 62 studies conducted by Hopkins and Gullickson (1992) found that including monetary incentives increased response rates, on average, by 19 percent. Larger monetary incentives usually lead to higher response rates, but not necessarily in proportion to the additional cost to the study. For example, Collins et al. (2000) found that the response rate to a mail survey increased by 7 percent when the incentive amount increased by 25 percent.

Beyond the amount of the incentive is the issue of its timing. Research findings from multiple studies support the contention that promised incentives produce lower response rates than enclosed incentives (Boser & Clark, 1995; Green & Hutchinson, 1996; Hopkins & Gullickson, 1989; Wilde, 1988). For example, Hopkins and Gullickson (1989, p. 4) found that “Enclosing a gratuity increased the absolute response rate by almost three times as much as did the promised gratuity…This occurred in spite of the fact that the promised gratuities tended to be larger.”

In 2005, in response to a request from OMB, IES developed guidelines for the use of response incentives by the National Center for Education Evaluation and Regional Assistance (NCEE), and the incentives proposed for the NSAA survey are consistent with these guidelines. Two considerations set forth in the NCEE guidance are of particular relevance: (1) the target population of the NSAA survey is relatively small—teachers who administer AA-AAS; and (2) the NSAA survey places a demand on the respondents in terms of both time and expertise. The survey includes an extensive set of items on the implementation of alternate assessments and a fine-grained analysis of the enacted curriculum for a student with a significant cognitive disability and how that curriculum aligns with general standards.

The NCEE guidelines include the following table of proposed incentives and payments based on the type of data collection or activity, the time required to provide the responses, and the level of burden involved (exhibit 3).


Exhibit 3
NCEE Proposed Guidelines for Incentives and Payments1

                                        Level of Incentive
Response Type                           Low Burden        Medium Burden     High Burden
Teacher assessment                      $25               $50               $100
Teacher rating of students              $3 per student    $5 per student    $10 per student
Parent or student survey/interview      $15               $25               $50
Student assessment                      $50               $75               $100
Teacher or principal survey             $10               $20               $30

1 For surveys: low burden = 10-minute survey of basic background, classroom, or school characteristics; medium burden = 20-minute survey of classroom or parental practice or school environment; high burden = 30-minute survey of detailed information on instructional practice, school-level interventions, or parent/student histories and experiences. For teacher assessments: low burden = classroom observations; medium burden = 30-minute survey of teacher knowledge and skills; high burden = formal assessment of teacher knowledge and skills with a normed test. For student assessments: low burden = individually scheduled assessment in school; medium burden = student must travel to test administration site; high burden = student and parent must travel to test administration site.



The NSAA teacher survey consists of two parts. The first part surveys teachers' experiences and perceptions and requires approximately 30 minutes to complete. Thus, this part of the survey would be considered “high burden,” and an incentive of $30 would be consistent with the NCEE guidelines. The second part of the survey involves identifying a “target student” and completing a rating of the student and his/her program. This part requires an estimated 25 minutes to complete, which might qualify it as a high-burden teacher rating of a student, justifying an additional incentive of $10 under the NCEE guidelines and resulting in a total incentive of $40.

OMB (2007) has expressed concerns about the use of response incentives in federal information collections. However, the response incentives are necessary and justified for the NSAA teacher survey for reasons that are recognized in the 2007 guidance: high burden on the respondent, improved coverage of a specialized population (teachers of students with significant cognitive disabilities), and improved data quality associated with higher response rates.

Exhibit 4 shows the cost of the proposed incentive program.

Exhibit 4
Maximum Incentive Costs for State Survey Activities

Activity: Screening Questionnaire followed by the NSAA State Survey of Teachers

Incentive     Number of Surveys per State    Number of States    Total Number of Surveys    Total Cost
$5 cash       270                            3                   810                        $4,050
$35 check     200*                           3                   600                        $21,000
Total                                                            1,410                      $25,050

* Approx. 75% of the initial sample is expected to meet the screening criteria.
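
The totals in exhibit 4 follow directly from the sampling and incentive design described above. The short sketch below simply reproduces that arithmetic; all figures are taken from the text, and the 200-per-state figure reflects the stated assumption that roughly 75 percent of the 270 sampled teachers will meet the screening criteria.

```python
# Arithmetic check of the maximum incentive costs in exhibit 4.
states = 3
initial_per_state = 270      # screening questionnaires mailed per state
eligible_per_state = 200     # expected to qualify for the full survey (~75% of 270)

initial_surveys = states * initial_per_state    # 810
full_surveys = states * eligible_per_state      # 600

cash_cost = initial_surveys * 5    # $5 bill enclosed with each initial mailing
check_cost = full_surveys * 35     # $35 check after each completed survey

print(initial_surveys + full_surveys, cash_cost + check_cost)
# -> 1410 25050  (matches the exhibit totals of 1,410 surveys and $25,050)
```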

10. Assurances of Confidentiality

Respondents are assured that confidentiality will be maintained, except as required by law. The following statement concerning confidentiality will be included in the letters to respondents:

The collection of information in this study is authorized by Public Law 108-446, Section 664(c). Participation is voluntary. Your responses are protected from disclosure by federal statute (PL 107-279 Title I, Part C, Sec. 183). All responses that relate to or describe identifiable characteristics of individuals may be used only for statistical purposes and may not be disclosed, or used, in identifiable form for any other purpose, unless otherwise compelled by law. Data will be combined to produce statistical reports. No individual data that links your name, address, telephone number, or identification number with your responses will be included in the statistical reports.

The design of the study addresses state and local concerns regarding the Family Educational Rights and Privacy Act (FERPA) and operates in accordance with the Privacy Act of 1974, as amended (5 U.S.C. 552a). NSAA data are gathered exclusively for statistical and research purposes, without identifying individuals. Specific steps to help preserve confidentiality are discussed below.

ED and its contractors are dedicated to maintaining the confidentiality of information on human subjects and sensitive data. Responses to this data collection will be used only for statistical purposes. The reports prepared for this study will summarize findings across the sample and will not associate responses with a specific district or individual. We will not provide information that identifies respondents to anyone outside the study team, except as required by law. The contractors recognize the following minimum rights of every subject in the study: (1) the right to accurate representation of the right to privacy, (2) the right to informed consent, and (3) the right to refuse participation at any point during the study. States will be assured of confidentiality to the extent allowed by law in the initial invitation to participate in the study, and this assurance will be reiterated to specific respondents at the time data collection begins. The NSAA has established a set of standards and procedures to safeguard the privacy of participants and the security of data as they are collected, processed, stored, and reported. These standards and procedures are summarized below.

  • Project team members will be educated about the confidentiality assurances given to respondents and about the sensitive nature of materials and data to be handled. Each person assigned to the study will be cautioned not to discuss confidential data and will be required to sign a written statement attesting to his or her understanding of the significance of this requirement.

  • Participants will be informed of the purposes of the data collection and the uses that may be made of the data collected.

  • Access to the database will be limited to authorized project members. Multilevel user codes will be used, and entry passwords will be changed frequently.

  • The online survey will be hosted on a secure website that meets federal security requirements under the Federal Information Security Management Act (FISMA).

  • All completed hard-copy surveys and online survey data will be stored in secure areas accessible only to authorized staff members. Computer-generated printouts containing identifiable data will be maintained under the same conditions.

  • As required, data tapes or disks containing sensitive data will be degaussed before being reused.

  • All basic computer files will be duplicated on backup disks to allow for file restoration in the event of unrecoverable loss of the original data. These backup files will be stored under secure conditions in an area separate from the location of the original data.

In addition, SRI maintains its own Institutional Review Board (IRB). All proposals for studies in which human subjects might be used are reviewed by SRI’s Human Subjects Committee, appointed by the President and Chief Executive Officer. For consideration by the reviewing committee, proposals must include information on the nature of the research and its purpose; anticipated results; the subjects involved and any risks to subjects, including sources of substantial stress or discomfort; and the safeguards to be taken against any risks described. The NSAA has received IRB approval from SRI’s Human Subjects Committee.

11. Questions of a Sensitive Nature

No questions of a sensitive nature will be included in the state survey of teachers who administer alternate assessments based on alternate achievement standards.

12. Estimate of Hour Burden

The estimates in exhibit 5 reflect the burden for participation in the state survey activities.

Exhibit 5
Estimated Burden for State Survey Activities

Activity                                  No. of Participants    No. of Hours per Activity    Total No. of Hours    Estimated Burden
State Survey Screening Questionnaire      810                    10 minutes (0.167 hr)        135                   $4,800
State Survey of Teachers                  600*                   1 hour                       600                   $24,000
Total                                                                                         735                   $28,800

* Approx. 75% of the initial sample of teachers is expected to meet the screening criteria to participate in the full survey.

13. Estimate of Cost Burden to Respondents

No additional respondent costs are associated with this data collection other than the hourly burden estimated in item 12.

14. Estimate of Annual Costs to the Federal Government

The annual costs to the federal government for the state survey activities are as follows.

Fiscal year 2008    $381,915
Fiscal year 2009    $659,039
Fiscal year 2010    $91,766
Total               $1,132,720

These costs support designing the survey, developing the instrument, obtaining OMB clearance, and developing the sample in FY 2008; recruiting the states and respondents, programming the web version of the survey, collecting data, and drafting the report in FY 2009; and finalizing the report and preparing data tables and documentation in FY 2010.

15. Change in Annual Reporting Burden

This request is for a new information collection.

16. Plans for Tabulation and Publication of Results

NSAA will present and analyze the data from the state survey of three states in a written report according to the schedule in exhibit 6. Additionally, NSAA will produce and provide ED with state survey data, data tables, and data documentation on CD-ROM. (See the Schedule of Tasks and Deliverables included in this submission package.)

Exhibit 6
Schedule for Dissemination of State Survey Results

Activity/Deliverable                            Due Date
Report, first draft                             7/31/09
Report, final version                           9/30/09
Data, tables, and documentation on CD-ROM       3/31/10

Data sources

The study researchers will analyze quantitative data gathered through the NSAA state survey (see NSAA State Survey included in this submission) for each state. The survey consists of items from two existing instruments, Parts 1 and 2 of the Curriculum Indicator Survey (CIS) and the entire Learner Characteristics Inventory (LCI), with additional items created by the NSAA team. Team members mapped the CIS and LCI items to the key concepts (see Key Concepts Matrix included with this submission) underlying the SBR model for students with significant cognitive disabilities. The NSAA identified concepts in the SBR model that were not addressed by Part 1 or 2 of the CIS or by the LCI and developed additional questions to create the full NSAA state survey. To streamline presentation, the individual components of the survey are not distinguishable in either the hard-copy or online version. The source of each item is indicated on the survey provided in this submission to facilitate understanding of the relationships of the survey items to components of the SBR model, but sources will not appear on the version teachers complete.

The CIS was developed by researchers at the National Alternate Assessment Center (NAAC) with experts in occupational therapy, physical therapy, speech/language therapy, deaf-blindness, reading, mathematics, and special education. The CIS consists of five parts. Part 1 asks for background information on the teacher, his or her caseload, and instructional influences in each academic subject area. Part 2 asks the teacher to provide information on the types of students on his or her caseload, based on the students' level of symbolic communication. Parts 3–5 measure the enacted curriculum for a target student but are not included in the NSAA survey for reasons of burden. Preliminary investigation of the validity of the CIS has been conducted (Karvonen, Flowers, Browder, & Wakeman, 2007; National Alternate Assessment Center, 2007). (See LCI and CIS Validation included in this submission.)

The LCI also was developed by researchers at the NAAC with experts in occupational therapy, physical therapy, speech/language therapy, deaf-blindness, reading, mathematics, and special education. The purpose of the LCI is to describe the population of students who take AA-AAS. Teachers are asked to rate where a student falls on a continuum from low to high, with high representing more complex abilities, in the following areas: expressive language, receptive language, vision, hearing, motor skills, engagement, health issues/attendance, reading, mathematics, and use of augmentative communication systems. The instrument includes 10 questions: 9 address the continuum of skills, and 1 is a dichotomous item regarding the student's use of adaptive technology. The LCI went through expert validation and piloting (Towles-Reeves, Kearns, Kleinert, & Kleinert, in press). (See LCI and CIS Validation included in this submission.) The NSAA TWG recommended the use of the LCI as a component of the NSAA state survey.

Survey analyses

The analysis of the survey data will be driven by the conceptual framework provided by the standards-based reform model underlying the survey instrument described earlier. The analysis is intended to describe the characteristics of, and the relationships among, the model components of expectations and motivation, professional capacity and resources, and opportunity to learn. The first analytic purpose will be to describe the distributions of the variables within each component of the model, using conventional frequency distributions, cross-tabulations, and measures of central tendency. Within the model component of expectations and motivation, this includes descriptive analyses of understanding of and support for academic content and achievement standards (4 items), stakeholder understanding and support (5 items), clear expectations (4 items), results of alternate assessments guiding instruction (2 items), and consequences and motivation (3 items). Within the model component of professional capacity and resources, descriptive analyses will be conducted in the areas of professional capacity (15 items) and resources (3 items). In addition, descriptive analyses will be conducted on teacher characteristics (7 items) and on characteristics of target students (15 items). For each variable, particular attention will be paid to the shape of the distribution, evidence of skew, unusually high-frequency responses, and the number and type of outliers. Across variables within each component, the degree to which distributions follow a pattern or patterns, as well as instances of variables that depart from observed patterns, will be evaluated. This will enable descriptions of the model components in terms of commonalities and divergence across variables.
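
To illustrate this descriptive step, the following minimal sketch shows the kinds of tabulations described above. The file name and item names are hypothetical placeholders introduced for illustration, not fields of the actual NSAA instrument.

```python
# Illustrative sketch only: the CSV layout and item names are hypothetical,
# not part of the actual NSAA instrument.
import pandas as pd

# One row per teacher respondent.
df = pd.read_csv("nsaa_state_survey.csv")

item = "stakeholder_support_q1"  # e.g., a 4-point agreement-scale item

# Frequency distribution (proportions) for a single item.
print(df[item].value_counts(normalize=True).sort_index())

# Central tendency and skew, to flag asymmetric or outlier-heavy distributions.
print(df[item].describe())
print("skew:", df[item].skew())

# Cross-tabulation of two items within the same model component.
print(pd.crosstab(df["clear_expectations_q2"], df[item], normalize="index"))
```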

A second analytic strategy is to test the associations of variables both within and across model components. Several types of associations will be examined, including the relationships among the three model components in question. Simple cross-tabulations will be conducted using variables associated with specific components. For example, variables pertaining to results of alternate assessments guiding instruction (2 items) will be cross-tabulated with those from professional capacity (15 items). As with the within-component analyses, the cross-tabulations across model components will allow assessment of any patterns that might exist, as well as variables or constructs that depart from the primary pattern.

In addition to describing the survey data in light of the standards-based reform model, a number of specific comparisons will be made to test for differences in specific variables and/or model components as a whole. Several individual student differences call for such comparisons, including level of symbolic communication, primary disability category, and language use. For example, cross-tabulations of various components of the standards-based reform model will compare results for students with no symbolic communication with results for students who use symbolic communication. In addition, comparisons of model components based on several types of teacher characteristics and responses, such as years of experience and capacity to implement alternate assessments, will be conducted. Depending on the measurement level of the variable, tests of statistical significance via chi-square, t, or F tests will be conducted. Because this process will result in a sizable number of comparisons, the Benjamini-Yekutieli method (Benjamini & Yekutieli, 2001) of controlling the false discovery rate under conditions of dependence will be used.
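
A sketch of how this testing-and-correction step might be implemented follows; the item names are again hypothetical placeholders. The statsmodels option "fdr_by" implements the Benjamini-Yekutieli procedure cited above.

```python
# Illustrative sketch: item names are hypothetical placeholders.
import pandas as pd
from scipy.stats import chi2_contingency
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("nsaa_state_survey.csv")

# Cross-tabulate each "results guide instruction" item against each
# "professional capacity" item, collecting one chi-square p-value per pair.
guide_items = ["results_guide_instruction_q1", "results_guide_instruction_q2"]
capacity_items = [f"professional_capacity_q{i}" for i in range(1, 16)]

pairs, pvals = [], []
for g in guide_items:
    for c in capacity_items:
        chi2, p, dof, expected = chi2_contingency(pd.crosstab(df[g], df[c]))
        pairs.append((g, c))
        pvals.append(p)

# Control the false discovery rate under dependence (Benjamini & Yekutieli, 2001).
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_by")
for (g, c), p, sig in zip(pairs, p_adj, reject):
    print(f"{g} x {c}: adjusted p = {p:.3f}" + ("  *" if sig else ""))
```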

Weighting survey results. The survey data will be weighted both to estimate the number of teachers within a given state who have administered alternate assessments based on alternate achievement standards and to estimate the number of students taking such assessments in the state. For teachers, a representative random sample of 270 teachers will be selected in each state. It is estimated that 200 teachers per state will have a current caseload or classroom and teach at least one student with significant cognitive disabilities who takes an alternate assessment based on alternate achievement standards and, therefore, will be eligible to complete the survey. With an 80 percent response rate, we expect at least 160 respondents per state to complete the survey. This will allow for a standard error for a dichotomous variable (associated with the teacher, such as teacher background) of not more than 0.04. For students, each teacher will be asked to report the number of students for whom the teacher will conduct an alternate assessment based on alternate achievement standards and will be given instructions to randomly select one of those students. The teacher will then be asked a series of questions about that student. These students will be weighted to represent all students in the state who received an alternate assessment based on alternate achievement standards. If we assume that the number of eligible students per teacher is uniformly distributed from 1 to 10, then the standard error for a dichotomous variable (associated with students, such as the student's ability level) will be no more than 0.05.
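
The standard-error bounds above can be reproduced with a short calculation. The sketch below does so under two assumptions of ours that the study plan does not state explicitly: a worst-case proportion of 0.5 for the dichotomous variable, and Kish's approximation deff = E[w^2] / E[w]^2 for the variance inflation due to unequal student weights.

```python
# Worked check of the standard-error claims above. The worst-case p = 0.5
# and the Kish design-effect approximation are our assumptions, not part
# of the study plan.
import math

n = 160   # expected respondents per state (80% of the 200 eligible)
p = 0.5   # worst case for a dichotomous variable

se_teacher = math.sqrt(p * (1 - p) / n)
print(f"teacher-level SE: {se_teacher:.4f}")   # ~0.0395, under the 0.04 bound

# One target student per teacher, weighted by the teacher's number of
# eligible students, assumed uniform on 1..10. Unequal weights inflate
# variance by roughly deff = E[w^2] / E[w]^2.
weights = range(1, 11)
ew = sum(weights) / 10                  # 5.5
ew2 = sum(w * w for w in weights) / 10  # 38.5
deff = ew2 / ew**2                      # ~1.27

se_student = se_teacher * math.sqrt(deff)
print(f"student-level SE: {se_student:.4f}")   # ~0.0446, under the 0.05 bound
```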

17. OMB Expiration Date

All data collection instruments will include the OMB expiration date.

18. Exceptions to Certification Statement

No exceptions are requested.
