Approval Form for DOI Programmatic Clearance for Customer Satisfaction Surveys (OMB Control Number 1040-0001, Expiration Date: March 31, 2012)
U.S. Department of the Interior
Office of Policy Analysis (PPA)
CSS-4
PPA Tracking Number: (for PPA use only)
Date Submitted to PPA: 02/25/2010
1. Survey Title: Parent Satisfaction Survey – Special Education
2. Bureau: Bureau of Indian Education, Division of Performance and Accountability
3. Abstract: (not to exceed 150 words)
The Parent Satisfaction Survey – Special Education is for parents of students with disabilities
enrolled in Bureau of Indian Education (BIE) funded schools. The survey is designed to gather
information about school interactions with and support of parents to better meet the educational
needs of their students with disabilities. This twenty-five (25) question customer satisfaction survey, developed by the National Center for Special Education Accountability Monitoring (NCSEAM) as part of a larger survey, has been statistically validated as sufficient to meet the required reporting to the Department of Education’s Office of Special Education Programs (OSEP).
Recognized survey development methods (Rasch measurement) were used, including a national validation study. Standards were set and an item bank of questions calibrated. Native American parents were part of the validation study, which makes this survey appropriate for parents of students in BIE-funded schools. BIE is required to include
baseline parent survey data in its February 2012 Annual Performance Report (APR) to the
Department of Education’s OSEP. (20 U.S.C. 1416(a)(3)(A))
4. Bureau/Office Point of Contact Information
First Name: Donald
Last Name: Griffin
Title: Education Specialist
Bureau/Office: BIE/Division of Performance and Accountability
Street Address: 1011 Indian School Road NW, Suite 332
City: Albuquerque
State: NM
Zip code: 87104
Phone: (505) 563-5384
Fax: (505) 563-5281
Email: [email protected]
5. Principal Investigator (PI) Information
First Name: Patricia
Last Name: Abeyta
Title: Education Research Analyst
Bureau/Office: BIE/Division of Performance and Accountability
Address: 1011 Indian School Road NW, Suite 332
City: Albuquerque
State: NM
Zip code: 87104
Phone: (505) 563-5272
Fax: (505) 563-5281
Email: [email protected]
6. Name of Program or Office Conducting Survey: Division of Performance and Accountability
7. Description of Customers/Services Provided:
Customers: Parents of students with disabilities attending Bureau-funded schools.
Services: Special Education/IDEA compliance
8. Survey Dates (mm/dd/yyyy): 01/17/2011 to 05/13/2011
9. Type of Information Collection Instrument (Check ALL that Apply)
_X_ Intercept   __ Telephone   __ Mail   __ Web-based   __ Focus Groups   __ Comment Cards   __ Other (Explain:)
10. Survey Development:
(Who assisted in survey content development and statistics? Was the survey pretested? How were
improvements integrated? Which of the six topic areas will be addressed?)
This survey is a part of a larger survey that was developed by the National Center for Special
Education Accountability Monitoring (NCSEAM) between 2002 and 2005. To develop that larger survey, stakeholder input was obtained from representatives across the nation; the group generated over 500 items that were submitted to an expert panel. A national validation study of the survey was conducted, during which item responses were obtained in six states (NM, FL, NH, NJ, MS, GA). Parents of Native American students were included in the validation process. NCSEAM maintains an
item bank of calibrated items.
See Attachment B, Development of the NCSEAM Parent/family Survey, NCSEAM Parent Survey
National Item Validation Study Technical Information.
11. Survey Methodology:
(Use as much space as needed; if necessary include additional explanation on separate page).
Respondent Universe
All parents, guardians, or primary caretakers of students with disabilities (students with an Individualized Education Program (IEP) in place) who are attending BIE schools are potential respondents to this survey. Each school will be requested to work toward survey completion with all parents in this category whose students attend the school.
Sampling Plan/Procedure
This survey will be a census survey. All parents of students with disabilities attending BIE schools will be given the opportunity to respond to the parent satisfaction survey. While face-to-face data gathering will be the primary collection process, how that is structured at each school may vary. Parents may be given the opportunity to respond when they come to the school for any reason, e.g., registration, IEP meetings, checking students out of dorms, or parent nights. For parents not reached in this manner, home visits will be made. The list of individuals to be contacted will be generated from demographic data for students with disabilities who have a current IEP.
Instrument Administration
The preferred manner of administration will be face-to-face intercept, with translation into a Native language when needed. This process was used in the development of the survey, and analysis showed no difference in response patterns among Native American, other minority, and white respondents.
Expected Response Rate/Confidence Levels
There are approximately 7,000 students with disabilities (SWD) receiving their education in Bureau-funded schools at any given time. Each school will be asked to contact at least one parent of each SWD for survey completion. A response rate of at least 70% is expected based on: (i) the training that has already taken place to alert schools that this survey will be coming; (ii) the importance of obtaining feedback from as many parents as possible, because this is a required indicator in the BIE State Performance Plan for students with disabilities; (iii) the fact that parents will be attending yearly IEP meetings in the fall and will therefore be accessible; and (iv) the availability of staff to assign specifically to this task.
Strategies for dealing with potential non-response bias
With a face-to-face intercept survey model and the ability to translate for individual parents, a 70% response rate is expected. The survey has four components, but use of only the first section will address the issues needed to report in the 2012 APR to OSEP. By limiting the survey to 25 rather than the possible 95 questions, it is hoped non-responses will be limited. The face-to-face intercept model will be supported by home visits made by Home School Liaison personnel at each school. The potential pool of responses at individual schools ranges from an ‘n’ of one (1) to an ‘n’ of 160, and varied levels of survey completion by school are anticipated. All responses will be collated to provide system-wide data for state reporting (OSEP). Limitations to school-level reporting will be identified if data are not reliable due to the ‘n’ size. Individual school reports will not be made if the ‘n’ is too small to infer valid results or if the ‘n’ is so small as to allow individual student/parent identification.
Description of any pretesting and peer review of the methods and/or instrument (recommended)
See Attachment B, Development of the NCSEAM Parent/Family Survey, NCSEAM Parent Survey National Item Validation Study Technical Information.
12. Total Number of Initial Contacts / Expected Number of Respondents: 7,000 / 4,900
13. Estimated Time to Complete Initial Contact / Instrument (mins.): 5 minutes contact; 15 minutes instrument
14. Total Burden Hours: [(7,000 × 5 min) + (4,900 × 15 min)] / 60 min ≈ 1,808 hours
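For reference, the burden-hour figure in item 14 can be reproduced with the short calculation sketched below. The sketch simply restates the arithmetic above in Python; the variable names are ours, and the figures come from items 12 and 13.

    # Illustrative burden-hour calculation from items 12-14 of this form.
    initial_contacts = 7000        # one contact attempted per student with a disability
    expected_respondents = 4900    # assumes the expected 70% response rate
    contact_minutes = 5            # estimated time per initial contact
    instrument_minutes = 15        # estimated time to complete the instrument

    total_minutes = (initial_contacts * contact_minutes
                     + expected_respondents * instrument_minutes)
    print(round(total_minutes / 60))   # ~1808 hours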
15. Reporting Plan:
Data and the analysis of those data will be used for reporting in the BIE APR to OSEP by February 1, 2012. The information will also be provided to each school so it may be reviewed with the school community, including parents who participated in the survey. The information will be used to improve parent involvement in the educational programming for children with disabilities.
Records will be maintained at BIE/DPA. Copies of all reports will be forwarded to the Office of
Policy Analysis, upon request.
16. Justification, Purpose, and Use:
Survey Justification
and Purpose
The survey referenced in this document will be used to respond to
Indicator 8 of the State Performance Plan (SPP), as required by the Office of Special Education Programs (OSEP). P.L. 108-446, the Individuals with Disabilities Education Act (IDEA), requires the submission of specific data as a requisite to the receipt of federal funding to support services for students with disabilities attending public schools and BIE schools.
Indicator 8 of the SPP addresses parental involvement and requires
reporting the percent of parents who indicate their child’s school
facilitated parent involvement which resulted in improved educational
results for the child.
Survey Goals
The survey goal is to obtain a parent-reported indicator: the percent of parents whose responses indicate that the school involved them in their child’s educational planning and that this involvement improved results for their child.
Utility to Managers
The utility to BIE/DPA will be at several levels. At the Central Office
level the data gathered will be used a) to report as required on this issue
to OSEP, b) to better understand the issue of parent involvement in the
schools, parent satisfaction levels, opportunities for improving services
and plan future system-wide activities to address the issue if needed,
and c) to understand which Bureau funded schools are doing a good job
with the parental involvement issue and which schools would benefit
from technical assistance and support. At the school level, a) there will be a better understanding of how the parents of students with disabilities believe their school is involving them in a meaningful way in the educational planning for their child, b) parents will have been given the opportunity to provide input to the school, and c) the school can address areas identified as needs by the surveys completed at the school.
Managers in both instances described above would be the education
administrators.
How will the results of the survey be analyzed and used?
The surveys will be completed on scan forms, and scoring and analysis will be conducted through a contractor. The survey was developed by a national center under an OSEP-provided grant whose purpose was to systematically gather a set of data about parents of students with disabilities across the United States. Protocols for analysis were developed along with the survey. To maintain the consistency and validity of the survey, use of the same process from data gathering through data analysis is important. See Attachment D for supporting documents, Standard-setting for Use of the NCSEAM Measures to Address the SPP/APR Parent/Family Indicators.
How will the data be tabulated?
Data will be collected on Scantron forms, and tabulation will be done by that system.
What statistical techniques will be used to generalize the results to the entire customer population?
BIE/DPA will contract for analysis with the NCSEAM-identified entity or an equivalent. Acceptable statistical methodology will be used.
How will limitations on use of data be handled?
Limitations in the data generated by the survey will be addressed in the analysis. Limitations will be recognized and their sources identified so that planning for subsequent survey collections can address them in a manner determined by the nature of each limitation.
If the survey results in a lower than anticipated response rate, how will you address this when reporting the results?
If a single school has a low response rate, it will be requested to do face-to-face follow-up. If numbers are lower because the school did not make a strong effort to contact each parent, the school will be asked to do home-visit follow-up. If a parent has been properly given the opportunity to respond and chooses not to do so, that will be recorded. When the desired responses from parents are not obtained, the reporting will identify schools in which there was less than 70% participation so results can be interpreted accordingly. For example: if all schools except those on Hopi achieve a response rate of at least 70%, the report will include as a limitation that the general conclusions may not apply to Hopi. If there is a lower than desired response rate overall for one or more items, these will be identified in the report and, if statistically warranted, identified as a limitation that must be taken into account in interpretation of the resultant data.
Is this survey intended to measure a Government Performance and Results Act (GPRA) performance
measure? If so, please include an excerpt from the appropriate document. (Use as much space as needed;
if necessary include additional explanation on separate page).
This survey is intended to measure an SPP indicator for reporting to OSEP. See indicator as
follows.
(The indicator is taken directly from the OSEP SPP guidance. In the monitoring priority, FAPE means Free Appropriate Public Education and LRE means Least Restrictive Environment.)
Part B State Performance Plan (SPP) 2010 revision
Overview of the State Performance Plan Development:
Monitoring Priority: FAPE in the LRE
Indicator 8: Percent of parents with a child receiving special education services who report
that schools facilitated parent involvement as a means of improving services and results for
children with disabilities.
(20 U.S.C. 1416(a)(3)(A))
Measurement:
Percent = (# of respondent parents who report schools facilitated parent involvement as a means of improving services and results for children with disabilities ÷ total # of respondent parents of children with disabilities) × 100.
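As a quick illustration of the measurement rule above, the sketch below computes the Indicator 8 percentage from hypothetical counts; the numbers are invented for illustration and are not survey results.

    # Illustrative Indicator 8 computation (hypothetical counts, not survey data).
    parents_reporting_facilitation = 3200   # respondents reporting the school facilitated involvement
    total_respondent_parents = 4900         # all respondent parents of children with disabilities

    indicator_8_percent = parents_reporting_facilitation / total_respondent_parents * 100
    print(f"Indicator 8: {indicator_8_percent:.1f}%")   # 65.3%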
APPENDIX A
Parent Survey – Special Education
IA Form #S-1
OMB Control Number 1040-0001
Expiration Date 03/31/2012
Parent Survey – Special Education
This is a survey for parents of students receiving special education services. Your response will help guide efforts to
improve services and results for children and families. For each statement below, please select one of the following
response choices: very strongly disagree, strongly disagree, disagree, agree, strongly agree, very strongly agree. You
may skip any item you feel does not apply to you or your child.
Use pencil only
Fill in circle completely
Schools’ Efforts to Partner with Parents
1) I am considered an equal partner with teachers and
other professionals in planning my child’s program.
2) I was offered special assistance (such as child care)
so that I could participate in the Individualized
Educational Program (IEP) meeting.
3) At the IEP meeting, we discussed how my child
would participate in statewide assessments.
4) At the IEP meeting, we discussed accommodations
and modifications that my child would need.
5) All of my concerns and recommendations were
documented on the IEP.
6) Written justification was given for the extent that
my child would not receive services in the regular
classroom.
7) I was given information about organizations that
offer support for parents of students with disabilities.
8) I have been asked for my opinion about how well
special education services are meeting my child’s
needs.
9) My child’s evaluation report is written in terms I
understand.
10) Written information I receive is written in an
understandable way.
11) Teachers are available to speak with me.
12) Teachers treat me as a team member.
13) Teachers and administrators seek out parent input.
14) Teachers and administrators show sensitivity to
the needs of students with disabilities and their
families.
15) Teachers and administrators encourage me to
participate in the decision-making process.
16) Teachers and administrators respect my cultural
heritage.
17) Teachers and administrators ensure that I have
fully understood the Procedural Safeguards [the rules
in federal law that protect the rights of parents].
[Response bubbles for items 1–17, one row per item: Very Strongly Disagree / Strongly Disagree / Disagree / Agree / Strongly Agree / Very Strongly Agree]
Schools’ Efforts to Partner with Parents
18) The school has a person on staff who is available to
answer parents’ questions.
19) The school communicates regularly with me
regarding my child’s progress on IEP goals.
20) The school gives me choices with regard to
services that address my child’s needs.
21) The school offers parents training about special
education issues.
22) The school offers parents a variety of ways to
communicate with teachers.
23) The school gives parents the help they may need
to play an active role in their child’s education.
24) The school provides information on agencies that
can assist my child in the transition from school.
25) The school explains what options parents have if
they disagree with a decision of the school.
State of Residence
Child’s Grade
Child’s Age in Years
Child’s Age When First Referred to Early Intervention or Special Education:
O Under 1 year   OR   Age in years
Is the child Hispanic or Latino/Latina?   Yes / No (circle one)
Child’s Race (Select one or more)
1 O White
2 O Black / African American
3 O Asian
4 O Native Hawaiian or Pacific Islander
5 O American Indian or Alaska Native
[Response bubbles for items 18–25, one row per item: Very Strongly Disagree / Strongly Disagree / Disagree / Agree / Strongly Agree / Very Strongly Agree]
Child’s Primary Exceptionality / Disability
(Bubble only one)
O Autism
O Deaf-Blindness
O Deafness
O Developmental Delay
O Emotional Disturbance
O Hearing Impairment
O Mental Retardation
O Multiple Disability
O Orthopedic
O Other Health
O Specific Learning Disability
O Speech or Language Impairment
O Traumatic Brain Injury
O Visual Impairment
THANK YOU FOR YOUR
PARTICIPATION !!
Paperwork Reduction Act Statement: This information is collected to properly identify each student’s instructional and residential program classification. The
information is supplied by a respondent to obtain or retain a benefit that is to provide appropriate schooling. It is estimated that responding to the request will take an
average of 20 minutes to complete. This includes the amount of time it takes to gather the information and fill out the form. If you wish to make comments on the form,
please send them to the Information Collection Clearance Officer-Indian Affairs, 1849 C Street, NW, Washington, DC 20240. NOTE: Comments, names and
addresses of commenters are available for public review during regular business hours. If you wish us to withhold this information you must state this prominently at
the beginning of your comment. We will honor your request to the extent allowable by law. In compliance with the Paperwork Reduction Act of 1995, as amended, this
collection has been reviewed by the Office of Management and Budget and assigned OMB Control #1040-0001 and an expiration date of March 31, 2012. Please
note that an agency may not conduct or sponsor, and a person is not required to report to, a collection of information unless there is a valid OMB control number.
APPENDIX B
Development of NCSEAM Parent/Family Surveys
NCSEAM Parent Survey
National Item Validation Study
Technical Information
Development of the NCSEAM Parent / Family Surveys
One of the goals for the National Center for Special Education Accountability Monitoring (NCSEAM)
has been to focus attention on the importance of family participation in early intervention and special
education. In January 2002, NCSEAM established the Parent/Family Involvement Workgroup to
provide guidance on the measures of families’ perceptions and involvement in the early intervention and
special education process. The instrument development work has been coordinated by Dr. Batya
Elbaum, Associate Professor of Education and Psychology at the University of Miami. Dr. William P.
Fisher, Jr., of MetaMetrics, Inc. has served as the project’s measurement consultant.
Several important principles guided development of the surveys:
Content. Instrument content should be generated and vetted by all important stakeholders in the early intervention and special education system, especially families.
Construct Definition. Hypotheses concerning the constructs defined by the stakeholder-generated items should be tested through the application of Rasch modeling.
Reliability. Measurement tools should have a minimum measurement reliability of .90 and yield at least four statistically separable measurement ranges (see the illustrative sketch following this list).
Interpretability. The meaning of the measures should be transparent and easy to understand.
Acceptability. The length and readability of the survey should be kept within parameters acceptable to
the intended respondents.
Usefulness. The measures should have significant, demonstrable relevance to services and results for
families and children.
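The Reliability principle above ties a numeric reliability criterion to a count of separable measurement ranges. The connection can be sketched with the standard Rasch separation and strata formulas (see Wright, 1996, in the bibliography); the sketch below is a generic illustration under those formulas, not NCSEAM’s analysis code, and the function names are ours.

    import math

    def separation_index(reliability):
        # Rasch separation G from reliability R: G = sqrt(R / (1 - R)).
        return math.sqrt(reliability / (1.0 - reliability))

    def strata(reliability):
        # Statistically distinct measurement ranges: (4G + 1) / 3.
        g = separation_index(reliability)
        return (4.0 * g + 1.0) / 3.0

    # A reliability of .90 corresponds to roughly four separable ranges.
    print(round(strata(0.90), 2))   # ~4.33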
Survey content
The measure development process began with a comprehensive review of the literature on (a) legal
requirements and best practice regarding family involvement in early intervention and special education;
(b) theoretical perspectives and empirical studies on the relationship between parent/family involvement
and outcomes for children with disabilities and their families; (c) models of parent involvement and
relevant empirical findings in general education; and (d) instrumentation, particularly surveys and
interview protocols, related to the aforementioned topics.
In spring of 2003, NCSEAM sponsored stakeholder input in 6 states: New Mexico, New Hampshire,
Mississippi, Kentucky, California and Florida. In each state, participants were asked to generate items
representing important aspects of families’ experience with the early intervention and special education process.
The complete list of almost 500 non-duplicated items was submitted to an expert panel convened by
PACER. The panel was asked to rate the importance of each item using a 4-point scale from “not so
important” to “extremely important.” A total of 384 items (78%) were rated as very important or extremely
important. These items were deemed to constitute the core of the NCSEAM survey item bank.
PACER also sent an e-mail inquiry to organizations in its network, asking whether these organizations
had used any survey instruments to evaluate parent involvement in, and/or perceptions regarding, services for children with special needs and their families. Several organizations forwarded questionnaires whose items were
checked against the existing item bank to locate any items with new content. Project Forum, which
works collaboratively with NASDSE, conducted a survey of state directors of special education to
ascertain whether any other states were implementing parent surveys. Several states provided copies of
their existing parent surveys.
Concurrently, Dr. Fisher and Dr. Elbaum undertook a re-analysis of five years of data from the parent
surveys that Florida’s Part B monitoring division had administered since 1999, through a discretionary
project to the University of Miami, in districts participating in its focused monitoring activities. Data
were available for over 30,000 respondents. Results of the Rasch analysis indicated that the items did not reveal a unitary variable structure but rather articulated four separate constructs: (a) schools’ efforts to
partner with parents, (b) parents’ perception of the quality of special education services provided to their
children, (c) parents’ perception of outcomes of services for their children, and (d) parents’ reports of
the way they are involved in the special education process. Item calibrations and person measures were
calculated for each of the four constructs. The items generated through the NCSEAM’s stakeholder
process were then grouped into these four categories plus a fifth category, not represented in the Florida
survey items, addressing family outcomes.
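For readers unfamiliar with the terminology, “item calibrations” and “person measures” are the item and person parameters of a Rasch rating-scale model: each item receives a calibration (how hard it is to endorse) and each respondent a measure (overall agreeability), both on the same logit scale. The sketch below shows the category-probability form of the Andrich rating-scale model in minimal Python; it is a generic illustration under our own assumptions, not the analysis code NCSEAM used (the actual analyses were run in WINSTEPS, per Appendix C).

    import math

    def category_probabilities(person, item, thresholds):
        # Andrich rating-scale model: probability of each rating category 0..m
        # for one person-item encounter, given person measure, item calibration,
        # and category thresholds tau_1..tau_m (all in logits).
        numerators = [1.0]          # category 0 has numerator exp(0)
        cumulative = 0.0
        for tau in thresholds:
            cumulative += person - item - tau
            numerators.append(math.exp(cumulative))
        total = sum(numerators)
        return [n / total for n in numerators]

    # Example: a 6-category agreement scale needs 5 thresholds (values are illustrative).
    probs = category_probabilities(person=1.0, item=0.5,
                                   thresholds=[-2.0, -1.0, 0.0, 1.0, 2.0])
    print([round(p, 3) for p in probs])   # six probabilities summing to 1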
Two conceptual models were developed reflecting the relations among constructs for Part C and Part B.
These are represented below.
[Figure: Early Intervention Model — Early Intervention Partnership Efforts and Quality of Services (services provided to both family and child) lead to Impact on Family and Impact on Child.]
Given that efforts to engage families in a collaborative relationship are central to the provisions of early
intervention services, partnership efforts and quality of services are conceptualized as a single construct
reflecting family-centered services.
[Figure: Special Education Model — Special Education Partnership Efforts and Quality of Services (services provided to the child with the consent, support, and collaboration of the family) relate to Family Participation, Impact on Family, and Impact on Child.]
In the special education model, partnership efforts on the part of the school or districts are reciprocated
through parents’ active involvement in the special education process.
Examination of survey instruments being used in other states revealed a number of tools with similar
item content. Permission was obtained from the New York Part B lead agency and the Connecticut Part
C lead agency to analyze their survey data, redacted of any identifying data. Survey items for each tool
were separately calibrated using the same measurement approach that was applied to the Florida survey
data. Items from the New York survey were found to group into the same four categories as those from
the Florida parent survey. The item calibrations of seven items with similar content from the New York
and Florida surveys were found to have a correlation of r = .98. Calibrations for four Connecticut and
Florida survey items with similar content had a similarly high correlation. These findings provided
strong support for the consistency of the NCSEAM construct definitions and the invariance of item
calibration across different populations of respondents.
In October 2003, draft items articulating each of the posited constructs for Part B and Part C were
reviewed by the NCSEAM Parent Involvement Workgroup. The Workgroup made the following
recommendations:
The Part B items exhaustively covered all content that stakeholders had identified as important to
families. It was noted that some items might be excessively long and/or at too high a reading
level. The workgroup recommended that further input be sought with regard to the Part C instrument. Consequently, in November and December 2003, the Part C items were reviewed by
parent groups in Florida (one location), Tennessee (two locations) and New Jersey (three
locations). The Florida and New Jersey groups included significant representation of Spanish
speaking families. All the groups provided general feedback on the survey as well as specific
recommendations regarding item additions, item deletions and rewording. Additional input was
obtained from university experts in the field of early intervention.
The Workgroup also considered the applicability of either Part B or Part C items, or some combination of the two, to Section 619 (preschool special education). The consensus of the Workgroup was that further work was
required in order to produce relevant and unambiguous items for families receiving early
childhood special education services. Item development for a 619 family survey is expected to be
completed in 2006.
Between October 2004 and February 2005, NCSEAM conducted the National Item Validation Study in
order to obtain item responses from a nationally representative sample of families. Eight Part C Lead
Agencies (NM, FL, LA, MA, IA, CA, NJ, GA) and 6 SEAs (NM, FL, NH, NJ, MS, GA) agreed to
solicit the participation of families in their states. To reduce the response burden on participating
families, the number of items to which any given family would be asked to respond was reduced by
dividing the Part B and Part C items, separately, into three groups: a common group, to appear on each
of two alternate forms; and two unique groups of items, each of which would appear on one form only.
Optically scannable forms were printed and distributed to participating states. Each SEA was provided
with a target sampling plan and instructions on administration of the survey. Participation recruitment
strategies and modes of administration of the survey differed by state. Mode and language of
administration of the survey were recorded so that it would be possible to examine whether these
variables were associated with variance in item calibration. Survey responses were obtained from a total
of approximately 1750 families receiving Early Intervention services and 2600 parents of children
receiving special education services.
Data analysis from the National Validation Study confirmed the high reliability and validity of the measurement scales. Summary information on these analyses is included in the NCSEAM PowerPoint presentation from the August 2005 OSEP Summer Institute. Output from these analyses, as well as additional technical information, is also available on the NCSEAM website.
APPENDIX C
NCSEAM PARENT SURVEY
NATIONAL ITEM VALIDATION STUDY
TECHNICAL INFORMATION
The research presented here marks an auspicious start to an ambitious new direction in work of this
kind. Far more questions have been raised than answers have been provided. Much remains to be done,
especially in three particular areas: 1) establishing when and where and for whom particular items and
sets of items are consistently more or less agreeable for one group of parents/families than another (DIF
analysis), 2) monitoring the invariance properties of the scales across samples and over time, and
3) applying the information provided by the measures in quality improvement efforts. If you have
questions or comments, or if you have data or analyses you'd like to share, please let us know at
[email protected] Thanks.
I. General information
A. The survey forms with all of the items used in the National Item Validation Study are
available for viewing here in PDF format:
1. Part B
a) Form 1
b) Form 2
2. Part C
a) Form 1
b) Form 2
B. The surveys were designed to produce data that would conform with the principles of
Fundamental Measurement Theory, as this is implemented in Rasch’s models for unidimensional measurement.
1. Go to http://www.rasch.org/rmt for the full text of Rasch Measurement Transactions.
2. Go to http://www.rasch.org for more information on software, journals, books,
consultants, training seminars, etc.
3. For survey design recommendations, see Fisher (2000) and Linacre (1993).
4. The NCSEAM surveys, item banks, analysis control variables, item anchor values,
output files, and statistical comparisons are provided in the expectation that others
interested in employing or improving these tools will find everything they need to do the
job. Please address questions about the survey data analyses and measurement scales to
[email protected].
C. All WINSTEPS control and output files are provided in MS Word format for viewing
convenience, though the sheer volume of output produced prohibits attention to the details of
perfect pagination.
1. To use the control files in WINSTEPS analyses, they will have to be saved in text-only format.
2. All of the output files include variable maps, summary statistics, individual item
statistics, and principal components factor analyses of the items’ standardized residuals.
3. See the WINSTEPS User Manual for more information on the control file variables,
and go to the WINSTEPS.com web site for free software, control files set up to run
example data analyses from readily available books, etc.
4. For links to other Rasch analysis programs (RUMM, CONQUEST, and others) that
ought to be capable of reproducing the WINSTEPS analyses, go to
http://www.winsteps.com/rasch.htm
II. Part B
A. Data files
1. SPSS format
a) Original data as scanned
b) Responses from parents of children ages 5 and over only
c) Ages 3-5 to be addressed in forthcoming 619 study
2. WINSTEPS (ASCII DOS text) format
a) All demographics
b) All rating scale items
B. The Sample
1. Child age groups represented vs. served under IDEA
2. Disability classifications represented vs. served under IDEA
3. Ethnic groups represented vs. served under IDEA
C. The Scales
1. Partnership Efforts (SPP indicator)
a) Final scale as standardized
(1) WINSTEPS control file: BEff3cSTD.con.doc
(a) Descriptions of the meaning of the control variables are given
in this file only
(b) See the WINSTEPS User Manual for more information
(2) WINSTEPS output file: BEff3cSTD.out.doc
b) Validity and invariance studies (not all in final standardized metric)
(1) Original 6-category data analysis
(a) WINSTEPS control file: BEff6c.con.doc
(b) WINSTEPS output file: BEff6c.out.doc
(2) Optimized 3-category data analysis
(a) WINSTEPS control file: BEff3c.con.doc
(b) WINSTEPS output file: BEff3c.out.doc
(3) Sub-sample scaling contrasts
(a) Item calibrations
(i) Web vs. paper administration
(ii) By survey form
(a) Form 1 items vs common items
(b) Form 2 items vs common items
(iii) By language
(iv) Self-administered or read to
(v) Ethnicity
(vi) State of residence
(a) GA vs NH
(b) others
(vii) By age of child
(a) Age 5 vs ages 6-10 (r=0.95)
(b) Age 5 vs ages 11-13 (r=0.92)
(c) Age 5 vs ages 14-21 (r=0.94)
(d) Ages 6-10 vs ages 11-13 (r=0.99)
(e) Ages 6-10 vs ages 14-21 (r=0.99)
(f) Ages 11-13 vs ages 14-21 (r=0.99)
(g) Average Ratings by Age Groups
(b) Parent measures
(i) By form
(ii) Unique vs common
(iii) Random
(iv) Agreeable vs. disagreeable
(4) Model fit and differential functioning analyses
(a) Item calibrations
(i) Web vs. paper administration
(ii) By language
(iii) Self-administered or read to
(iv) Ethnicity
(v) State of residence
(vi) Age of child
(b) Parent measures
(i) By form
(ii) Unique vs common
(iii) Random
(iv) Agreeable vs. disagreeable
c) Scale reduction reliability and precision studies
(1) Item calibrations
(a) Strata
(b) Reproducibility
(2) Parent measures
(a) Strata
(b) Reproducibility
2. Impact on Family
a) Final scale as standardized
(1) WINSTEPS control file: BImpF3cSTD.con.doc
(2) WINSTEPS output file: BImpF3cSTD.out.doc
b) Validity and invariance studies (not shown in final standardized metric)
(1) Original 6-category data analysis
(a) WINSTEPS control file: BImpF6c.con.doc
(b) WINSTEPS output file: BImpF6c.out.doc
(2) Optimized 3-category data analysis
(a) WINSTEPS control file: BImpF3c.con.doc
(b) WINSTEPS output file: BImpF3c.out.doc
(3) Sub-sample scaling contrasts
(a) Item calibrations
(i) Web vs. paper administration
(ii) By survey form
(a) Form 1 items vs common items
(b) Form 2 items vs common items
(iii) By language
(iv) Self-administered or read to
(v) Ethnicity
(vi) State of residence
(a) GA vs NH
(b) others
(vii) By age of child
(b) Parent measures
(i) By form
(ii) Unique vs common
(iii) Random
(iv) Agreeable vs. disagreeable
c) Scale reduction reliability and precision studies
(1) Item calibrations
(a) Strata
(b) Reproducibility
(2) Parent measures
(a) Strata
(b) Reproducibility
3. Quality of Services
a) Final scale as standardized
(1) WINSTEPS control file: BQua3cSTD.con.doc
(2) WINSTEPS output file: BQua3cSTD.out.doc
b) Validity and invariance studies (not shown in final standardized metric)
(1) Original 6-category data analysis
(a) WINSTEPS control file: BQua6c.con.doc
(b) WINSTEPS output file: BQua6c.out.doc
(2) Optimized 3-category data analysis
(a) WINSTEPS control file: BQua3c.con.doc
(b) WINSTEPS output file: BQua3c.out.doc
(3) Subsample scaling contrasts
(a) Item calibrations
(i) Web vs. paper administration
(ii) By survey form
(a) Form 1 items vs common items
(b) Form 2 items vs common items
(iii) By language
(iv) Self-administered or read to
(v) Ethnicity
(vi) State of residence
(vii) By age of child
(b) Parent measures
(i) By form
(ii) Unique vs common
(iii) Random
(iv) Agreeable vs. disagreeable
c) Scale reduction reliability and precision studies
(1) Item calibrations
(a) Strata
(b) Reproducibility
(2) Parent measures
(a) Strata
(b) Reproducibility
4. Parent Participation
a) Final scale as standardized
(1) WINSTEPS control file: BPar3c2STD.con.doc
(2) WINSTEPS output file: BPar3c2STD.out.doc
b) Validity and invariance studies (not shown in final standardized metric)
(1) Original 6-category data analysis
(a) WINSTEPS control file: BQua6c.con.doc
(b) WINSTEPS output file: BQua6c.out.doc
(2) Optimized 3-category data analysis
(a) WINSTEPS control file: BPar3c2.con.doc
(b) WINSTEPS output file: BPar3c2.out.doc
(3) Sub-sample scaling contrasts
(a) Item calibrations
(i) Web vs. paper administration
(ii) Form 1 vs Form 2 common items
(iii) By language
(iv) Self-administered or read to
(v) Ethnicity
(vi) State of residence
(vii) By age of child
(b) Parent measures
(i) Items unique to form vs common items
(ii) Random items
(iii) Agreeable vs. disagreeable items
c) Scale reduction reliability and precision studies
(1) Item calibrations
(a) Strata
(b) Reproducibility
(2) Parent measures
(a) Strata
(b) Reproducibility
d) Measures by groups
(1) Ethnicity
(2) Language
(3) Child’s age
(4) Survey form 1 vs. form 2
(5) Completed independently or read to
III. Part C
A. Data files
1. SPSS format
a) Original data as scanned
b) Responses from parents of children ages birth to three only
2. WINSTEPS (ASCII DOS text) format
a) All demographics
b) All rating scale items
B. The Sample
1. Child age groups represented vs. served under IDEA
2. Ethnic groups represented vs. served under IDEA
C. The Scales
1. Impact on Family (SPP indicator)
a) Final scale as standardized
(1) WINSTEPS control file: CImpF4cSTD.con.doc
(2) WINSTEPS output file: CImpF4cSTD.out.doc
b) Validity and invariance studies (not shown in final standardized metric)
(1) Original 6-category data analysis
(a) WINSTEPS control file: CImpF6c.con.doc
(b) WINSTEPS output file: CImpF6c.out.doc
(2) Optimized 4-category data analysis
(a) WINSTEPS control file: CImpF4c.con.doc
(b) WINSTEPS output file: CImpF4c.out.doc
(3) Sub-sample scaling contrasts
(a) Item calibrations
(i) Web vs. paper administration
(ii) Form 1 vs Form 2 common items
(iii) By language
(iv) Self-administered or read to
(v) Ethnicity
(vi) State of residence
(vii) By age of child
(b) Parent measures
(i) Items unique to form vs common items
(ii) Random items
(iii) Agreeable vs. disagreeable items
c) Scale reduction reliability and precision studies
(1) Item calibrations
(a) Strata
(b) Reproducibility
(2) Parent measures
(a) Strata
(b) Reproducibility
2. Family-Centered Services
a) Final scale as standardized
(1) WINSTEPS control file: CEffQua3c2STD.con.doc
(2) WINSTEPS output file: CEffQua3c2STD.out.doc
b) Validity and invariance studies (not shown in final standardized metric)
(1) Original 6-category data analysis
(a) WINSTEPS control file: CEffQua6c.con.doc
(b) WINSTEPS output file: CEffQua6c.out.doc
(2) Optimized 3-category data analysis
(a) WINSTEPS control file: CEffQua3c2.con.doc
(b) WINSTEPS output file: CEffQua3c2.out.doc
(3) Sub-sample scaling contrasts
(a) Item calibrations
(i) Web vs. paper administration
(ii) Form 1 vs Form 2 common items
(iii) By language
(iv) Self-administered or read to
(v) Ethnicity
(a) Asian vs Blacks
(b) American Indians vs Hispanics
(c) Blacks vs Whites
(vi) State of residence
(vii) By age of child
(a) Birth to 1 vs 1 to 2
(b) Birth to 1 vs 2 to 3
(c) 1 to 2 vs 2 to 3
(b) Parent measures
(i) Items unique to form vs common items
(a) Form 1 vs common items
(b) Form 2 vs common items
(ii) Random items
(iii) Agreeable vs. disagreeable items
c) Scale reduction reliability and precision studies
(1) Item calibrations
(a) Strata
(b) Reproducibility
(2) Parent measures
(a) Strata
(b) Reproducibility
d) Measures by groups
(1) Ethnicity
(2) Language
(3) Child’s age
(a) At referral
(b) At time survey completed
(4) Web vs paper administration mode
(5) Survey form 1 vs. form 2
(6) Completed independently or read to
D. Setting the SPP/APR standards: The July 2005 NCSEAM stakeholders meeting
1. See Stone (2001) for concept as applied in education
2. See here for a summary of the NCSEAM standard setting process
IV. Statistical Associations
A. Part B vs Part C Comparison of Select Impact on Family item calibrations
B. Part B Measures by Partnership Efforts Ranges
1. Quality of Services
2. Impact on Family
3. Parent Participation
C. Part C Quality of Service Measures by Impact on Family Ranges
D. Correlations
1. Part B correlations
2. Part C correlations
E. Regression models
F. Discriminant function analysis
G. Structural Equation Models
V. Bibliography on Measurement Theory & Practice
Andersen, E. B. (1973). Conditional inference for multiple choice questionnaires. British Journal of Mathematical
and Statistical Psychology, 26, 31-44.
Andersen, E. B. (1977). The logistic model for m answer categories. In W. E. Kempf & B. H. Repp (Eds.),
Mathematical models for social psychology. Vienna, Austria: Hans Huber.
Andersen, E. B. (1977). Sufficient statistics and latent trait models. Psychometrika, 42(1), 69-81.
Andrich, D. (1978). A binomial latent trait model for the study of Likert-style attitude questionnaires. British Journal
of Mathematical and Statistical Psychology, 31, 84-98.
Andrich, D. (1978). A rating formulation for ordered response categories. Psychometrika, 43, 357-374.
Andrich, D. (1978). Relationships between the Thurstone and Rasch approaches to item scaling. Applied
Psychological Measurement, 2, 449-460.
Andrich, D. (1979). A model for contingency tables having an ordered response classification. Biometrics, 35, 403-415.
Andrich, D. (1988). Sage University Paper Series on Quantitative Applications in the Social Sciences. Vol. series no.
07-068: Rasch models for measurement. Beverly Hills, California: Sage Publications.
Andrich, D. (1989). Constructing fundamental measurements in social psychology. In J. A. Keats, R. Taft, R. A.
Heath & S. H. Lovibond (Eds.), Mathematical and theoretical systems. Proceedings of the 24th International
Congress of Psychology of the International Union of Psychological Science, Vol. 4 (pp. 17-26).
Amsterdam, Netherlands: North-Holland.
Andrich, D. (1989). Distinctions between assumptions and requirements in measurement in the social sciences. In J.
A. Keats, R. Taft, R. A. Heath & S. H. Lovibond (Eds.), Mathematical and Theoretical Systems: Proceedings
of the 24th International Congress of Psychology of the International Union of Psychological Science, Vol. 4
(pp. 7-16). North-Holland: Elsevier Science Publishers.
Andrich, D. (2002). Understanding resistance to the data-model relationship in Rasch's paradigm: A reflection for the
next generation. Journal of Applied Measurement, 3(3), 325-59.
Andrich, D. (2004, January). Controversy and the Rasch model: A characteristic of incompatible paradigms? Medical
Care, 42(1), I-7--I-16.
Andrich, D., & Douglas, G. A. (Eds.). (1982). Rasch models for measurement in educational and psychological
research [Special issue]. Education Research and Perspectives, 9(1), 5-118.
Andrich, D., & Styles, I. M. (1998, Dec). The structural relationship between attitude and behavior statements from
the unfolding perspective. Psychological Methods, 3(4), 454-469.
Atchison, B. T., Fisher, A. G., & Bryze, K. (1998, Nov-Dec). Rater reliability and internal scale and person response
validity of the School Assessment of Motor and Process Skills. American Journal of Occupational Therapy,
52(10), 843-850.
Bode, R. K., Lai, J.-S., Cella, D., & Heinemann, A. W. (2003, April). Issues in the development of an item bank.
Archives of Physical Medicine and Rehabilitation, 84(4 (Part 2)), S52-S60.
Bond, T., & Fox, C. (2001). Applying the Rasch model: Fundamental measurement in the human sciences. Mahwah,
New Jersey: Lawrence Erlbaum Associates [http://homes.jcu.edu.au/~edtgb/book/].
Chen, C. C., Heinemann, A., Bode, R., Granger, C., & Mallinson, T. (2004). Impact of pediatric rehabilitation
services on children’s functional outcomes. American Journal of Occupational Therapy, 58, 44-53.
Fischer, G. H., & Molenaar, I. (1995). Rasch models: Foundations, recent developments, and applications. New
York, New York: Springer-Verlag.
Fisher, A. G., Bryze, K. A., & Atchison, B. T. (2000). Naturalistic assessment of functional performance in school
settings: Reliability and validity of the School AMPS scales. Journal of Outcome Measurement, 4(1), 491-512.
Fisher, A. G., Bryze, K. A., Granger, C. V., Haley, S. M., Hamilton, B. B., Heinemann, A. W., Puderbaugh, J. K.,
Linacre, J. M., Ludlow, L. H., McCabe, M. A., & Wright, B. D. (1994). Applications of conjoint
measurement to the development of functional assessments. International Journal of Educational Research,
21(6), 579-593.
Fisher, W. P., Jr. (1993, April). Measurement-related problems in functional assessment. American Journal of
Occupational Therapy, 47(4), 331-338.
Fisher, W. P., Jr. (1994). The Rasch debate: Validity and revolution in educational measurement. In M. Wilson (Ed.),
Objective measurement: Theory into practice. Vol. II (pp. 36-72). Norwood, New Jersey: Ablex Publishing
Corporation.
Fisher, W. P., Jr. (1997). Physical disability construct convergence across instruments: Towards a universal metric.
Journal of Outcome Measurement, 1(2), 87-113.
Fisher, W. P., Jr. (1997, June). What scale-free measurement means to health outcomes research. Physical Medicine &
Rehabilitation State of the Art Reviews, 11(2), 357-373.
Fisher, W. P., Jr. (1998). A research program for accountable and patient-centered health status measures. Journal of
Outcome Measurement, 2(3), 222-239.
Fisher, W. P., Jr. (1999). Foundations for health status metrology: The stability of MOS SF-36 PF-10 calibrations
across samples. Journal of the Louisiana State Medical Society, 151(11), 566-578.
Fisher, W. P., Jr. (2000). Objectivity in psychosocial measurement: What, why, how. Journal of Outcome
Measurement, 4(2), 527-563.
Fisher, W. P., Jr. (2001). Introduction to measurement in rehabilitation. In R. W. Massof & L. Lidoff (Eds.), Issues in
Low Vision Rehabilitation Service Delivery, Policy and Funding (pp. 159-83). New York, New York: AFB
Press.
Fisher, W. P., Jr. (2004, October). Meaning and method in the social sciences. Human Studies: A Journal for
Philosophy and the Social Sciences, 27(4), 429-54.
Fisher, W. P., Jr. (2004, Autumn). Ordinal vs. ratio revisited again. Rasch Measurement Transactions, 18(2), 980-2
[http://www.rasch.org/rmt/rmt182.pdf].
Fisher, W. P., Jr., Eubanks, R. L., & Marier, R. L. (1997). Equating the MOS SF36 and the LSU HSI physical
functioning scales. Journal of Outcome Measurement, 1(4), 329-362.
Fisher, W. P., Jr., & Fisher, A. G. (1993). Applications of Rasch analysis to studies in occupational therapy. Physical
Medicine and Rehabilitation Clinics of North America, 4(3), 551-569 [C. V. Granger & G. E. Gresham (Eds.), New developments in functional assessment].
Fisher, W. P., Jr., Harvey, R. F., & Kilgore, K. M. (1995). New developments in functional assessment: Probabilistic
models for gold standards. NeuroRehabilitation, 5(1), 3-25.
Fisher, W. P., Jr., Harvey, R. F., Taylor, P., Kilgore, K. M., & Kelly, C. K. (1995, February). Rehabits: A common
language of functional assessment. Archives of Physical Medicine and Rehabilitation, 76(2), 113-122.
Fisher, W. P., Jr., & Karabatsos, G. (2005). Fundamental measurement for the MEPS and CAHPS quality of care
scales. In N. Bezruczko (Ed.), Rasch measurement in the health sciences (pp. 373-410). Maple Grove, MN:
JAM Press.
Fisher, W. P., Jr., & Wright, B. D. (Eds.). (1994). Applications of probabilistic conjoint measurement. International
Journal of Educational Research, 21(6), 557-664.
Heinemann, A. W., Linacre, J. M., Wright, B. D., Hamilton, B. B., & Granger, C. V. (1993). Relationships between
impairment and physical disability as measured by the Functional Independence Measure. Archives of
Physical Medicine and Rehabilitation, 74(6), 566-573.
Linacre, J. M. (1993). Rasch-based generalizability theory. Rasch Measurement Transactions, 7(1), 283-284;
[http://www.rasch.org/rmt/rmt71h.htm].
Linacre, J. M. (1998). Detecting multidimensionality: Which residual data-type works best? Journal of Outcome
Measurement, 2(3), 266-83.
Linacre, J. M. (1999). Investigating rating scale category utility. Journal of Outcome Measurement, 3(2), 103-22.
Linacre, J. M. (1999). Understanding Rasch measurement: Estimation methods for Rasch measures. Journal of
Outcome Measurement, 3(4), 382-405.
Linacre, J. M. (2002). Understanding Rasch measurement: Optimizing rating scale category effectiveness. Journal of
Applied Measurement, 3(1), 85-106.
Massof, R. W. (2002, August). The measurement of vision disability. Optometry and Vision Science, 79(8), 516-52.
Michell, J. (1986). Measurement scales and statistics: A clash of paradigms. Psychological Bulletin, 100, 398-407.
Michell, J. (1990). An introduction to the logic of psychological measurement. Hillsdale, New Jersey: Lawrence
Erlbaum Associates.
Michell, J. (1997). Quantitative science and the definition of measurement in psychology. British Journal of
Psychology, 88, 355-383.
Michell, J. (1999). Measurement in psychology: A critical history of a methodological concept. Cambridge:
Cambridge University Press.
Michell, J. (2000, October). Normal science, pathological science and psychometrics. Theory & Psychology, 10(5),
639-667.
Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests (reprint, with Foreword and
Afterword by B. D. Wright, Chicago: University of Chicago Press, 1980). Copenhagen, Denmark: Danmarks
Paedogogiske Institut.
Rasch, G. (1961). On general laws and the meaning of measurement in psychology. In Proceedings of the fourth
Berkeley symposium on mathematical statistics and probability (pp. 321-333). Berkeley, California:
University of California Press.
Rasch, G. (1966). An individualistic approach to item analysis. In P. F. Lazarsfeld & N. W. Henry (Eds.), Readings in
mathematical social science (pp. 89-108). Chicago, Illinois: Science Research Associates.
Rasch, G. (1966). An item analysis which takes individual differences into account. British Journal of Mathematical
and Statistical Psychology, 19, 49-57.
Rasch, G. (1977). On specific objectivity: An attempt at formalizing the request for generality and validity of
scientific statements. Danish Yearbook of Philosophy, 14, 58-94.
Smith, R. M. (1996). A comparison of methods for determining dimensionality in Rasch measurement. Structural
Equation Modeling, 3(1), 25-40.
Smith, R. M. (Ed.). (1997, June). Physical Medicine & Rehabilitation State of the Art Reviews: Outcome
Measurement (Philadelphia, PA), 11(2), 261-428.
Smith, R. M. (2000). Fit analysis in latent trait measurement models. Journal of Applied Measurement, 1(2), 199-218.
Smith, R. M., Schumacker, R. E., & Bush, M. J. (1998). Using item mean squares to evaluate fit to the Rasch model.
Journal of Outcome Measurement, 2(1), 66-78.
Smith, R. M., & Taylor, P. (2004). Equating rehabilitation outcome scales: Developing common metrics. Journal of
Applied Measurement, 5(3), 229-42.
Stone, G. (2001). Understanding Rasch measurement: Objective standard setting (or truth in advertising). Journal of
Applied Measurement, 2(2), 187-201.
Umar, J. (1999). Item banking. In G. N. Masters & J. P. Keeves (Eds.), Advances in measurement in educational
research and assessment (pp. 207-19). New York: Pergamon.
Wilson, M. (1999). Measurement of developmental levels. In G. N. Masters & J. P. Keeves (Eds.), Advances in
measurement in educational research and assessment (pp. 151-63). New York: Pergamon.
Wright, B. D. (1977). Solving measurement problems with the Rasch model. Journal of Educational Measurement,
14(2), 97-116.
Wright, B. D. (1984). Despair and hope for educational measurement. Contemporary Education Review, 3(1), 281-
288.
Wright, B. D. (1985). Additivity in psychological measurement. In E. Roskam (Ed.), Measurement and personality
assessment. North Holland: Elsevier Science Ltd.
Wright, B. D. (1996, Winter). Reliability and separation. Rasch Measurement Transactions, 9(4), 472
[http://www.rasch.org/rmt/rmt94n.htm].
Wright, B. D. (1997, June). Fundamental measurement for outcome evaluation. Physical Medicine & Rehabilitation
State of the Art Reviews, 11(2), 261-88.
Wright, B. D. (1997, Winter). A history of social science measurement. Educational Measurement: Issues and
Practice, 16(4), 33-45, 52 [http://209.41.24.153/memo62.htm].
Wright, B. D. (1999). Fundamental measurement for psychology. In S. E. Embretson & S. L. Hershberger (Eds.), The
new rules of measurement: What every educator and psychologist should know (pp. 65-104
[http://www.rasch.org/memo64.htm]). Hillsdale, New Jersey: Lawrence Erlbaum Associates.
Wright, B. D. (1999). Rasch measurement models. In G. N. Masters & J. P. Keeves (Eds.), Advances in measurement
in educational research and assessment (pp. 85-97). New York: Pergamon.
Wright, B. D., & Bell, S. R. (1984). Item banks: What, why, how. Journal of Educational Measurement, 21(4), 331-345.
Wright, B. D., & Linacre, J. M. (1989). Observations are always ordinal; measurements, however, must be interval.
Archives of Physical Medicine and Rehabilitation, 70(12), 857-867 [http://www.rasch.org/memo44.htm].
Wright, B. D., Linacre, J. M., & Heinemann, A. W. (1993). Measuring functional status in rehabilitation. Physical
Medicine and Rehabilitation Clinics of North America, 4(3), 475-491 [C. V. Granger & G. E. Gresham (Eds.), New developments in functional assessment].
Wright, B. D., & Lunz, M. (1987). Standards combining expert judgment, mastery level and statistical confidence
(Tech. Rep. No. 37). Chicago, Illinois: MESA Psychometric Laboratory, Department of Education, University
of Chicago.
Wright, B. D., & Masters, G. N. (1982). Rating scale analysis: Rasch measurement. Chicago, Illinois: MESA Press.
Wright, B. D., & Mok, M. (2000). Understanding Rasch measurement: Rasch models overview. Journal of Applied
Measurement, 1(1), 83-106.
Wright, B. D., & Stone, M. H. (1979). Best test design: Rasch measurement. Chicago, Illinois: MESA Press.
Wright, B. D., Stone, M., & Enos, M. (2000). The evolution of meaning in practice. Rasch Measurement
Transactions, 14(1), 736 [http://www.rasch.org/rmt/rmt141g.htm].
APPENDIX D
Frequently Asked Questions
1. Is there any cost to states for using the NCSEAM survey?
a. There is no charge to states to access the NCSEAM items. States may copy and use the
NCSEAM-designed form [Part B or Part C] at no cost.
b. NCSEAM has covered the cost of design and set-up for the early childhood and school
age survey forms available from Scantron, Inc. States may select other vendors to
produce similar forms.
c. Information on vendors that can provide services related to printing, mailing, scanning,
and/or data analysis for all the NCSEAM instruments is available on the NCSEAM
website survey page.
d. Costs related to administration of the survey, customization of forms, additional data
analyses, etc. are the responsibility of states.
2. Can we remove items from the survey that we don’t want to include?
a. Yes, but only if they are replaced with other items from the NCSEAM item bank that
have equivalent calibrations. This is necessary to maintain measurement reliability.
b. This option can be explored in consultation with your own technical assistants or with
consultants recommended by NCSEAM.
3. Can we add items to the survey?
a. Yes. However, until responses to the new item are analyzed in the context of the entire
set of items, it is uncertain what effect the new items will have on scale reliability.
b. This option can be explored in consultation with your own technical assistants, or with
consultants recommended by NCSEAM.
4. Can we adjust the wording of items?
a. Yes. However, until responses to the newly-worded item are analyzed in the context of
the entire set of items, it is uncertain whether the new wording changes the reliability or
validity of the measure.
b. This option can be explored in consultation with your own technical assistants, or with
consultants recommended by NCSEAM.
5. Does changing items affect reliability?
a. Qualitatively, reliability can be affected by the extent to which items are clearly worded
and consistently represent a particular amount of the thing being measured, e.g., schools’
facilitation of parent involvement or family outcomes resulting from early intervention.
Use of items that are ambiguously phrased, that ask multiple questions in association
with only one response opportunity, or that vary inconsistently in their agreeability across
respondents could have a negative effect on reliability.
b. Quantitatively, reliability is reduced as the number of items (or response choices) is
reduced. Fewer items result in a reduced capacity to distinguish differences among the
respondents. The end result is higher error and less precision in the percent reported on
the SPP/APR parent/family indicators (illustrated in the sketch following this question).
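As a rough illustration of the quantitative point in question 5(b), the sketch below uses the classical Spearman-Brown prophecy formula, not an NCSEAM-specific computation, to show how a hypothetical reliability of .95 for a 25-item scale would fall as items are dropped. The starting reliability and item counts are assumptions made only for illustration.

# Illustrative only: how reliability falls as items are removed, using the
# classical Spearman-Brown prophecy formula (the values are hypothetical).
def spearman_brown(original_reliability, original_items, new_items):
    """Project the reliability of a shortened (or lengthened) scale."""
    k = new_items / original_items
    return k * original_reliability / (1 + (k - 1) * original_reliability)

full_reliability = 0.95  # hypothetical reliability of a 25-item scale
for n in (25, 20, 15, 10, 5):
    print(n, "items ->", round(spearman_brown(full_reliability, 25, n), 2))
# Fewer items mean lower reliability: larger measurement error and less
# precision in the percent reported for the SPP/APR indicators.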
6. If we change items, will results for our state be comparable to those of states using other items?
a. Yes. Provided that items are modified, removed, or substituted following appropriate
measurement requirements, the comparability of different versions of the survey will be
preserved.
7. Why is there no N/A or I Do Not Know option?
a. The first reason is that the instructions at the beginning of the survey tell respondents to
skip any items that they feel do not apply to them or to their child.
b. The second reason is that including these kinds of options can significantly decrease the
number of items that people give a substantive response to. When a response to an item
requires some deliberation, some respondents may tend to choose a N/A or Don’t Know
option rather than think through the other response choices and make a decision.
c. The extra effort that it takes respondents to decide to skip a question is small enough to
maintain data quality, but high enough to maintain data quantity.
8. How are measures from the different NCSEAM scales related to one another?
a. For Part C:
i. Based on data from the National Item Validation Study, 64% of the variance in
the Impact on Family measures is explained by the Family-Centered Services
measures;
ii. when the Impact on Family measures are divided into the five statistically distinct
ranges the scale can reliably distinguish, 52% of the Impact on Family measures
are accurately predicted by the Family-Centered Services measures;
iii. the Family-Centered Services measures predict 92% of the Impact on Family
measures to within one range plus or minus the actual range.
b. For Part B:
i. Based on data from the National Item Validation Study, when measures of
Schools’ Partnership Efforts are examined in relation to each of the other Part B
scales,
1. Schools’ Partnership Efforts explains 13% of the variation in the Parent
Participation measures;
2. Schools’ Partnership Efforts explains 89% of the variation in the Quality
of Services measures;
3. Schools’ Partnership Efforts explains 63% of the variation in the Impact
on Family measures;
ii. When the Schools’ Partnership Efforts measures are divided into the seven
statistically distinct groups that the scale can reliably distinguish,
1. 62% of the measures fall into the range predicted by the Quality of
Services measures;
2. 33%, by the Parent Participation measures;
3. 46%, by the Impact on Family measures;
4. 64%, by all three measures combined; and
iii. The three measures combined predict 96% of the Partnership Efforts measures to
within one range plus or minus the observed category.
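The statistics cited in question 8 (percent of variance explained, and prediction to within one range) can be made concrete with a small sketch. The paired measures below are hypothetical, not NCSEAM data; variance explained is computed as the squared Pearson correlation, and the "within one range" check simply compares the band of the predictor measure with the band of the outcome measure, which simplifies the actual NCSEAM analysis.

# Illustrative only: hypothetical paired measures for the same respondents on
# two scales (not NCSEAM data). Requires Python 3.10+ for statistics.correlation.
import statistics

partnership = [320, 410, 455, 500, 545, 600, 620, 680]  # predictor measures
impact = [300, 430, 440, 520, 560, 580, 640, 660]       # outcome measures

# Percent of variance explained = squared Pearson correlation.
r = statistics.correlation(partnership, impact)
print("variance explained:", round(r ** 2 * 100), "%")

# "Predicted to within one range": assign each measure to a hypothetical band,
# then count how often the two bands differ by at most one.
def band(measure, cut_points=(350, 450, 550, 650)):
    return sum(measure >= c for c in cut_points)

within_one = sum(abs(band(p) - band(i)) <= 1 for p, i in zip(partnership, impact))
print("within one range:", round(within_one / len(impact) * 100), "%")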
9. What is the value of using all of the scales rather than just one?
a. The relationships among the scales (School and Program Efforts, Parent Participation,
and Impact on the Family) can guide program improvement efforts. For example, the
extent to which parents report that preschool special education services resulted in
positive outcomes for their family can be related to parents’ reports of the extent to which
schools facilitated their involvement.
b. Increased efforts to facilitate parent involvement should result in greater parent
participation as well as improved outcomes both for children and families.
10. Can the NCSEAM Impact on Family scale be used to address the ECO Family Outcome
statements?
a. Yes. A measure derived from the NCSEAM Impact on Family Scale can support
inferences regarding the extent to which families are achieving the outcomes specified in
the ECO Family Outcome statements. See a related document posted to this website.
11. Can states adopt a standard that is different from the one recommended by NCSEAM?
a. Yes. NCSEAM recommends that states wishing to do this implement the standard-setting
procedure as described in an accompanying document posted to this website.
12. If we adjust items, does this affect application of the standard?
a. Not necessarily. If the validity and reliability of the measures are not compromised, the
scales will be in the same metrics and the percentage values reported on the SPP/APR
will be comparable with those derived from other versions of the survey.
b. Reliability cannot be ensured if the number of items used is smaller than that
recommended by NCSEAM. The consequence of lower reliability will be less confidence
in the percent reported for the SPP/APR parent/family indicators.
c. Decreased confidence in the percent reported for the SPP/APR translates, over time, into
greater uncertainty as to whether improvement efforts are having the desired effect.
13. How were the NCSEAM items developed?
a. The NCSEAM Parent/Family Involvement Workgroup was convened in early 2002 for
the purpose of developing parent measures for use in accountability systems for early
intervention and special education, including preschool special education.
b. Sample items were drawn from existing survey instruments, research on parent
involvement, and descriptions of best practices in parent involvement and family-centered services.
c. In 2003, stakeholder workgroups were conducted in 6 states (MS, NH, CA, NM, KY,
FL).
d. About 500 suggested items were reviewed by PACER and other parent groups.
e. Data from several states’ (FL, CT, MS, MI, NY) surveys were analyzed; responses of
parents and families to similar items on different surveys consistently showed similar
degrees of agreement. Separate calibrations for items from different surveys having
similar content were highly correlated.
f. The NCSEAM National Item Validation Study was conducted between October 2004 and
February 2005 through the efforts of 8 Part C Lead Agencies, 6 SEAs, and many
cooperating parent organizations. Data analyses related to this study are available in the
Technical Manual posted on this website.
14. How were the items for the NCSEAM-designed 2005 survey chosen from the item bank?
a. Qualitatively, items were chosen on the basis of face validity and content validity, in
consultation with stakeholders and additional parent/family representatives.
b. Items were also chosen on the basis of simplicity, brevity, and the consistency of the
responses they garner.
c. Quantitatively, items calibrate to a wide range of positions on the various measurement
rulers. These positions reveal differences in the amount of agreement indicated by parents.
Items were then also chosen so as to span the entire range of measurement, which is a
significant factor in maintaining measurement reliability (see the sketch below).
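To make the last point in question 14 concrete, the sketch below takes hypothetical item calibrations in logits and checks how fully a selected subset spans the item bank's range and where the largest gap between adjacent selected items falls. The calibration values and the gap check are illustrative assumptions, not NCSEAM item-bank values or selection rules.

# Illustrative only: hypothetical item calibrations (logits) from an item bank.
bank_calibrations = [-2.1, -1.6, -1.2, -0.8, -0.4, 0.0, 0.3, 0.7, 1.1, 1.5, 2.0]

# Calibrations of the items selected for a hypothetical short form.
selected = sorted([-2.1, -1.2, -0.4, 0.3, 1.1, 2.0])

# How much of the bank's range the selection covers, and the largest gap
# between adjacent selected items.
coverage = (selected[-1] - selected[0]) / (max(bank_calibrations) - min(bank_calibrations))
largest_gap = max(b - a for a, b in zip(selected, selected[1:]))

print("range covered:", round(coverage * 100), "% of the bank's range")
print("largest gap between adjacent selected items:", round(largest_gap, 2), "logits")
# Respondents whose measures fall inside a large gap are targeted by few items,
# which reduces measurement precision in that part of the scale.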
15. What are some reasons for adopting the NCSEAM surveys as a state’s measurement tool for the
SPP/APR indicator(s)?
a. The NCSEAM surveys are scientifically-based, valid and reliable.
b. The NCSEAM measurement system consists of items suggested by parents and families
that have been validated by data provided by parents and families.
c. The NCSEAM-recommended standards were set by a national stakeholder group.
d. The NCSEAM scales provide a map for program improvement.
e. Measures on the different NCSEAM scales reveal important associations between
improvement in services and improvement in outcomes for children and families.
16. What is the process for analysis of the data?
a. States may use their own data analysts or contract with a vendor listed on the NCSEAM
website or with a measurement consultant or firm of their choice.
b. Technical assistance for conducting the appropriate analyses is available in the
NCSEAM Technical Manual, which includes:
i. WINSTEPS measurement analysis control files set up to be used with the
NCSEAM-designed surveys
ii. WINSTEPS item anchor value files
iii. WINSTEPS data and output files from the item validation study
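For orientation only, a WINSTEPS control file of the kind mentioned in question 16 generally looks something like the sketch below. Every setting shown (column positions, number of items, response codes, and the anchor file name) is a made-up placeholder; states should use the control files and item anchor value files that NCSEAM provides rather than this sketch.

; Illustrative WINSTEPS control file sketch -- all settings here are hypothetical
TITLE = "Part B parent survey (illustrative)"
NAME1 = 1            ; column where the respondent identifier begins
ITEM1 = 11           ; column of the response to the first item
NI = 25              ; number of items
CODES = 123456       ; valid response codes for the agreement scale
IAFILE = anchors.txt ; item anchor values, keeping measures on the NCSEAM metric
&END
Item 1 label
Item 2 label
(one label line per item)
END NAMES
(person response records follow)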
17. When does baseline data have to be reported to OSEP?
a. February 2012.
APPENDIX E
Standard Setting for the Use of the NCSEAM Measures to Address the SPP/APR
Parent/Family Indicators
Rationale
Rigorous measurement instruments yield consistent measures reportable in a uniform metric. This fact
allows the meaning of the measures to be interpreted similarly by all users. However, the question of
whether a particular measure (score) obtained through application of the measurement tool is adequate
for a particular purpose should be determined by those who hold a stake in the consequences of using
the measurement system.
There are many examples of standard setting using well-known measurement tools. For example,
colleges often set a particular SAT score as a minimum requirement for admission. States establish
scores on their state-wide public school tests that represent different levels of proficiency.
Use of the NCSEAM instruments to address the parent/family indicators requires the determination of a
standard. For Part B, the standard is defined as the measure at which there is adequate evidence of
schools’ facilitation of parent involvement. For Part C, the standard is defined as the measure at which
there is adequate evidence of families’ achievement of specific outcomes.
In July 2005, NCSEAM convened a national group of stakeholders including parents, state Part B and
Part C directors, advocates, service providers, and researchers, to recommend standards for the Part B
and Part C indicators. Their recommendations are reported in the NCSEAM Summer Institute Plenary
Session presentation.
Procedure
The standard setting process implemented by NCSEAM was a modification of the process described in
Stone, G. E. (2001). Objective Standard Setting (or Truth in Advertising). Journal of Applied
Measurement, 2(2), 187-201.
• Convene a workgroup with broad representation of families, state and local agencies, advocates,
and other key stakeholders.
• Distribute a list of all items constituting the scale for which a standard is to be set. The items
should be in their calibration order from lowest (greatest amount of agreement) to highest
(lowest amount of agreement). The items will have been scaled such that the item calibrations
represent a combined .95 likelihood of a response across the three agree categories (agree,
strongly agree, very strongly agree).
• Reach consensus as to the highest item with which participants would require an "agree"
response in order to have confidence that the meaning of the indicator (e.g., schools are
facilitating parent involvement) is being achieved. Descriptively, "If families don't agree with
this item" – and, by implication, with all those below it – "then we could not say that we had
acceptable quality in this area."
• The measure that corresponds to the selected item – or items, when several items are in the same
statistical range – represents the standard.
• Performance on the indicator is calculated as the percent of parents or families with measures at
or above the established standard.
• To take measurement error into consideration, construct a confidence interval around the percent
based on the estimate of measurement error. We will then have 95% confidence that the true
percent of parents at or above the standard is within this interval.
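As a minimal sketch of the final two steps above, the code below computes the percent of respondents whose measures are at or above a standard and places a conventional 95% confidence interval around that percent. The measures, the standard value, and the use of a simple normal-approximation interval for a proportion are illustrative assumptions; the NCSEAM procedure also folds in measurement error, which this sketch does not model.

# Illustrative only: percent of respondents at or above a hypothetical standard,
# with a normal-approximation 95% confidence interval for that percent.
import math

measures = [420, 510, 537, 560, 480, 600, 590, 530, 575, 545]  # hypothetical parent measures
standard = 537                                                 # hypothetical standard from the workgroup

n = len(measures)
p = sum(m >= standard for m in measures) / n

# Conventional Wald interval for a proportion; a fuller treatment would also
# reflect the measurement error described in the step above.
half_width = 1.96 * math.sqrt(p * (1 - p) / n)
low, high = max(0.0, p - half_width), min(1.0, p + half_width)

print(f"{p:.0%} of respondents at or above the standard "
      f"(95% confidence interval roughly {low:.0%} to {high:.0%})")

The interval actually reported for the SPP/APR should follow the measurement-error-based approach recommended by NCSEAM.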