
Evaluating Student Need for Developmental or Remedial Courses at Postsecondary Education Institutions (Formerly titled Survey of Placement Tests and Cut-Scores in Higher Education Institutions).

Pilot Test Report

OMB: 3098-0006




National Assessment Governing Board Survey on Evaluating Student Need
for Developmental or Remedial Courses at Postsecondary Education Institutions

Pilot Test Report


January 2011

Prepared for:

National Assessment Governing Board
800 North Capitol Street, NW
Suite 825
Washington, DC 20002

Prepared by:

Liam Ristow and Basmat Parsad

Westat

1600 Research Boulevard

Rockville, Maryland 20850-3129

(301) 251-1500


Contents


Executive Summary


1 Introduction


2 Pilot Test Findings

   2.1 Findings Related to Survey Content

      2.1.1 Survey Definitions and Instructions
      2.1.2 Findings on How Reported Tests and Scores Are Used

   2.2 Findings Related to Survey Methodology

      2.2.1 Survey Administration and Identifying Respondents
      2.2.2 Response Rates
      2.2.3 Measures Taken to Maximize Response Rates
      2.2.4 Implications for Sampling Approach for Full-Scale Data Collection


3 Recommendations and Decisions

   3.1 Recommendations for Changes to the Questionnaire

      3.1.1 Clarifying the Correct Test and Score to Report
      3.1.2 Changes to Test Lists
      3.1.3 Use of Comment Boxes

   3.2 Recommendations for Survey Methodology

      3.2.1 Identify Appropriate Respondents
      3.2.2 Maximize Survey Response Rates
      3.2.3 Maximize Item Response Rates and Data Quality
      3.2.4 Time Needed to Complete the Survey
      3.2.5 Implications for Full-Scale Data Collection Costs


Appendixes

   A Summary of Commonly Reported Tests and Mean Scores
   B Detailed Tables of Estimates for Survey Data


Tables

   1 Sampled institutions and response rates, by institution type
   2 Pilot test schedule showing focus of survey activity and response rates
   3 Distribution of completed questionnaires, by status of data problems
   4 Pilot sample extrapolated to full-scale survey
   5 Expected sample sizes and levels of precision based on eligibility rates and response rates from the Pilot Study (Revised OMB Table B-8)


Executive Summary



This report describes findings from the pilot test of the National Assessment Governing Board’s (NAGB) survey on the tests and cut-scores used by postsecondary education institutions to evaluate entering students’ need for developmental or remedial courses in mathematics and reading. Westat conducted the pilot test in fall 2010 with a random sample of 120 postsecondary education institutions, achieving an overall response rate of 86 percent. The pilot test was designed to explore four specific objectives relating to questionnaire issues and potential hurdles for survey administration. Below is a summary of key findings, recommendations, and decisions for each of these objectives, as well as other issues that emerged from survey responses and interviews with selected survey respondents. A detailed discussion can be found in section 2 of this report.




Objective 1: Evaluate strategy of different questionnaires for two-year and four-year institutions

In general, the strategy of different questionnaires with different instructions for two-year and four-year institutions was useful, given that these institutional types have different educational missions and different academic program structures. While interviews with pilot test respondents found no problems with the approach of different questionnaires for two- and four-year institutions, some problems were identified with the instructions themselves. For example, some four-year institutions had difficulty with the instruction to report tests and scores used to evaluate students in “liberal arts and sciences” programs because the criteria used to evaluate students in sciences programs were more stringent than those used to evaluate students in liberal arts programs.


Based on the recommendations offered, the following decisions were made to address issues related to pilot test objective 1 as well as additional issues that emerged from testing this objective.


  • The full-scale survey will retain the approach of different questionnaires with different instructions for two-year and four-year institutions.


  • An example test score scale was added to the questionnaire to clarify the test score that respondents should report for question 2. This scale provides information on how to deal with test scores for mathematics placement into academic programs with advanced skills requirements (e.g., engineering, physics, and mathematics programs).

  • Instructions were added to the comment boxes to encourage respondents to explain how test scores were used to evaluate student need for remediation. Respondents completing the web survey will also be prompted to provide comments.


  • As in the pilot test, survey responses will be carefully reviewed to identify potential errors, and followup will be conducted with respondents as needed for the full-scale survey.



Objective 2: Evaluate survey instructions intended to address variable scoring systems

Institutions reported no problems with the instruction to report the highest score if different scores were used to either recommend or require students for remedial or developmental courses. However, institutions reported problems with the instruction to report scores based on the highest level of remedial course if different scores were used to evaluate students for different levels of remedial courses. Twelve respondents misreported mathematics scores used to place students into lower levels of remediation, although it was clear from telephone interviews with the respondents that the problem was often due to simple oversight rather than misinterpretation.


Based on the recommendations offered, the following decisions were made to address issues related to pilot test objective 2 as well as additional issues that emerged from testing this objective.


  • The order of the bulleted instructions for questions 2 and 6 was reversed so that the instruction on reporting scores for the highest-level remedial courses would be more prominent to the reader.


  • As described under objective 1, an example test score scale was added to the questionnaire to clarify the test score that respondents should report for questions 2 and 6. The scale clearly shows that the respondent should not report scores for low-level remedial courses.


  • Instructions were added to the comment boxes to encourage respondents to explain how test scores are used to evaluate student need for remediation. Respondents completing the web survey will also be prompted to provide comments.


  • As in the pilot test, survey responses will be carefully reviewed to identify potential errors, and followup will be conducted with respondents as needed for the full-scale survey. For example, respondents reporting more than one subtest will be contacted for clarifying information.



Objective 3: Assess completeness of test lists

Write-in reports of “other” placement tests not appearing on the lists of tests suggested that no additional tests need to be added to the questionnaire. However, some respondents mistakenly reported tests used to evaluate students’ preparedness for courses above entry level, while others reported tests used to place students into remedial writing courses. These findings suggested that some tests should be removed from the questionnaire.


Based on the recommendations offered, the following decisions were made to address issues related to pilot test objective 3 as well as additional issues that emerged from examining the tests reported by respondents.


  • Tests with zero or very low frequencies were dropped from the lists in questions 2 and 6. If respondents use any test that is not included in the list for the full-scale survey, they will report the test under “Other tests” or in the comment box provided. Since this is expected to be a rare occurrence, it will be relatively easy to conduct followup with these respondents to obtain additional information as needed.


  • Specialized mathematics tests (i.e., tests used to place students into mathematics courses above entry level) were removed from question 2. This will help to minimize the misreporting of tests for mathematics placement.


  • Writing tests were dropped from question 6 since these tests could lead to misreporting of writing tests that do not include a focus on reading skills. Again, if respondents use any test that is not included in the list for the full-scale survey, they will report the test under “Other tests” or in the comment box provided.


  • The lower-level mathematics tests were retained on question 2 although only a small number of institutions are likely to use these tests. Institutions that report both lower-level and upper-level tests (i.e., more than one subtest) will be contacted during followup calls to clarify the data.



Objective 4: Evaluate procedures to identify the appropriate survey respondent

Interviewers faced serious challenges in navigating the office of the president or chancellor to verify receipt of the survey and identify a respondent. For example, about half of the sampled institutions had either not received the survey package or could not verify receipt at the time of the initial phone call from Westat interviewers. Among those that confirmed receipt of the package, about one-fourth listed several places where the package could have been sent. Interviewers also experienced difficulties in making initial contact with designated respondents, primarily because they had to go through gatekeepers (e.g., secretaries or administrative assistants) to communicate with respondents. In about half the cases, interviewers could not call respondents directly. These findings suggest a higher level of effort than originally assumed for the full-scale data collection.


Based on the recommendations offered, the following decisions were made to address issues related to pilot test objective 4 and other issues from survey administration.


  • Given the relatively high frequency with which institutions may discard survey materials, we will personalize the envelopes and cover letters by including the name of an institution’s president and the NAGB logo on envelopes containing survey materials. This information will also be included in the cover letters for the full-scale survey. In addition, in response to requests from pilot test respondents for additional information about the Governing Board, we will include a page of information about the Governing Board and how this study fits into its overall research program.


  • We will implement a schedule that is similar to the pilot test in which the initial weeks of interviewer followup will be dedicated to tracking the status of the survey package and identifying appropriate respondents. As part of this process, we will use multiple methods (e.g., mail, fax, email) to provide institutional contacts with survey materials as quickly as possible. We will also include additional information in the survey materials about who the appropriate respondent might be.



Conclusion

The pilot test was useful in assessing the feasibility of the full-scale data collection. The preceding sections and the full report that follows provide information on what was learned from the pilot test, including findings that relate to the pilot test objectives and other questionnaire and methodological findings. Based on survey responses and additional information from interviews with selected respondents from the pilot test, this report describes both recommendations and the changes made (or planned) to address those issues. Changes to the questionnaire and survey materials address issues such as the strategy of different questionnaires and instructions for two-year and four-year institutions, survey instructions on how to report test scores, and the relevance of the test lists. These changes will help to maximize the quality of the data collected. In addition, implemented and planned data collection approaches will be more efficient in identifying appropriate respondents and in maximizing both response rates and data quality. Thus, we do not foresee any ongoing substantive or methodological concerns that will compromise the quality of survey data or hinder the successful implementation of the full-scale survey within the current budget.


1 Introduction

Westat conducted a pilot test of the National Assessment Governing Board’s (NAGB) survey on Evaluating Student Need for Developmental or Remedial Courses at Postsecondary Education Institutions in fall 2010. The pilot test was designed to assess potential issues related to questionnaire content as well as potential hurdles for survey administration. In earlier survey development work, some of these issues persisted as possible threats to a full data collection even after multiple revisions were made to the survey questions and instructions.


The pilot test aimed to explore four key issues that emerged from earlier survey development. They included potential challenges for the questionnaire design and content as well as issues related to survey administration. The four main objectives of the pilot test were as follows:


  • Objective 1: Evaluate strategy of different questionnaires for two-year and four-year institutions

  • Objective 2: Evaluate survey instructions intended to address variable scoring systems

  • Objective 3: Assess completeness of test lists

  • Objective 4: Evaluate procedures to identify the appropriate survey respondent

In addition to examining these four objectives, the pilot test was intended to explore any other questionnaire or methodological issues that emerged from survey responses or interviews with selected respondents.


The pilot test was conducted with a random sample of 120 postsecondary institutions, using the same sampling design, survey instruments, and administration procedures as would be needed for the full-scale survey. The pilot test survey was conducted through an initial mailing to the office of the president or chancellor of sampled institutions followed by telephone calls to identify appropriate survey respondents and encourage participation. Respondents could complete the survey online or by mail, fax, or email. Table 1 shows the number of sampled institutions and final survey response rates broken out by type of institution. The overall response rate for the pilot test was 86 percent.



Table 1. Sampled institutions and response rates, by institution type


Institution type                    Number of institutions    Number of ineligible    Response
                                    in sample                 institutions            rate

All institutions                    120                       8                       86%

Public 2-year                       30                        0                       90
Private not-for-profit 2-year       10                        1                       89
Private for-profit 2-year           10                        4                       33
Public 4-year                       30                        0                       90
Private not-for-profit 4-year       30                        1                       93
Private for-profit 4-year           10                        2                       63

Note: The response rates are based on the number of eligible institutions (i.e., sampled institutions minus ineligible institutions). For example, of the 9 private not-for-profit 2-year institutions that were eligible for the survey, 8 institutions (89 percent) completed the survey.
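
For illustration, the response rate calculation described in the note can be expressed as a short Python sketch. The counts below are taken from tables 1 and 4 of this report; the rounding to whole percents is an assumption matching the presentation in table 1.

```python
# Minimal sketch of the response rate calculation described in the note above.
# Counts are taken from tables 1 and 4 of this report.
def response_rate(sampled: int, ineligible: int, completed: int) -> float:
    """Response rates are based on eligible institutions (sampled minus ineligible)."""
    eligible = sampled - ineligible
    return 100 * completed / eligible

# Private not-for-profit 2-year institutions: 10 sampled, 1 ineligible, 8 completes.
print(round(response_rate(10, 1, 8)))   # 89 percent
# Private for-profit 2-year institutions: 10 sampled, 4 ineligible, 2 completes.
print(round(response_rate(10, 4, 2)))   # 33 percent
```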


Section 2 of this report presents findings from the pilot test, including findings related to the four pilot test objectives, other findings that emerged from survey responses or interviews with selected respondents, and findings relating to survey methodology. Section 3 outlines recommendations and decisions (including changes made or planned changes) for both the questionnaire and survey methodology for a full-scale survey administration.


2 Pilot Test Findings

This section of the report presents pilot test findings that relate to questionnaire content and methodology. The findings are discussed in relation to the four pilot test objectives outlined in the introduction as well as additional issues that emerged from survey responses and followup interviews with selected respondents.


A discussion of the commonly reported tests and mean scores is presented in Appendix A, while detailed tables are provided in Appendix B. It is important to note that the pilot test findings on tests and test scores were not intended to be used as prevalence estimates. Thus, the data in Appendixes A and B are not appropriate for public release and should not be used for any purpose other than an initial look at potential anomalous results.



2.1 Findings Related to Survey Content

This section describes findings on the first three pilot test objectives as well as other findings related to questionnaire content. Data sources for identifying findings included analysis of survey responses, including respondent entries in comment boxes, and 19 follow-up interviews with selected respondents. Respondents identified for follow-up interviews were chosen from a pool of 38 institutions that were identified as needing follow-up to deal with complex data problems. For example, institutions reporting unusual combinations of tests and scores or comments that suggested misreporting or misinterpretation were identified as problem or potential problem cases. For some problem cases, comments provided in questionnaire comment fields were sufficient to make adjustments to the data. In other cases, respondents did not provide a comment or the comment did not fully explain the potential data problem. Respondents selected for interviews were primarily from this more ambiguous group.



2.1.1 Survey Definitions and Instructions

Findings for Objective 1: Evaluate strategy of different questionnaires for two-year and four-year institutions

Two versions of the questionnaire were fielded in the pilot test: one for two-year institutions and one for four-year institutions. The questionnaires differed in only one key aspect of the instructions provided. Two-year institutions were instructed to report based on academic programs designed to transfer to a four-year institution, while four-year institutions were instructed to report based on academic programs in the liberal arts and sciences. These instructions were intended to focus respondents on their “general” student populations. Given the problems encountered in earlier survey development work, assessing the use of different sets of instructions for two- and four-year institutions was a key objective of the pilot test.


In general, the strategy of using different questionnaires with different instructions for two- and four-year institutions was useful, as these institution types have different educational missions and different academic program structures. Prior survey development work found that both two-year and four-year institutions would benefit from instructions tailored to their particular programmatic structures, and interviews with pilot test respondents found no problems with this general approach. However, some problems were identified with the instructions themselves, particularly in relation to reporting mathematics test scores. For example, one two-year institution reported that the test scores used to evaluate student need for remediation can vary across different types of transfer programs. This institution uses one set of mathematics scores for students enrolled in non-mathematics-intensive transfer programs such as history, and a higher set of mathematics scores for students enrolled in science or mathematics transfer programs. Thus, the institution was unable to report a single set of scores that applied to all students enrolled in “programs designed to transfer to a four-year institution” as specified on the questionnaire. In a similar situation, another two-year institution reported both sets of scores as valid for its different transfer programs.


Five four-year institutions appeared to have difficulty with the instruction to report scores used for “liberal arts and sciences” programs. For example, two of these institutions reported mathematics scores used only for students enrolled in technical programs such as science, mathematics, and statistics. Another four-year institution reported reading scores used only to evaluate students enrolled in the institution’s nursing program. The respondent from this institution noted that liberal arts and sciences covered a broad range of study areas, from sociology and history to biology and physics, making the instruction somewhat difficult to apply. Another respondent from a four-year institution reported that the institution uses two different institutionally developed mathematics placement tests, one for students enrolled in liberal arts programs and another used for students enrolled in sciences programs.


Findings for Objective 2: Evaluate survey instructions intended to address variable scoring systems

Another key area of focus for the pilot test was instructions intended to address institutional placement strategies involving more than one score. For example, during earlier survey development work, we found that some institutions reported one score to recommend students for remediation and another score to require students for remediation. In addition, some institutions reported using different scores to place students into different levels of remedial courses. To address these potential problems, bulleted instructions were added to the questionnaire, both of which instructed respondents to report the highest score used if the situation was applicable to their institution.
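
To make the intent of these instructions concrete, the following minimal Python sketch shows the reporting rule under the assumption that an institution stores a separate cut score for each use (recommend versus require) and for each remedial course level. The scores and labels are hypothetical and do not come from any surveyed institution.

```python
# Hypothetical cut scores for a single placement test; values and labels are illustrative only.
cut_scores = {
    ("require",   "highest-level remedial course"): 46,
    ("recommend", "highest-level remedial course"): 50,
    ("require",   "lower-level remedial course"):   30,
}

# The questionnaire instructions ask for the score tied to the highest level of
# remedial course and, where recommend/require scores differ, the higher of the two.
scores_for_highest_level = [
    score for (use, level), score in cut_scores.items()
    if level == "highest-level remedial course"
]
score_to_report = max(scores_for_highest_level)
print(score_to_report)  # 50
```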


The analysis of questionnaire comment fields and follow-up interviews suggested that the instruction on scores used to recommend students for remediation versus scores used to require students for remediation was generally not problematic. No institutions participating in follow-up interviews indicated that this instruction was confusing or provided data that were in conflict with the instruction (e.g., the reporting of the lower of two scores instead of the higher score). One respondent commented on question 6 that the institution uses different scores on the Compass exam to require or recommend students for remedial reading courses. The respondent correctly provided the higher of the two scores used. However, because follow-up was conducted with only a portion of responding institutions, it is possible that this instruction was problematic for respondents not participating in follow-up interviews.


The instruction to report the score used for the highest level of remedial course, however, appeared to be problematic for a number of respondents. Twelve respondents listed scores that were used to place students into lower-level remedial or developmental mathematics courses. One respondent reported scores used to place students into lower-level reading courses. Among the respondents who made errors and participated in interviews, it was clear that the problem was often due to simple oversight rather than misinterpretation. For example, respondents appeared to have reported scores for any test they used in placing students into remedial courses without carefully reading the instructions. However, one respondent noted in a follow-up interview that he interpreted the instruction to mean that he should report the highest score for each level of remedial course offered by the institution.


Other findings on survey definitions and instructions


Reporting scores “below which” students are placed into remediation

Questions 2 and 6 instruct respondents to report the scores “below which” students were in need of remedial or developmental mathematics or reading courses. In a review of survey comment field responses and feedback in follow-up interviews, it was determined that four institutions reported scores that did not meet this criterion. All four of these institutions reported a score “at or below which” students were identified as in need of remediation. In some cases this misreporting appeared to be due to misinterpretation of the question instructions. For example, one respondent from a four-year institution said that she found the instruction confusing and recommended referring to a “cut score” instead. Another respondent suggested asking for the score at which students are placed into remediation.
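
The distinction at issue can be illustrated with a minimal sketch; the cut score of 45 is hypothetical. Under the survey’s wording, a student scoring exactly at the reported score is not flagged for remediation, whereas under the rule these four institutions applied, that student is.

```python
CUT_SCORE = 45  # hypothetical value, for illustration only

def needs_remediation_below(score: int) -> bool:
    """Survey criterion: students scoring below the reported score need remediation."""
    return score < CUT_SCORE

def needs_remediation_at_or_below(score: int) -> bool:
    """Rule reported by four institutions: students at or below the score need remediation."""
    return score <= CUT_SCORE

# A student scoring exactly at the cut score is classified differently under the two rules.
print(needs_remediation_below(45))        # False
print(needs_remediation_at_or_below(45))  # True
```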




2.1.2 Findings on How Reported Tests and Scores Are Used

Findings for Objective 3: Assess completeness of test lists

A key goal of the pilot test was to assess the completeness and relevance of the lists of tests appearing in question 2 (mathematics) and question 6 (reading). For both questions, write-in responses of “other” tests not appearing on the list suggest that no additional tests need to be added. Of the 25 other mathematics tests reported across all institutions, none were reported by more than one institution (i.e., none were repeated). The majority of these tests were described as developed by the institution itself, but individual institutions reported using the Texas Assessment of Knowledge and Skills (TAKS), the Texas Higher Education Assessment (THEA), the Assessment and Learning in Knowledge Spaces (ALEKS), and the Test of Essential Academic Skills (TEAS). Similarly, of the six other reading tests reported in question 6, only two were not institutionally developed tests (the TAKS and THEA were reported by one institution each).


As noted above, a number of pilot test respondents provided scores for mathematics tests used to identify students for lower-level remedial courses or to place students in courses above entry level. In the former case, the tests reported were typically those meant to assess very basic skills, such as the Accuplacer Arithmetic test or the Compass Pre-Algebra placement domain. In the latter case, tests commonly reported included those designed as assessments of more specialized mathematics skills, such as the Compass Geometry and Trigonometry exams. Follow-up interviews suggested that these respondents simply gave scores for any test used as part of the placement process, even if the test was not used as the cut point between remedial or developmental courses and entry-level courses. Retaining tests designed to assess higher-level or lower-level skills may therefore encourage some degree of respondent error.


Additionally, two institutions reported scores for writing tests that were not used to place students into reading-focused remedial courses. One of these institutions noted in a follow-up interview that they had overlooked the instruction on when to report scores used for placement into writing courses and reported the test because it appeared on the list. This again suggests that some respondents may simply report scores for any test appearing on the list that they use in their placement process, even if it does not meet the study’s criteria of eligible tests and scores.


Other related findings on how tests and scores are used


Tests and scores used to place students in courses above entry level

Seven institutions reported scores used to place students in advanced mathematics courses above entry level. For example, one two-year and one four-year institution reported scores on the Compass Trigonometry placement domain that were used to place students into courses above the institution’s entry-level mathematics course. In both cases, the error appeared to be due to the respondents simply selecting each test used by the institution for placement purposes, including placement above entry-level mathematics courses. In another case, the institution reported only the scores used to place students into math courses that were entry level or above. During the follow-up for data clarification, the respondent indicated that the institution does not offer any remedial courses and she had not realized that the survey was asking only about placement into remedial courses.


Tests and scores used to exempt students from placement testing or identify students in need of placement testing

Several institutions participating in survey development reported using test scores (often SAT or ACT) to either exempt students from further placement testing or to identify students in need of placement testing. In such cases, the test score is not used as the deciding factor between placing a student in remediation or allowing a student to enroll in entry-level courses. In the pilot test, eight institutions reported mathematics tests and two institutions reported reading tests used in this fashion. For example, one two-year institution reported mathematics ACT and SAT scores below which students were required to take the Compass placement test and above which students were exempt from taking the Compass test. Although this institution uses ACT and SAT as part of the placement process, their final placement determination between remediation and entry-level courses is made using the Compass score, rather than the ACT or SAT. Another respondent (from a four-year institution) reported mathematics scores for both the ACT and the SAT that were used to identify students that needed to take the institution’s own mathematics placement test, which was the final determinant of placement.


Institutional placement system does not allow reporting a single test score

Two institutions were not able to report a test score due to use of an evaluation system that takes multiple scores into account. For example, one four-year institution reported a “compensatory model” based on logistic regression analysis that combines a student’s ACT mathematics score with the score on the institution’s own placement test to determine mathematics placement. According to the respondent, this means that “it is not possible to list one score on either test below which remedial courses are needed.” Another respondent from a four-year institution was unable to provide a single Accuplacer mathematics score because the institution averages Accuplacer subtest scores to determine students’ placement. These responses suggest that some institutions with complex placement systems will be unable to provide a complete survey response.
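
As an illustration of why such systems cannot yield a single reportable score, the sketch below implements a compensatory rule of the general kind described by the respondent. The logistic coefficients and probability threshold are invented for illustration and do not come from any institution.

```python
import math

# Hypothetical compensatory placement model: a logistic function combines the ACT
# mathematics score with the institution's own placement test score. All coefficients
# and the probability threshold are invented for illustration.
INTERCEPT, ACT_WEIGHT, LOCAL_WEIGHT = -8.0, 0.25, 0.12
SUCCESS_THRESHOLD = 0.5

def predicted_success(act_math: float, local_test: float) -> float:
    """Predicted probability of success in the entry-level mathematics course."""
    z = INTERCEPT + ACT_WEIGHT * act_math + LOCAL_WEIGHT * local_test
    return 1 / (1 + math.exp(-z))

def needs_remediation(act_math: float, local_test: float) -> bool:
    return predicted_success(act_math, local_test) < SUCCESS_THRESHOLD

# A strong score on one test can offset a weak score on the other, so there is no
# single score on either test below which remediation is always required.
print(needs_remediation(act_math=17, local_test=60))  # False
print(needs_remediation(act_math=17, local_test=20))  # True
```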


Additionally, one respondent from a four-year institution noted that students are placed into remediation by scoring below a certain level on the SAT or ACT and below a certain level on the institution’s own mathematics placement test. While the respondent provided scores for the ACT and SAT, it is important to note that these do not completely represent the criteria used by the institution to assess student need for remedial or developmental mathematics courses. It is possible that institutions such as this one may be unwilling to report scores for the full survey administration given that no single score represents their policy.


Writing tests not used to place students into reading-focused courses

In earlier survey development work, some institutions indicated that remedial writing courses can be used to address students’ reading deficiencies. Based on this finding, writing tests were added to question 6, and respondents were instructed to report evaluation for developmental writing courses only if the course had a substantial focus on improving reading skills.


Based on review of comment fields and responses from follow-up interviews, two institutions were found to have reported scores for writing tests that were used to place students into remedial writing courses that did not have a significant reading focus. For example, one two-year institution reported a score for the Compass writing skills placement domain that was used to place students into a remedial English composition course with no focus on improving reading skills. When asked about this in a follow-up interview, the respondent indicated that she had not noticed the instruction to report only tests used to place students into reading-focused courses.




2.2 Findings Related to Survey Methodology

2.2.1 Survey Administration and Identifying Respondents

Survey packages were mailed to the 120 sampled two-year and four-year postsecondary institutions on September 17, 2010. The package contained a questionnaire and a cover letter with information about the survey and instructions for accessing the survey online. Westat conducted interviewer training on September 22, and the interviewers started telephone follow-up for survey nonresponse and data clarification on September 23.


Follow-up lasted about five weeks and occurred through multiple modes of data collection, including interviewer telephone calls and emails as well as mass email prompts encouraging nonrespondents to complete the survey. Based on the level of effort required in earlier survey development work to navigate institutions and identify appropriate respondents, interviewer follow-up activities were organized into three phases:


  • Initial calls to the presidents’ offices to ascertain receipt of the survey packages and identify appropriate respondents;

  • Calls to respondents to confirm receipt of the survey package and encourage cooperation; and

  • Calls to respondents to obtain missing data or clarify inconsistencies in the data.


In addition to interviewer follow-up for survey nonresponse and data clarification, Westat project staff conducted telephone interviews with selected respondents whose survey responses revealed problems in the interpretation of questionnaire items or in being able to provide test score data.


Data collection for survey nonresponse ended on October 29, while telephone interviews for data clarification continued for an additional week.


Findings for Objective 4: Evaluate procedures to identify the appropriate survey respondent

Based on earlier survey development, a major challenge expected for survey administration was navigating institutions in order to identify appropriate survey respondents. As a first step in this process, initial follow-up calls were made to the president’s or chancellor’s office to confirm receipt of the survey package, identify where the materials were sent for completion, and provide assistance in identifying appropriate respondents or offices when necessary. Interviewers were required to document the process of identifying appropriate respondents for each institution using a standardized form.


Challenges in navigating the presidents’ offices

Interviewers found that it took considerable effort to confirm with someone at the president’s office whether the institution had received the package and to determine where it had been sent. If the interviewer could not confirm receipt of the survey package, the first step was to resend it by fax or email to the president’s office and/or the appropriate respondent, if he/she had been identified. While these tasks took an average of four phone calls for two-year institutions and five phone calls for four-year institutions, the averages mask wide variation. The number of calls ranged from one to 12 for two-year institutions and one to 10 for four-year institutions. Nearly 25 percent of two-year institutions required between nine and 12 phone calls, and more than 25 percent of four-year institutions required between six and 10 phone calls. It should be noted that these counts do not include additional calls to encourage respondents to complete the survey, only those calls needed to confirm that an institution had received the package and to identify who the best respondent would be.


A review of the documentation by interviewers revealed the following as the most frequent challenges in tracking survey packages and identifying appropriate respondents:


  • Incorrect mailing addresses. In a couple of cases, the mailing address in the IPEDS data was incorrect. In a few other cases, the IPEDS address did not correspond to the location for the president’s office where the package should be sent. These wrong addresses meant that the survey package was delayed in reaching the president’s office or did not reach it at all.

  • President’s office could not verify receipt of the survey package. About half of the institutions either had not received the package or could not verify its receipt. In some cases, the president’s office did not have well-kept records of packages received and sent out. In tracking down the package, interviewers were often asked whether the envelope was addressed to the president by name, and they were told that this could make a difference to the attention the package received. Even when it was confirmed that the package was received, the office often could not recall where it was sent. In about one-fourth of the cases, the president’s office listed several places where the survey package might have gone (e.g., admissions, student services, institutional research, or the assessment office). As a result, interviewers had to navigate through several offices, identify the appropriate respondent, and send a new survey package to that respondent.

  • Institution was uncertain about what to do with the survey package. In about 10 percent of the cases in which interviewers were able to confirm with someone at the president’s office that the package was received, that person did not know where to send the package. In a few cases, the interviewers were told that the package may have been discarded because of this uncertainty. For example, the secretary of one institution whose president was frequently out of the office discarded the survey because the office could not determine the best respondent. In about 15 percent of the cases where the president’s office had identified an appropriate respondent, interviewers found that the designated respondent had to forward the package to another office or individual who would be more knowledgeable about the survey topic.


While institutions varied widely in whom they identified as the best respondent, a few patterns emerged. At two-year institutions, the respondent was often in the assessment, admissions, or enrollment office. Four-year institutions also frequently identified respondents in these offices, as well as in the institutional research office.


Challenges in contacting respondents and encouraging participation

When the president’s office had identified a respondent, interviewers often had to try many times to speak with that respondent or, as indicated above, had to navigate further through the institution’s system before finding the best respondent.


  • Multiple calls to contact respondents and difficulty getting past gatekeepers. For about 80 percent of the cases, the interviewers had to call two or more times in order to speak with respondents, resulting in an average of four calls (in addition to emails) to contact respondents and an additional five calls, on average, to encourage respondents to complete the survey. Often, secretaries said they would have the president or appropriate respondent call the interviewer back, but no one did. Some institutions pushed the survey deeper into the institution’s bureaucracy, sending it to the provost or another senior person who did not have the time to respond. In addition, some designated respondents were out on vacation. In about half of the cases, the president’s office did not give the direct contact information of the respondent, so the interviewer had to repeatedly talk to a secretary or other gatekeeper to encourage completion of the survey.

  • Multiple campuses and external entities. In one case, an institution wanted to send the survey to the main campus even though its satellite campus was selected. The satellite campus could not tell the interviewer where in the main campus the survey should go, thus requiring additional interviewer time to obtain responses. Another institution followed guidelines set by an external entity on behalf of the entire university system. Interviewers had difficulty finding someone in this external group who could respond to the survey.

  • Not familiar with the National Assessment Governing Board. In about one-third of the cases, institutions were reluctant to participate in the survey because they were not familiar with the National Assessment Governing Board. The institutions asked questions such as, What is the National Assessment Governing Board? Why are they doing this study? What are they going to do with the information collected? After using the information in the training manual to provide a description of NAGB over the phone, some interviewers found it helpful to send additional information via email. For example, in addition to providing the short description of NAGB in the Frequently Asked Questions at the end of the questionnaire (and the interviewer manual), a few interviewers faxed a page of information about NAGB from the website http://www.nagb.org/what-we-do/board-works.htm#responsibilities and a page about the National Assessment of Educational Progress (NAEP) from the website http://nationsreportcard.gov/about.asp. The explanation and additional background information generally helped to elicit cooperation.

A note about private for-profit institutions. Interviewers generally had more difficulty navigating private for-profit institutions to locate the survey package, identify appropriate respondents, and elicit cooperation. In a few cases, the interviewer had to communicate strictly with the corporate office. In one case, it was difficult to locate the person identified as the appropriate respondent at the corporate office; at another institution, the interviewer was passed around the corporate office because no one knew who should respond. At another for-profit institution, the director never returned calls and was never available when the interviewer called. After numerous calls to another corporate office, the interviewer was told that the institution would not be able to participate in the survey.


Effective strategies. While working through the many challenges to find the best respondent, interviewers identified several helpful strategies. They emphasized the importance of working with the executive secretary in the president’s office; this gatekeeper could often identify where to send the package or could ensure that the president saw the introductory letter. Interviewers also found it useful to send an email with information about the National Assessment Governing Board rather than only describing the organization over the phone. Finally, interviewers recommended calling back numerous times until they could reach the right person; the general agreement was that waiting for an institution to call back was rarely fruitful.



2.2.2 Response Rates

Of the 120 sampled institutions, eight were ineligible for the survey because they checked the directions box on the questionnaire (table 1). This left a total of 112 eligible institutions. Questionnaires were received from 96 institutions, or 86 percent of eligible institutions. Of the 96 institutions that completed the survey, 85 percent completed it on the Web, 6 percent completed it by mail, and the remaining 9 percent completed the survey by phone, fax, or email.


Response rates varied by type of institution; the response rate ranged from 89 to 93 percent for the two-year and four-year public and private not-for-profit institutions (table 2). Private for-profit institutions lagged behind (33 percent for two-year and 63 percent for four-year institutions).



Table 2. Pilot test schedule showing focus of survey activity and response rates


Focus of survey activities                                      Period/          Status    Response rates (percent)
                                                                dates            report    Overall   Public   NFP     FP      Public   NFP     FP
                                                                                 date                2-yr     2-yr    2-yr    4-yr     4-yr    4-yr

Survey mailout                                                  9/17             NA        NA        NA       NA      NA      NA       NA      NA
Interviewer training                                            9/22             NA        NA        NA       NA      NA      NA       NA      NA
Followup calls to ascertain status of package and
  identify respondent                                           9/23 to 10/1     10/1      22        33       30      20      10       23      10
First email prompt (10/11); minimal interviewer calls           10/4 to 10/11    10/8      49        57       67      40      37       59      22
Followup calls to encourage completion of survey                10/13 to 10/15   10/15     62        70       89      40      57       69      22
Second email prompt (10/21); minimal interviewer calls          10/21 to 10/26   10/22     74        87       89      44      77       76      33
Final push to increase response rates                           10/27 to 10/29   10/29     85        90       89      50      87       93      63
Final data cleaning and completion of telephone interviews      11/1 to 11/4     11/4      86        90       89      33      90       93      63

NA = not applicable. NFP = private not-for-profit; FP = private for-profit.



Ineligibles. Interviewers called each of the eight institutions to confirm whether they correctly checked the directions box. Two-year institutions checked the box to indicate that they did not have any entering students who were pursuing a degree program designed to transfer to a four-year institution, while four-year institutions checked the box to indicate that they did not enroll entering students in an undergraduate degree program in the liberal arts and sciences. The following information was provided to interviewers during follow-up calls:


  • Three institutions described themselves as career-based colleges with a focus on training students for the workforce. These students received terminal degrees or certificates that are not transferable to four-year institutions.

  • Two institutions indicated that they offered occupationally based programs only; these programs are not designed for students intending to transfer to four-year institutions. For example, one of these was a nursing school that conferred licenses to its graduating students.

  • A culinary school indicated that they did not offer any type of degree program designed for transfer to four-year institutions.

  • One institution indicated that they offered specialized programs in animation and computer application; these programs are not designed for students who wish to transfer to four-year institutions.

  • One four-year institution enrolled only students who have already obtained an associate’s degree; i.e., the college does not enroll freshmen.


Refusals. Despite interviewers’ efforts, some institutions refused to complete the survey. Interviewers identified seven cases as final refusals and documented the following reasons provided by the institutions.


  • The executive assistant from the president’s office indicated that the institution was not interested in participating at this time and requested no further contact regarding the survey.

  • The president of a college declined participation on the basis that he was new to the institution. He indicated that as an attorney, he had to be cautious about the information he gives out to researchers. All attempts to reassure him about the confidentiality of survey responses and the importance of the pilot test failed to elicit cooperation.

  • One respondent completed only the respondent contact information on the survey and left the entire questionnaire blank. When contacted, he indicated that the institution did not evaluate students for developmental or remedial courses, and agreed to respond “no” to questions 1, 3, 5, and 7. However, he later called back to withdraw his responses on the basis that they were inaccurate. He also indicated that the chancellor advised that the institution should not answer any of the questions because the requested information was proprietary. The chancellor also advised that all survey requests would have to be approved through a review process before any of the information could be released. As a result, the case was changed to a status of final refusal.

  • One institution declined participation on the basis that “the survey is voluntary.” This institution would only participate in surveys that are mandatory.

  • The chancellor of an institution indicated that the refusal came from the Office of Undergraduate Programs (main campus). The main office refused on the basis that the survey was a pilot test.

  • A receptionist at the respondent’s office declined participation on behalf of her boss. Despite numerous attempts, the interviewer was unable to make direct contact with the appropriate respondent.


Item nonresponse. This section reports the item nonresponse after all interviewer follow-up for missing items was completed. There was no full-item nonresponse in the pilot test after missing data were retrieved. In other words, no responding institution left a question it was eligible to answer completely blank. However, partial item nonresponse was found on question 2 (mathematics tests) and question 6 (reading tests). For both questions, some respondents reported using tests but did not provide a score for those tests. The following mathematics tests had incomplete response (the number of institutions reporting use of the test without reporting a score is shown in parentheses):


  • ACT Mathematics (2)

  • SAT Mathematics (1)

  • Accuplacer Arithmetic (2)

  • Accuplacer Elementary Algebra (3)

  • Accuplacer College-Level Mathematics (3)


The following reading tests had incomplete response:


  • ACT Reading (1)

  • SAT Critical Reading (1)

  • Accuplacer Reading Comprehension (1)

  • Accuplacer Sentence Skills (1)

  • Accuplacer WritePlacer (1)

  • Compass Writing Skills placement domain (1)

  • Nelson-Denny Reading Test (3)


It should be noted that some of these institutions provided comments that indicated that they could not provide a single score because of the way their assessment procedures were organized. Respondents providing incomplete data were contacted for the missing test scores. Some respondents indicated that they had accidentally left the score field blank and provided the score (these institutions are not reflected in the counts). In other cases, incomplete responses were due to use of a placement policy that did not allow the reporting of a single score (e.g., placement based on an average of scores). Some respondents with incomplete responses could not be reached during the data clarification follow-up period.



2.2.3 Measures Taken to Maximize Response Rates

To address methodological and substantive problems encountered in earlier survey development work, we implemented data collection procedures and a schedule that allowed for (a) close monitoring of efforts to identify appropriate respondents and (b) multiple modes of follow-up for survey nonresponse and data clarification. Key strategies for maximizing response rates included in-depth training of interviewers and a data collection plan that emphasized a three-stage process: an initial focus on navigating the president’s office and identifying appropriate survey respondents, followed by intensive telephone calls and mass emails to encourage respondent participation in the survey, and a third phase in which interviewers focused on improving item response rates and clarifying data inconsistencies.



In-depth preparation and training for interviewer follow-up

Having a group of highly trained interviewers equipped with the appropriate information and strategies for follow-up calls to institutions was a key factor in obtaining the targeted response rate. Thus, we prepared an interviewer training manual with key information about the study and questionnaire items, and conducted in-depth training to ensure that the interviewers were familiar with the methodological and substantive issues of the study. This training included lessons learned from earlier survey development work, such as problems in identifying appropriate survey respondent(s) and potential challenges for some institutions in reporting test scores.



Intensive follow-up to identify appropriate survey respondent(s)

Based on the difficulties encountered in earlier survey development work to identify appropriate respondents, most of the first two weeks of follow-up efforts were dedicated to calls to the presidents’ offices. These calls were focused on ascertaining receipt of the survey package and identifying the appropriate office or survey respondent to contact. Several factors were critical during this period:


  • Based on the expectation that many institutions would not be able to confirm receipt of the survey package, we ensured that there were efficient procedures and equipment in place to quickly resend survey packages by fax or email to the presidents’ offices and/or survey respondents.

  • Interviewers relied heavily on the information provided in the manual to suggest the appropriate office or survey respondent. They also used this information to navigate the institutions themselves when no clear direction was provided by the presidents’ offices.

  • During their initial contact with institutions, interviewers focused on obtaining the names, email addresses, and telephone numbers of respondents. The email addresses facilitated further follow-up through mass email prompts to nonrespondents.


Encouraging participation in the survey

Interviewers found that some institutions were not willing to provide the requested information because the designated respondent did not have all of the information and felt it was too much of a burden to consult with other individuals or offices. Some others were not familiar with NAGB and were unsure about why this information was needed and how it would be used. Others felt that the requested information was proprietary. In these cases of “soft refusal,” interviewers were generally successful in using the information provided in the training manual to address questions and obtain cooperation. In other instances, however, it was clear that no amount of persuasion or additional information would yield any success. For example, some of those who could not be persuaded to participate in the survey clearly stated that they were not interested and asked not to be contacted any further, while others could not be reached despite many attempts to contact them.


Having a schedule for multiple methods of follow-up for survey nonresponse was an important factor in obtaining the 86 percent response rate. Follow-up to elicit respondent cooperation started with a brief but intensive period of calls to respondents, followed by a period of low interviewer activity during which Westat sent mass emails to nonrespondents to encourage them to complete the survey. This process was repeated a second time. As shown in table 2, the use of mass emails was effective in maintaining a relatively high number of completes during periods of low interviewer follow-up for survey nonresponse.



Obtaining missing data and clarifying inconsistent data

Data processing and follow-up for missing and inconsistent data were major undertakings for the pilot test. These steps included reviews of numerical data and of comments from text fields for both paper completes and surveys completed on the Web. They also included the preparation of scripts for interviewer follow-up for data clarification and the use of multiple modes of follow-up for more complex data problems (i.e., calls, fax, and emails).


Of the 96 respondents, 10 indicated that they used a test but did not provide a test score, 24 left one or more other questions unanswered, and 28 provided inconsistent data or unclear comments (table 3). In some of these cases, it was possible to resolve the problem through a review of the questionnaire data or comments provided in the comment box. However, in other cases, it was necessary to follow up with the respondent to obtain clarification or missing data. For example:


  • For the 10 cases with missing test scores, the comments provided sometimes explained why the institution could not provide a score; follow-up calls with respondents were needed for seven of the 10 cases.

  • In most instances, other types of missing data resulted from the respondent either overlooking a question or responding only to the questions that applied to the institution. For example, some respondents answered the questions on math but left the entire reading section blank, while others selectively answered the questions that applied to the institution and left the others blank. In 21 of the 24 cases, it was necessary to conduct follow-up calls.

  • Of the 28 cases that had obvious data problems other than missing data, follow-up was needed for 21 cases because problems for the 7 other cases were resolved through a review of the comments provided.


Table 3. Distribution of completed questionnaires, by status of data problems


Case status                                                           Number    Percent

Total completes                                                       96        100
Cases with missing test scores                                        10        10
   Cases needing followup                                             7         7
Cases with other unanswered questions                                 24        25
   Cases needing followup                                             21        22
Cases with inconsistent data or unclear comments                      28        29
   Cases needing followup to clarify inconsistent data or
   obtain missing score                                               21        22

NOTE: Percents are based on the total completes (96 cases). For cases that did not need follow-up, the problems were resolved through review of data/comments provided in the survey.


We used multiple strategies to maximize item response rates and the quality of the data collected. The first attempt to obtain missing data was through telephone calls to respondents. However, because many respondents were difficult to reach by telephone (often because of gatekeepers such as secretaries), the next step was to use personalized email prompts. These prompts included the questions with missing or inconsistent data so that respondents could insert answers directly in their email replies. In cases where email prompts failed to elicit responses, interviewers followed up with a call and/or a faxed version of the email script. Although more labor intensive, these multiple strategies were more effective in raising item response rates and clarifying data than sole reliance on telephone calls. The approach was especially effective for respondents who were difficult to contact; in many of these cases, the interviewers had already established rapport with gatekeepers, who could then be persuaded to forward the faxed material to respondents for completion.

2.2.4 Implications for Sampling Approach for Full-Scale Data Collection

The pilot test sample was small and not intended to allow for precise estimates of response rates or eligibility rates for the full-scale study. Instead, the pilot test was intended to explore the questionnaire and survey administration issues described in the previous sections of this report. However, information from the pilot test provides some indication that private for-profit institutions may have substantially lower response rates and higher ineligibility rates than other types of postsecondary institutions. Table 4 shows the number of respondents we can expect from the main survey based on the response rates and eligibility rates from the pilot test; it also includes the numbers that were included in the original OMB package. It should be noted that the calculations take the pilot test response and ineligibility rates at face value; the extrapolated results should therefore be interpreted with caution.


The extrapolated estimates suggest that the numbers of respondents for two-year and four-year private for-profit institutions will be lower than expected, resulting in smaller cell sizes for data analyses. In addition, there are too few two-year private not-for-profit institutions in the universe, and therefore in nationally representative samples, to allow for analysis as a separate category. As in the U.S. Department of Education's Postsecondary Education Quick Information System (PEQIS) sample studies, we will use one of two approaches to address the issue of small cell sizes. One option is to include counts for two-year private institutions (for example) in national estimates but not report estimates for these institutions as a separate category. Another option is to report the two-year private institutions as a single combined category if the combined cell size makes this approach worthwhile. The option selected for data analysis will depend on response rates and eligibility rates for private institutions.
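The extrapolation itself is straightforward arithmetic on the pilot rates. The short sketch below is illustrative only (it is not part of Westat's processing systems) and assumes that expected respondents equal the sampled institutions, less those screened out as ineligible, times the pilot response rate; the hypothetical function name is ours.

    # Illustrative sketch only: extrapolating expected respondents for one sector
    # of the full-scale sample from the pilot test rates shown in table 4.
    def expected_respondents(sample_size, ineligibility_rate, response_rate):
        # Remove the share expected to be ineligible, then apply the pilot
        # response rate to the remaining eligible institutions.
        eligible = sample_size * (1 - ineligibility_rate)
        return round(eligible * response_rate)

    # Example: two-year private for-profit institutions
    # (82 sampled, 40.00 percent ineligible, 33.33 percent response rate).
    print(expected_respondents(82, 0.40, 0.3333))  # prints 16, as in table 4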


Table 4. Pilot sample extrapolated to full-scale survey

Pilot results

Level and type of control        Sample   Respondents   Non-respondents   Ineligible   Response rate   Ineligibility rate
2-year
   Public                            30            27                 3            0          90.00%                0.00%
   Private, not-for-profit           10             8                 1            1           88.89                10.00
   Private for-profit                10             2                 4            4           33.33                40.00
4-year
   Public                            30            27                 3            0           90.00                 0.00
   Private, not-for-profit           30            27                 2            1           93.10                 3.33
   Private for-profit                10             5                 3            2           62.50                20.00
Total                               120            96                16            8           85.71                 6.67

Extrapolated to full-scale survey

Level and type of control        Sample   Respondents   Non-respondents   Ineligible   Response rate   Ineligibility rate   Number of respondents per OMB package
2-year
   Public                           468           421                47            0          90.00%                0.00%                                     398
   Private, not-for-profit           22            18                 2            2           88.89                10.00                                      19
   Private for-profit                82            16                33           33           33.33                40.00                                      70
4-year
   Public                           468           421                47            0           90.00                 0.00                                     398
   Private, not-for-profit          468           421                31           16           93.10                 3.33                                     398
   Private for-profit               160            80                48           32           62.50                20.00                                     136
Total                             1,668         1,378               208           83           85.71                 6.67                                   1,418

NOTE: Details do not add to totals because of rounding.



Table 5 shows the change in standard errors we can expect for the full-scale survey based on the response rates and eligibility rates from the pilot test. To simplify comparisons between this table and the table included in the original OMB submission, two columns have been added showing the percent increase in standard errors (where the expected sample size decreases) or the percent decrease in standard errors (where the expected sample size increases). For most sectors, we expect either a modest increase in standard errors (i.e., a modest loss in precision) or even a slight decrease in standard errors (e.g., a slight improvement in precision for public institutions). For private for-profit institutions and private two-year institutions, however, the losses in precision are expected to be substantial because of a combination of low eligibility rates and low response rates.
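The standard errors in table 5 are consistent with the usual formula for the standard error of an estimated proportion under a design effect (the table note assumes a design effect of 1.3). As a rough illustrative check only — Westat's actual variance estimation may also apply finite population corrections for some subgroups — the total row can be reproduced as:

    SE(\hat{p}) \approx \sqrt{\mathrm{deff}\cdot\frac{p(1-p)}{n}},
    \qquad\text{e.g.,}\quad
    \sqrt{1.3 \times \frac{0.5\,(1-0.5)}{1{,}378}} \approx 0.015 .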


Table 5. Expected sample sizes and levels of precision based on eligibility rates and response rates from the Pilot Study (Revised OMB Table B-8)


Subgroup                     Expected        Standard error† of an estimated     Percent increase   Percent decrease
                             sample size*    proportion equal to ...             in SE††            in SE††
                                             P = 0.20   P = 0.33   P = 0.50

Total                             1,378         0.012      0.014      0.015                 1.4%                ---

Public                              842         0.016      0.018      0.020                  ---               2.8%
   4 year                           421         0.022      0.026      0.028                  ---               2.8%
   2 year                           421         0.022      0.026      0.028                  ---               2.8%

Private, 4 year                     502         0.020      0.024      0.025                 3.2%                ---
   Not for profit                   422         0.022      0.026      0.028                  ---               2.8%
   For profit                        80         0.051      0.060      0.064                30.4%                ---

Private, 2 year                      34         0.078      0.092      0.098                61.8%                ---

4-year institutions                 923         0.015      0.018      0.019                 0.4%                ---
   Requires test scores             615         0.018      0.022      0.023                 0.4%                ---
   Has open admissions              143         0.038      0.045      0.048                 0.4%                ---
   Most/very selective**            186         0.023      0.027      0.028                 0.4%                ---
   Moderately selective             389         0.023      0.027      0.029                 0.4%                ---
   Minimally selective              179         0.034      0.040      0.043                 0.4%                ---

--- No change.

* Expected number of responding eligible institutions.

† Assumes a design effect of 1.3. For subgroups consisting of institutions selected with certainty, the standard errors will be smaller than those shown.

** Standard errors include an approximate finite population correction to reflect the fact that these institutions will be selected at relatively high rates.

†† Percent increase/decrease in the standard errors based on experience from the Pilot Study versus those given in the original OMB writeup.



3 Recommendations and Decisions

This section includes recommendations and decisions regarding changes to the questionnaire and survey methodology. The changes are based on findings that relate to the four pilot test objectives outlined in the introduction and other findings from survey responses, interviews with selected survey respondents, and survey administration.



3.1 Recommendations for Changes to the Questionnaire

3.1.1 Clarifying the Correct Test and Score to Report

The recommendations and decisions discussed in this section are in response to findings related to the first two pilot test objectives below and other findings from survey responses and interviews with selected survey respondents. The pilot test objectives are:

  • Objective 1: Evaluate strategy of different questionnaires for two-year and four-year institutions

  • Objective 2: Evaluate survey instructions intended to address variable scoring systems


The pilot test revealed several types of respondent error related to reporting the correct test and score. For example, 12 respondents were found to have incorrectly reported tests or scores used to place students into low-level remedial courses. Other errors identified included reporting of scores used to place students in courses above entry level, reporting tests used to identify students in need of further placement testing, and reporting of scores “at or below which” students are placed into remediation, rather than the score “below which” remediation was needed. These findings suggest that additional clarification on the correct test and score should be provided to respondents.


To this end, the following changes were recommended and implemented:


  • The order of the bulleted instructions on questions 2 and 6 was reversed. The most prevalent respondent error identified in the pilot test was the reporting of tests and scores used to assess students’ need for low-level remedial courses (particularly in mathematics). Reversing the order of the bulleted instructions will make the instruction on reporting for multiple levels of remediation more prominent, thus reducing this reporting error.

  • An example score scale was added to the questionnaire to show the correct score to report. An example score scale identifying the correct score to report will reduce a variety of test and score reporting errors, including misreporting of scores for academic programs requiring advanced skills and scores for low-level remedial courses. The example score scale shown below includes labels of different tests that respondents may use as part of their placement process, with the score requested for the survey clearly distinguished.


Example of a placement test score scale (0–100)

Score          Placement outcome
80 or above    Students are placed into college courses above entry level or into academic
               programs with advanced skills requirements (e.g., engineering, physics, and
               mathematics programs)
50 to 79       Students are placed into entry-level college courses
50             Students scoring below this level are in need of remedial or developmental
               courses. Students scoring at or above this level are placed into entry-level
               college courses
               [Callout on the questionnaire: "On questions 2 and 6, report only the score
               below which students needed developmental or remedial courses."]
40 to 49       Students are placed into the highest level of remedial or developmental courses
39 or below    Students are placed into lower levels of remedial or developmental courses


The score scale in the questionnaire is preceded by a note explaining that while institutions use mathematics and reading tests and scores to make a variety of placement decisions, only the score below which students are in need of remedial or developmental courses should be reported. It also points out that the scale is meant only as an example that does not represent an actual test and may not exactly reflect the institution’s policy. Notes referring respondents to the example score scale were added to questions 2 and 6 as well.



3.1.2 Changes to Test Lists

The recommendations and decisions discussed in this section are in response to findings related to the third pilot test objective: Assess completeness of test lists, as well as issues that emerged from survey responses and interviews with selected survey respondents. The following changes were recommended and implemented regarding the test lists:


  • The ACT Science test was removed from question 2. No institution reported using the ACT Science exam in the pilot test. This test was included because of the possibility that some institutions use the science portion of ACT to assess students’ mathematics ability, but the results of the pilot test suggest that this is very uncommon.

  • Tests designed to assess lower-level skills were retained on the questionnaire. Some respondents appeared to report tests and scores used to identify students for lower levels of remediation because they assumed that they should provide a score for all tests used in the institution's placement process. While removing such low-level tests (e.g., Accuplacer Arithmetic) from the questionnaire would eliminate the risk of misreporting for some institutions, at least two pilot test institutions were found to legitimately use these low-level tests as a means to determine students' readiness for entry-level courses. In order to encourage all institutions to participate, it is important to provide respondents with the option to report scores for low-level tests if they are used to assess student readiness for entry-level courses. Institutions that reported more than one subtest for a particular type of test will be contacted for data verification. In addition, respondents completing the web version of the survey will be prompted for data verification if they reported more than one subtest for a type of test.

  • Specialized mathematics tests were removed from question 2. Seven respondents reported mathematics tests used to place students in advanced mathematics courses above entry level. These were typically more “specialized” assessments such as the Compass Geometry or Trigonometry tests. This suggests that including these tests on the questionnaire will lead some respondents to misreport scores used to place students in courses above entry level, rather than those used to determine need for developmental or remedial courses.

  • Writing assessments were removed from question 6. Writing tests were infrequently reported in the pilot test. For example, the Compass Writing Skills placement domain, the most commonly reported writing test, was cited by only 4 percent of institutions overall. Several writing tests, including the ACT writing test and the Compass e-Write tests, were not reported by any institution. Moreover, at least two respondents were found to have incorrectly reported writing tests used to place students into remedial writing courses that did not include a focus on improving reading skills. This suggests that writing assessment scores will need to be verified with respondents, and Westat believes that this effort would be better spent validating mathematics and reading test scores, which are more aligned with NAGB’s goals for the study.


3.1.3 Use of Comment Boxes

Pilot test findings suggest that data validation and cleaning will represent a significant challenge for the full-scale survey. The effort involved in this process will be reduced if respondents provide detailed information about how they used reported tests and test scores in comment boxes. Indeed, information gleaned from comment boxes could help resolve a variety of data issues. For example, respondent comments could be used to identify reporting of tests used to identify students in need of further placement testing (e.g., an ACT score below which students are required to take the Compass test).


The following changes were recommended and implemented regarding the use of comment boxes:


  • Specify what information to include in comment boxes after questions 2 and 6. The comment boxes are currently preceded by a note to respondents asking for "any comments" about their response to the question. One respondent suggested in an interview that asking more directly for information about how tests are used would prompt respondents to provide more detailed information. Thus, the note was revised to read, "Please provide additional details about your response here. For example, describe how students are placed based on the scores you provided."

  • Prompt respondents completing the survey online to provide a comment. The web survey will include prompts to encourage respondents to describe their assessment procedures in the comment box. For example, respondents reporting more than one subtest will be prompted to provide comments in the comment box.



3.2 Recommendations for Survey Methodology

This section includes recommendations and decisions in response to findings related to the fourth pilot test objective as well as other issues related to approaches for maximizing response rates and data quality. As described earlier, the biggest burdens in data collection for the full-scale study will be ascertaining receipt of the survey package and identifying appropriate respondents; encouraging respondents to complete the survey; and conducting additional interviewer follow-up for data retrieval and clarification.


3.2.1 Identify Appropriate Survey Respondents

The fourth pilot test objective is: Evaluate procedures to identify the appropriate survey respondent. Based on lessons learned from the pilot test, we will use the same multiple methods of faxing and emailing to send packages quickly in cases where interviewers are unable to confirm receipt of the package. In addition, we will implement a schedule similar to the pilot test schedule, in which the initial four-week period of follow-up is dedicated to tracking the status of the survey package and identifying appropriate respondents. We will also incorporate additional information from the pilot test into the interviewer manual and training (e.g., through role play) to ensure that interviewers can handle these tasks with ease and efficiency. In addition, we will add information to the cover letter and the Frequently Asked Questions page at the end of the survey to provide guidance on who the appropriate respondent might be.


In addition to the strategies that worked well in the pilot test, we recommend that the envelopes used for the survey packages be modified to include the names of presidents and the NAGB logo. Based on feedback from interviewers, these changes may help draw attention to the survey package and reduce the number of survey packages that are discarded. These personalized envelopes have been used in other surveys at Westat, and we will be able to obtain recent lists of institutions with names of presidents or chancellors from those studies.



3.2.2 Maximize Survey Response Rates

Many institutions will have to be persuaded by interviewers to complete the survey. Some may need additional information before they agree to participate, including information about NAGB, the purpose of the survey, the use of the data, and the importance of the survey. As in the pilot test, we will ensure that interviewers are fully equipped with the information they need, including a document that provides some detailed information about NAGB that interviewers could fax or email to respondents. We will also use multiple methods and a schedule similar to the pilot test that allows for efficient use of telephone calls and mass emails to encourage respondent cooperation.



3.2.3 Maximize Item Response Rates and Data Quality

Based on the types of data problems identified in the pilot test and the extent to which these problems may occur (even with the recommended changes described in the previous subsection), it is clear that editing the surveys and conducting follow-up for data retrieval and clarification will be a major undertaking for the full-scale data collection. For example, a significant and costly burden will be to obtain missing data for test scores and additional information about how the test scores are used, especially in cases where more than one subtest and score are reported for a single category of test (e.g., more than one ASSET subtest and test score). As discussed earlier, we expect about 64 percent of completed questionnaires to have problems that range from missing test scores and other unanswered questions to more complex problems regarding misreporting of test scores (table 3). Based on a review of the data, including comments provided in the comment boxes, about 7 percent of the cases will need data retrieval or clarification for missing test scores, 22 percent will need follow-up for other unanswered questions, and 22 percent will have complex data problems that will require careful review to determine the kinds of follow-up questions that need to be asked.
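The follow-up percentages cited above follow directly from the table 3 counts. The brief sketch below is an arithmetic illustration only (not part of Westat's processing systems) and assumes each follow-up category is computed as a simple share of the 96 completes.

    # Illustrative arithmetic only: share of the 96 completed pilot questionnaires
    # needing each kind of follow-up, using the counts reported in table 3.
    completes = 96
    followup_counts = {
        "missing test scores": 7,
        "other unanswered questions": 21,
        "inconsistent data or unclear comments": 21,
    }
    for problem, n in followup_counts.items():
        print(f"{problem}: {round(100 * n / completes)} percent")
    # Prints 7, 22, and 22 percent, matching the rates cited above.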


It should be noted that while the suggested changes to the questionnaire, such as encouraging the use of comment boxes, will help to clarify the data on completed surveys, they will also increase the burden of reviewing the comments. We will train dedicated staff to review comments to clarify data and will prepare templates/forms that can be used for personalized email and fax follow-up for missing data and clarification of inconsistent data (similar to the scripts used in the pilot test).



3.2.4 Time Needed to Complete the Survey

Of the 96 responding institutions, 79 provided the number of minutes needed to complete the survey. The average number of minutes required was 20, with a minimum of 2 minutes and a maximum of 120 minutes. Sixty-six respondents (or 84 percent) reported 30 minutes or less to complete the survey. Of the 13 respondents that reported more than 30 minutes, 8 reported 31 to 45 minutes, 4 reported 46 to 60 minutes, and 1 reported 120 minutes (including time taken to consult with several individuals). Westat recommends listing a burden estimate of 30 minutes on the questionnaire to account for the inclusion of additional encouragement for respondents to provide comments in the comment boxes.




3.2.5 Implications for Full-Scale Data Collection Level of Effort and Costs

Westat believes that it is feasible to obtain a response rate of 85 percent or higher for the full-scale data collection. However, based on the experiences of the pilot test, the level of effort needed to obtain the desired response rate will be higher than assumed in the original budget for the full-scale data collection.


The decisions on changes to the questionnaire and approaches for maximizing response rates and the quality of the data collected have implications for an increased level of effort in the following areas of data collection and processing:


  • Ascertain receipt of the survey package, identify appropriate respondents, and contact respondents;

  • Encourage respondents to complete the survey, obtain missing data, and clarify inconsistent data; and

  • Process completed surveys and prepare materials for additional interviewer followup to clarify inconsistent data.


The increased level of effort in the full-scale data collection will be supported with available funds previously awarded under the current project contract. Thus, no additional funds will be required.













Appendix A

Summary of Commonly Reported Tests and Mean Scores




This appendix summarizes commonly reported tests and mean scores for those tests.1 It is important to note that these data are based on a very small sample of institutions and are presented here only to provide a basic sense of the kinds of responses that might be expected in a full-scale survey administration, including potential anomalous results. The percent distributions should not be interpreted as prevalence estimates because of the small number of institutions and the fact that the figures have not been weighted to represent the population. Similarly, mean scores should not be viewed as representative of a larger population, and mean scores calculated from a small number of cases should be interpreted cautiously (see table notes for details). The data in this section and in Appendix B are not appropriate for public release and should not be used for any purpose other than an initial look at potential anomalous results.


Mathematics Tests

Overall, 79 percent of survey respondents reported using any test to assess entering students' need for developmental or remedial mathematics courses (Table A-1). Table A-1 also displays the five most commonly reported off-the-shelf mathematics tests, broken out by institution characteristics.2 Overall, the ACT and SAT Mathematics tests were the most commonly reported tests (39 and 29 percent, respectively), followed by the Accuplacer Elementary Algebra test, the Compass Algebra test, and the Accuplacer College-Level Mathematics test. The ACT Mathematics test was the most frequently reported test among four-year institutions (41 percent), while equal proportions of two-year institutions cited the ACT Mathematics, Accuplacer Elementary Algebra, and Compass Algebra tests (35 percent).


Table A-1. Five most commonly reported mathematics tests, by institution characteristics

Columns, in order: Number of responding institutions in sample, followed by the percent of institutions using each test — Used any mathematics test; ACT Mathematics; SAT Mathematics; Accuplacer Elementary Algebra; Compass Algebra domain; Accuplacer College-Level Math

   All institutions                  96   79   39   29   19   19   10

Institution level
   2-year                            37   89   35   24   35   35   19
   4-year                            59   73   41   32    8    8    5

Institution sector
   Public 2-year                     27  100   44   26   37   44   22
   Private not-for-profit 2-year      8   63   13   25   25   13   13
   Private for-profit 2-year          2   50    #    #   50    #    #
   Public 4-year                     27   89   48   33   15   19    7
   Private not-for-profit 4-year     27   59   37   33    #    #    4
   Private for-profit 4-year          5   60   20   20   20    #    #

Institution selectivity
   2-year institutions               37   89   35   24   35   35   19
   4-year institutions
      Most selective                  1  100  100  100    #    #    #
      Very selective                 14   57   21   29    7    7    7
      Moderately selective           29   90   55   38    7    7    3
      Minimally selective             6   67   33   33   17   17    #
      Open admissions                 9   44   22   11   11   11   11

NOTE: Figures in this table are based on a small number of institutions and are not representative of the broader population. These data should not be used for any purpose other than assessing the results of the pilot test.

# Rounds to zero.

SOURCE: Pilot test, Survey on Evaluating Student Need for Developmental or Remedial Courses at Postsecondary Education Institutions, fall 2010.


Not shown in Table A-1 are responses for ASSET mathematics tests and the frequency of "other" mathematics tests written in by respondents. ASSET tests were reported at very low frequencies, with the Elementary Algebra test the most commonly reported at only 4 percent (Appendix B, Table B-1). Overall, 23 percent of respondents used the write-in field to report a test not appearing on the list; as discussed in the next section, the majority of these were institutionally developed tests.


Table A-2 shows the mean scores for each of the five most commonly reported mathematics tests, broken out by institution characteristics. Overall, most means fall at or slightly above the midpoint of the test's scale (shown in parentheses). The relatively lower mean score for the Accuplacer College-Level Mathematics test is an exception, although not necessarily unexpected given that this is the highest-level Accuplacer mathematics test. Two-year institutions reported slightly higher scores on average than four-year institutions for all but one of the five tests, but it is worth noting that two of the estimates for four-year institution means are based on a very small number of cases and should be interpreted with caution.
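The "at or slightly above the midpoint" observation can be checked with a small illustrative sketch (not part of any survey system) that uses only the score ranges and overall means shown in Table A-2:

    # Illustrative check only: compare each overall mean in Table A-2 with the
    # midpoint of the test's reported score range.
    tests = {
        # test: (scale minimum, scale maximum, overall mean from Table A-2)
        "ACT Mathematics": (1, 36, 20),
        "SAT Mathematics": (200, 800, 499),
        "Accuplacer Elementary Algebra": (20, 120, 74),
        "Compass Algebra domain": (1, 99, 54),
        "Accuplacer College-Level Math": (20, 120, 54),
    }
    for name, (low, high, mean) in tests.items():
        midpoint = (low + high) / 2
        print(f"{name}: midpoint {midpoint}, reported mean {mean}")
    # All but the Accuplacer College-Level Math mean fall at or slightly above
    # the midpoint, consistent with the exception noted above.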

Table A-2. Mean scores of five most commonly reported mathematics tests, by institution characteristics

Columns, in order (mean scores): ACT Mathematics (1–36); SAT Mathematics (200–800); Accuplacer Elementary Algebra (20–120); Compass Algebra domain (1–99); Accuplacer College-Level Math (20–120)

   All institutions                  20   499    74    54    54

Institution level
   2-year                            21   503    77    55    48
   4-year                            20   497    64!   51    74!

Institution sector
   Public 2-year                     21   504    78    52    49
   Private not-for-profit 2-year     NA   500!   79!   NA    NA
   Private for-profit 2-year         NA    NA    NA    NA    NA
   Public 4-year                     20   470    62!   51    NA
   Private not-for-profit 4-year     20   524    NA    NA    NA
   Private for-profit 4-year         NA    NA    70!   NA    NA

Institution selectivity
   2-year institutions               21   503    77    55    48
   4-year institutions
      Most selective                 NA    NA    NA    NA    NA
      Very selective                 22!  533!   NA    NA    NA
      Moderately selective           19   491    58!   50!   NA
      Minimally selective            22!  515!   NA    NA    NA
      Open admissions                21!   NA    NA    NA    NA

NOTE: Figures in this table are based on a small number of institutions and are not representative of the broader population. These data should not be used for any purpose other than assessing the results of the pilot test.

! Interpret with caution; mean score is based on 2 to 4 cases.

NA = not applicable; too few cases to report.

SOURCE: Pilot test, Survey on Evaluating Student Need for Developmental or Remedial Courses at Postsecondary Education Institutions, fall 2010.



Reading Tests

Table A-3 displays the five most commonly reported reading tests, broken out by institution type. Overall, 54 percent of respondents reported using any test to evaluate entering students' need for developmental or remedial reading courses. Two-year institutions were more likely than four-year institutions to report using a reading test (84 versus 36 percent). The Accuplacer Reading Comprehension test was the most commonly reported reading test overall (26 percent), followed by the Compass Reading domain (22 percent), ACT Reading (21 percent), SAT Critical Reading (18 percent), and ASSET Reading Skills (9 percent).


Two-year institutions were most likely to report using the Accuplacer Reading Comprehension exam (49 percent), while equal proportions of four-year institutions reported using Accuplacer Reading Comprehension, ACT Reading, and SAT Critical Reading (12 percent each).


Six percent of institutions overall reported using the Nelson-Denny reading test, and 5 percent reported a test using the optional write-in fields (not shown in the table). As with mathematics tests, the majority of tests reported in the write-in fields were institutionally developed assessments.


Table A-3. Five most commonly reported reading tests, by institution characteristics

Columns, in order: Number of responding institutions in sample, followed by the percent of institutions using each test — Used any reading test; Accuplacer Reading Comprehension; Compass Reading domain; ACT Reading; SAT Critical Reading; ASSET Reading Skills

   All institutions                  96   54   26   22   21   18    9

Institution level
   2-year                            37   84   49   41   35   27   24
   4-year                            59   36   12   10   12   12    #

Institution sector
   Public 2-year                     27   89   44   52   44   33   26
   Private not-for-profit 2-year      8   63   50   13   13   13   25
   Private for-profit 2-year          2  100  100    #    #    #    #
   Public 4-year                     27   41   11   19   19   15    #
   Private not-for-profit 4-year     27   33   11    4    7   11    #
   Private for-profit 4-year          5   20   20    #    #    #    #

Institution selectivity
   2-year institutions               37   84   49   41   35   27   24
   4-year institutions
      Most selective                  1  100    #    #    #    #    #
      Very selective                 14    #    #    #    #    #    #
      Moderately selective           29   52   17   10   21   17    #
      Minimally selective             6   50   17   17    #   17    #
      Open admissions                 9   22   11   22   11   11    #

NOTE: Figures in this table are based on a small number of institutions and are not representative of the broader population. These data should not be used for any purpose other than assessing the results of the pilot test.

# Rounds to zero.

SOURCE: Pilot test, Survey on Evaluating Student Need for Developmental or Remedial Courses at Postsecondary Education Institutions, fall 2010.


Table A-4 shows the mean scores for each of the five most commonly reported reading tests, broken out by institution type. As with mathematics tests, most overall mean scores fall roughly at the midpoint of each test's score range (shown in parentheses next to the test name). The overall mean score of 78 reported for the Compass Reading placement domain is an exception, falling relatively high on the test's score scale of 1–99. Two-year and four-year institutions reported fairly similar scores on average for most tests, with the exception of the SAT Critical Reading test (503 for two-year institutions and 423 for four-year institutions). This difference was partly attributable to a score of 350 reported by one four-year institution.


Table A-4. Mean scores of five most commonly reported reading tests, by institution characteristics

Columns, in order (mean scores): Accuplacer Reading Comprehension (20–120); Compass Reading domain (1–99); ACT Reading (1–36); SAT Critical Reading (200–800); ASSET Reading Skills (23–55)

   All institutions                  76    78    20   470    38

Institution level
   2-year                            76    78    20   503    38
   4-year                            79    78    22   423    NA

Institution sector
   Public 2-year                     79    78    20   503    38
   Private not-for-profit 2-year     67!   NA    NA    NA    38!
   Private for-profit 2-year         74!   NA    NA    NA    NA
   Public 4-year                     76!   79    18   420!   NA
   Private not-for-profit 4-year     81!   NA    32!  427!   NA
   Private for-profit 4-year         NA    NA    NA    NA    NA

Institution selectivity
   2-year institutions               76    78    20   503    38
   4-year institutions
      Most selective                 NA    NA    NA    NA    NA
      Very selective                 NA    NA    NA    NA    NA
      Moderately selective           78    76    23   438    NA
      Minimally selective            NA    NA    NA    NA    NA
      Open admissions                NA    78!   NA    NA    NA

NOTE: Figures in this table are based on a small number of institutions and are not representative of the broader population. These data should not be used for any purpose other than assessing the results of the pilot test.

! Interpret with caution; mean score is based on 2 to 4 cases.

NA = not applicable; too few cases to report.

SOURCE: Pilot test, Survey on Evaluating Student Need for Developmental or Remedial Courses at Postsecondary Education Institutions, fall 2010.












Appendix B

Detailed Tables of Estimates for Survey Data




Table B-1. Number of responding 2-year and 4-year postsecondary institutions in the sample and percent reporting the use of various tests to evaluate whether entering students need developmental or remedial courses in mathematics, by institution characteristics: Fall 2009

Columns, in order: Number of responding institutions in sample, followed by the percent of institutions using each math test — Any test; ACT Math; ACT Science; ACT Composite score; SAT Math; SAT Total score including writing; SAT Total score excluding writing; Accuplacer Arithmetic; Accuplacer Elementary Algebra; Accuplacer College-level Math

   All institutions                  96   79   39    #    2   29    1    2    6   19   10

Institution level
   2-year                            37   89   35    #    3   24    3    3    8   35   19
   4-year                            59   73   41    #    2   32    #    2    5    8    5

Institution sector
   Public 2-year                     27  100   44    #    4   26    4    4    7   37   22
   Private not-for-profit 2-year      8   63   13    #    #   25    #    #   13   25   13
   Private for-profit 2-year          2   50    #    #    #    #    #    #    #   50    #
   Public 4-year                     27   89   48    #    #   33    #    #   11   15    7
   Private not-for-profit 4-year     27   59   37    #    4   33    #    4    #    #    4
   Private for-profit 4-year          5   60   20    #    #   20    #    #    #   20    #

Institution selectivity
   2-year institutions               37   89   35    #    3   24    3    3    8   35   19
   4-year institutions
      Most selective                  1  100  100    #    #  100    #    #    #    #    #
      Very selective                 14   57   21    #    #   29    #    #    7    7    7
      Moderately selective           29   90   55    #    3   38    #    3    3    7    3
      Minimally selective             6   67   33    #    #   33    #    #    #   17    #
      Open admissions                 9   44   22    #    #   11    #    #   11   11   11

See notes at end of table.



Table B-1. Number of responding 2-year and 4-year postsecondary institutions in the sample and percent reporting the use of various tests to evaluate whether entering students need developmental or remedial courses in mathematics, by institution characteristics: Fall 2009—Continued

Columns, in order: Number of responding institutions in sample, followed by the percent of institutions using each math test — Asset Numerical skills; Asset Elementary Algebra; Asset Intermediate Algebra; Asset College Algebra; Asset Geometry; Compass Pre-Algebra; Compass Algebra; Compass College Algebra; Compass Geometry; Compass Trigonometry; Other math placement test

   All institutions                  96    3    4    2    3    #    6   19    7    #    1   23

Institution level
   2-year                            37    8   11    5    8    #   11   35   11    #    #   14
   4-year                            59    #    #    #    #    #    3    8    5    #    2   29

Institution sector
   Public 2-year                     27    7   15    7    7    #   15   44   15    #    #   11
   Private not-for-profit 2-year      8   13    #    #   13    #    #   13    #    #    #   25
   Private for-profit 2-year          2    #    #    #    #    #    #    #    #    #    #    #
   Public 4-year                     27    #    #    #    #    #    4   19    7    #    4   33
   Private not-for-profit 4-year     27    #    #    #    #    #    4    #    4    #    #   26
   Private for-profit 4-year          5    #    #    #    #    #    #    #    #    #    #   20

Institution selectivity
   2-year institutions               37    8   11    5    8    #   11   35   11    #    #   14
   4-year institutions
      Most selective                  1    #    #    #    #    #    #    #    #    #    #    #
      Very selective                 14    #    #    #    #    #    #    7    #    #    #   29
      Moderately selective           29    #    #    #    #    #    3    7    7    #    #   38
      Minimally selective             6    #    #    #    #    #    #   17    #    #    #   17
      Open admissions                 9    #    #    #    #    #   11   11   11    #   11   11

NOTE: Figures in this table are based on a small number of institutions and are not representative of the broader population. These data should not be used for any purpose other than assessing the results of the pilot test.

# Zero percent or rounds to zero.

SOURCE: Pilot test, Survey on Evaluating Student Need for Developmental or Remedial Courses at Postsecondary Education Institutions, fall 2010.



Table B-2. Mean test scores reported by 2-year and 4-year postsecondary institutions for the various tests used to evaluate whether entering students need developmental or remedial courses in mathematics, by institution characteristics: Fall 2009

Columns, in order (mean scores for math tests): ACT Math; ACT Science; ACT Composite score; SAT Math; SAT Total score including writing; SAT Total score excluding writing; Accuplacer Arithmetic; Accuplacer Elementary Algebra; Accuplacer College-Level Math

   All institutions                  20    NA   21!   499    NA  760!   59    74    54

Institution level
   2-year                            21    NA    NA   503    NA    NA   55!   77    48
   4-year                            20    NA    NA   497    NA    NA   64!   64!   74!

Institution sector
   Public 2-year                     21    NA    NA   504    NA    NA   56!   78    49
   Private not-for-profit 2-year     NA    NA    NA   500!   NA    NA    NA   79!    NA
   Private for-profit 2-year         NA    NA    NA    NA    NA    NA    NA    NA    NA
   Public 4-year                     20    NA    NA   470    NA    NA   64!   62!    NA
   Private not-for-profit 4-year     20    NA    NA   524    NA    NA    NA    NA    NA
   Private for-profit 4-year         NA    NA    NA    NA    NA    NA    NA   70!    NA

Institution selectivity
   2-year institutions               21    NA    NA   503    NA    NA   55!   77    48
   4-year institutions
      Most selective                 NA    NA    NA    NA    NA    NA    NA    NA    NA
      Very selective                 22!   NA    NA   533!   NA    NA    NA    NA    NA
      Moderately selective           19    NA    NA   491    NA    NA    NA   58!    NA
      Minimally selective            22!   NA    NA   515!   NA    NA    NA    NA    NA
      Open admissions                21!   NA    NA    NA    NA    NA    NA    NA    NA

See notes at end of table.



Table B-2. Mean test scores reported by 2-year and 4-year postsecondary institutions for the various tests used to evaluate whether entering students need developmental or remedial courses in mathematics, by institution characteristics: Fall 2009—Continued

Columns, in order (mean scores for math tests): Asset Numerical skills; Asset Elementary Algebra; Asset Intermediate Algebra; Asset College Algebra; Asset Geometry; Compass Pre-Algebra; Compass Algebra; Compass College Algebra; Compass Geometry; Compass Trigonometry

   All institutions                  45!   45!   42!   46!   NA   56    54    61    NA   NA

Institution level
   2-year                            45!   45!   42!   46!   NA   56!   55    65!   NA   NA
   4-year                            NA    NA    NA    NA    NA   55!   51    55!   NA   NA

Institution sector
   Public 2-year                     51!   45!   42!   44!   NA   56!   52    65!   NA   NA
   Private not-for-profit 2-year     NA    NA    NA    NA    NA   NA    NA    NA    NA   NA
   Private for-profit 2-year         NA    NA    NA    NA    NA   NA    NA    NA    NA   NA
   Public 4-year                     NA    NA    NA    NA    NA   NA    51    63!   NA   NA
   Private not-for-profit 4-year     NA    NA    NA    NA    NA   NA    NA    NA    NA   NA
   Private for-profit 4-year         NA    NA    NA    NA    NA   NA    NA    NA    NA   NA

Institution selectivity
   2-year institutions               45!   25!   42!   46!   NA   56!   55    65!   NA   NA
   4-year institutions
      Most selective                 NA    NA    NA    NA    NA   NA    NA    NA    NA   NA
      Very selective                 NA    NA    NA    NA    NA   NA    NA    NA    NA   NA
      Moderately selective           NA    NA    NA    NA    NA   NA    50!   48!   NA   NA
      Minimally selective            NA    NA    NA    NA    NA   NA    NA    NA    NA   NA
      Open admissions                NA    NA    NA    NA    NA   NA    NA    NA    NA   NA

NOTE: Figures in this table are based on a small number of institutions and are not representative of the broader population. These data should not be used for any purpose other than assessing the results of the pilot test.

! Interpret with caution; mean score is based on 2 to 4 cases.

NA = not applicable; too few cases to report.

SOURCE: Pilot test, Survey on Evaluating Student Need for Developmental or Remedial Courses at Postsecondary Education Institutions, fall 2010.





Table B-3. Number of responding 2-year and 4-year postsecondary institutions and percent reporting the use of other criteria to evaluate whether entering students need developmental or remedial courses in mathematics, by institution characteristics: Fall 2009

Columns, in order: Number of responding institutions in sample, followed by the percent of institutions reporting each criterion — Any other criteria; High school graduation tests or end-of-course tests; High school grades (including GPA); Highest school math course completed; Advanced Placement or International Baccalaureate scores; Faculty recommendation; Other

   All institutions                  96   38    3   11   13   23    4    6

Institution level
   2-year                            37   41    3   14    8   27    8    8
   4-year                            59   36    3   10   15   20    2    5

Institution sector
   Public 2-year                     27   41    4    7    7   26    4   11
   Private not-for-profit 2-year      8   50    #   38   13   38   25    #
   Private for-profit 2-year          2    #    #    #    #    #    #    #
   Public 4-year                     27   37    7    7   19   22    4    4
   Private not-for-profit 4-year     27   37    #   15   15   19    #    7
   Private for-profit 4-year          5   20    #    #    #   20    #    #

Institution selectivity
   2-year institutions               37   41    3   14    8   27    8    8
   4-year institutions
      Most selective                  1  100    #    #  100    #    #    #
      Very selective                 14   36    7    #   21   21    #   14
      Moderately selective           29   38    #   17   14   21    #    3
      Minimally selective             6   17    #    #    #   17    #    #
      Open admissions                 9   33   11   11   11   22   11    #

NOTE: Figures in this table are based on a small number of institutions and are not representative of the broader population. These data should not be used for any purpose other than assessing the results of the pilot test.

# Zero percent or rounds to zero.

SOURCE: Pilot test, Survey on Evaluating Student Need for Developmental or Remedial Courses at Postsecondary Education Institutions, fall 2010.





Table B-4. Number of responding 2-year and 4-year postsecondary institutions and percent reporting the use of various tests to evaluate whether entering students need developmental or remedial courses in reading, by institution characteristics: Fall 2009

Columns, in order: Number of responding institutions in sample, followed by the percent of institutions using each reading test — Any test; ACT Reading; ACT English; ACT Writing; ACT Composite score; SAT Critical Reading; SAT Writing; SAT Total score including Writing; SAT Total score excluding Writing

   All institutions                  96   54   21    7    #    3   18    1    1    5

Institution level
   2-year                            37   84   35   11    #    8   27    3    3    5
   4-year                            59   36   12    5    #    #   12    #    #    5

Institution sector
   Public 2-year                     27   89   44   15    #   11   33    4    #    7
   Private not-for-profit 2-year      8   63   13    #    #    #   13    #   13    #
   Private for-profit 2-year          2  100    #    #    #    #    #    #    #    #
   Public 4-year                     27   41   19    7    #    #   15    #    #    7
   Private not-for-profit 4-year     27   33    7    4    #    #   11    #    #    4
   Private for-profit 4-year          5   20    #    #    #    #    #    #    #    #

Institution selectivity
   2-year institutions               37   84   35   11    #    8   27    3    3    5
   4-year institutions
      Most selective                  1  100    #    #    #    #    #    #    #    #
      Very selective                 14    #    #    #    #    #    #    #    #    #
      Moderately selective           29   52   21    7    #    #   17    #    #    7
      Minimally selective             6   50    #    #    #    #   17    #    #    #
      Open admissions                 9   22   11   11    #    #   11    #    #   11

See notes at end of table.



Table B-4. Number of responding 2-year and 4-year postsecondary institutions and percent reporting the use of various tests to evaluate whether entering students need developmental or remedial courses in reading, by institution characteristics: Fall 2009—Continued

Columns, in order: Number of responding institutions in sample, followed by the percent of institutions using each reading test — Accuplacer Reading Comprehension; Accuplacer Sentence Skills; Accuplacer WritePlacer; Asset Reading Skills; Asset Writing Skills; Compass Reading; Compass Writing Skills; Compass Writing e-Write (2-8); Compass Writing e-Write (2-12); Nelson-Denny Reading; Other reading placement test

   All institutions                  96   26    6    3    9    2   22    4    #    #    6    5

Institution level
   2-year                            37   49   11    3   24    5   41    5    #    #   11    8
   4-year                            59   12    3    3    #    #   10    3    #    #    3    3

Institution sector
   Public 2-year                     27   44    #    #   26    4   52    7    #    #   15   11
   Private not-for-profit 2-year      8   50   38    #   25   13   13    #    #    #    #    #
   Private for-profit 2-year          2  100   50   50    #    #    #    #    #    #    #    #
   Public 4-year                     27   11    4    #    #    #   19    4    #    #    #    7
   Private not-for-profit 4-year     27   11    4    4    #    #    4    4    #    #    7    #
   Private for-profit 4-year          5   20    #   20    #    #    #    #    #    #    #    #

Institution selectivity
   2-year institutions               37   49   11    3   24    5   41    5    #    #   11    8
   4-year institutions
      Most selective                  1    #    #    #    #    #    #    #    #    #  100    #
      Very selective                 14    #    #    #    #    #    #    #    #    #    #    #
      Moderately selective           29   17    3    3    #    #   10    7    #    #    3    3
      Minimally selective             6   17    #   17    #    #   17    #    #    #    #    #
      Open admissions                 9   11   11    #    #    #   22    #    #    #    #   11

NOTE: Figures in this table are based on a small number of institutions and are not representative of the broader population. These data should not be used for any purpose other than assessing the results of the pilot test.

# Zero percent or rounds to zero.

SOURCE: Pilot test, Survey on Evaluating Student Need for Developmental or Remedial Courses at Postsecondary Education Institutions, fall 2010.



Table B-5. Mean test scores reported by 2-year and 4-year postsecondary institutions for the various tests used to evaluate whether entering students need developmental or remedial courses in reading, by institution characteristics: Fall 2009

Columns, in order (mean scores for reading tests): ACT Reading; ACT English; ACT Writing; ACT Composite score; SAT Critical Reading; SAT Writing; SAT Total score including Writing; SAT Total score excluding Writing

   All institutions                  20    18    NA   22!   470    NA   NA     846

Institution level
   2-year                            20    19!   NA   22!   503    NA   NA   1,070!
   4-year                            22    17!   NA    NA   423    NA   NA     697!

Institution sector
   Public 2-year                     20    19!   NA   22!   503    NA   NA   1,070!
   Private not-for-profit 2-year     NA    NA    NA    NA    NA    NA   NA       NA
   Private for-profit 2-year         NA    NA    NA    NA    NA    NA   NA       NA
   Public 4-year                     18    17!   NA    NA   420!   NA   NA     675!
   Private not-for-profit 4-year     32!   NA    NA    NA   427!   NA   NA       NA
   Private for-profit 4-year         NA    NA    NA    NA    NA    NA   NA       NA

Institution selectivity
   2-year institutions               20    19!   NA   22!   503    NA   NA   1,070!
   4-year institutions
      Most selective                 NA    NA    NA    NA    NA    NA   NA       NA
      Very selective                 NA    NA    NA    NA    NA    NA   NA       NA
      Moderately selective           23    17!   NA    NA   438    NA   NA     821!
      Minimally selective            NA    NA    NA    NA    NA    NA   NA       NA
      Open admissions                NA    NA    NA    NA    NA    NA   NA       NA

See notes at end of table.



Table B-5. Mean test scores reported by 2-year and 4-year postsecondary institutions for the various tests used to evaluate whether entering students need developmental or remedial courses in reading, by institution characteristics: Fall 2009—Continued

Columns, in order (mean scores for reading tests): Accuplacer Reading Comprehension; Accuplacer Sentence Skills; Accuplacer WritePlacer; Asset Reading Skills; Asset Writing Skills; Compass Reading; Compass Writing Skills; Compass Writing e-Write (2-8); Compass Writing e-Write (2-12); Nelson-Denny Reading

   All institutions                  76    74    5!   38    39!   78   60!   NA   NA   132!

Institution level
   2-year                            76    81!   NA   38    39!   78   NA    NA   NA   132!
   4-year                            79    60!   5!   NA    NA    78   52!   NA   NA      +

Institution sector
   Public 2-year                     79    NA    NA   38    NA    78   76    NA   NA   132!
   Private not-for-profit 2-year     67!   81!   NA   38!   NA    NA   NA    NA   NA     NA
   Private for-profit 2-year         74!   NA    NA   NA    NA    NA   NA    NA   NA     NA
   Public 4-year                     76!   NA    NA   NA    NA    79   NA    NA   NA     NA
   Private not-for-profit 4-year     81!   NA    NA   NA    NA    NA   NA    NA   NA     NA
   Private for-profit 4-year         NA    NA    NA   NA    NA    NA   NA    NA   NA     NA

Institution selectivity
   2-year institutions               76    81!   NA   38    39!   78   NA    NA   NA   132!
   4-year institutions
      Most selective                 NA    NA    NA   NA    NA    NA   NA    NA   NA     NA
      Very selective                 NA    NA    NA   NA    NA    NA   NA    NA   NA     NA
      Moderately selective           78    NA    5!   NA    NA    76   52!   NA   NA     NA
      Minimally selective            NA    NA    4!   NA    NA    NA   NA    NA   NA     NA
      Open admissions                NA    NA    NA   NA    NA    78!  NA    NA   NA     NA

NOTE: Figures in this table are based on a small number of institutions and are not representative of the broader population. These data should not be used for any purpose other than assessing the results of the pilot test.

! Interpret with caution; mean score is based on 2 to 4 cases.

NA = not applicable; too few scores to report.

+ Test was reported by three 4-year institutions but no test scores were provided.

SOURCE: Pilot test, Survey on Evaluating Student Need for Developmental or Remedial Courses at Postsecondary Education Institutions, fall 2010.



Table B-6. Number of responding 2-year and 4-year postsecondary institutions and percent reporting the use of other criteria to evaluate whether entering students need developmental or remedial courses in reading, by institution characteristics: Fall 2009

Columns, in order: Number of responding institutions in sample, followed by the percent of institutions reporting each criterion — Any other criteria; High school graduation tests or end-of-course tests; High school grades (including GPA); Highest school reading course completed; Advanced Placement or International Baccalaureate scores; Faculty recommendation; Other

   All institutions                  96   19    4    7    3   10    3    3

Institution level
   2-year                            37   38    8   14    8   24    5    8
   4-year                            59    7    2    3    #    2    2    #

Institution sector
   Public 2-year                     27   37    7    7    4   22    4    7
   Private not-for-profit 2-year      8   50   13   38   25   38   13   13
   Private for-profit 2-year          2    #    #    #    #    #    #    #
   Public 4-year                     27   11    4    4    #    4    4    #
   Private not-for-profit 4-year     27    4    #    4    #    #    #    #
   Private for-profit 4-year          5    #    #    #    #    #    #    #

Institution selectivity
   2-year institutions               37   38    8   14    8   24    5    8
   4-year institutions
      Most selective                  1    #    #    #    #    #    #    #
      Very selective                 14    #    #    #    #    #    #    #
      Moderately selective           29    7    #    7    #    #    #    #
      Minimally selective             6    #    #    #    #    #    #    #
      Open admissions                 9   22   11    #    #   11   11    #

NOTE: Figures in this table are based on a small number of institutions and are not representative of the broader population. These data should not be used for any purpose other than assessing the results of the pilot test.

# Rounds to zero.

SOURCE: Pilot test, Survey on Evaluating Student Need for Developmental or Remedial Courses at Postsecondary Education Institutions, fall 2010.


1 The questionnaire included two questions intended to capture “other” means of reading and mathematics assessment besides tests (e.g., high school grades). Tables of reporting frequencies for these items can be found in Appendix B.

2 See Appendix B for tables displaying frequencies and means for all tests.
