Request for OMB Clearance
Supplemental Information on Pilot Test
Evaluating Student Need for Developmental or Remedial Courses
at Postsecondary Education Institutions
(Formerly titled Survey of Placement Tests and Cut-Scores in Higher Education Institutions)
August 2010
1. Overview of pilot test
1.1 Rationale for pilot test
A variety of potential challenges for a full-scale administration of the National Assessment Governing Board (NAGB) survey were revealed in the process of survey development. These challenges included issues related to questionnaire content as well as potential hurdles for survey administration. Some of these issues persisted as possible threats to a full data collection even after multiple revisions were made to the survey questions and instructions. Given the extensive and highly variable nature of the problems encountered during survey development, it is prudent to conduct a pilot test to explore whether the changes made to the survey instruments have reduced or eliminated these challenges or whether they persist. The pilot test will be based on a random sample of 120 institutions selected with the same criteria as proposed for the full-scale survey, using the same survey instruments and administration procedures as would be used in a full-scale survey administration.
The current version of the questionnaire reflects lessons learned from a survey development process that captured feedback from a wide range of individuals, including potential survey respondents, survey design methodologists, and experts on higher education institutions. Although the questionnaire was improved significantly as a result of these activities, several key issues related to survey content will be addressed in the pilot test. Survey development work also raised important questions about survey administration methodology, including procedures used to identify the appropriate survey respondent. The pilot test will explore these issues with a diverse and sufficiently large group of institutions that are similar to the types of institutions that will be asked to participate in the full-scale study. This will allow for additional refinement of the questionnaire and data collection procedures, improving the quality of data captured in the full-scale survey administration.
1.2 Pilot test objectives
The objectives of the pilot test center on examining, with a relatively large and diverse sample of institutions, the survey content and survey administration issues identified during survey development; assessment of these issues will be the key focus of the pilot test. The varied landscape of testing policies and procedures found during survey development suggests that the pilot test will be a valuable opportunity to more fully understand how higher education institutions evaluate student need for developmental or remedial courses prior to the full-scale survey administration. Building a strong understanding of these policies and procedures will be instrumental in developing and deploying survey instruments that will yield high-quality data in the full-scale study. Descriptions of the pilot test objectives are provided below, along with discussion of how each objective will be achieved through analysis of pilot test data. Descriptions of each source of pilot test data are provided in section 1.3.
Objective 1: Evaluate strategy of different questionnaires for two-year and four-year institutions
Several participants in the survey development process reported that their institutions use different criteria to evaluate students’ need for developmental courses depending on the program of enrollment. Furthermore, it appeared that this process could differ depending on institution type. For example, a four-year institution may use one set of scores to evaluate students enrolled in an engineering program and another set for students enrolled in a history program. At the two-year level, some institutions use one set of scores for students enrolled in a program designed to transfer to a four-year institution and another set for students enrolled in career-oriented technical programs. These findings necessitated the development of separate questionnaires for two-year and four-year institutions, each containing its own set of reporting instructions. Two-year institutions are instructed to report based on tests used for students enrolled in a program designed to transfer to a four-year institution, and four-year institutions are instructed to report based on tests used for students enrolled in a liberal arts or sciences program.
A pilot test with a large and varied sample of two-year and four-year institutions would help validate the approach of dual questionnaires and the different sets of instructions. Westat will use several data sources in evaluating the dual questionnaire strategy.¹ First, we will review logs of respondent questions to the Helpdesk for problems related to the reporting of tests used for the indicated type of academic program. Second, we will examine survey comment field responses for questions 2 and 6 for problems related to the reporting of tests for the indicated type of academic program. Third, follow-up interviews with selected survey respondents will include discussion of the instruction to report tests used for a particular type of academic program. Westat will analyze interview responses to determine whether the approach of dual questionnaires with different sets of instructions for two-year and four-year institutions is effective.
Objective 2: Evaluate survey instructions intended to address variable scoring systems
Findings from survey development suggested that use of test scores in evaluating students’ need for developmental courses varied considerably from institution to institution. While some institutions take a fairly straightforward approach with a single score below which a student is deemed in need of remediation, others use variable scoring systems in which the cutoff score can vary depending on other factors. Some institutions, for example, reported using one score to recommend student enrollment in remediation and another score to require it. Since the Governing Board is interested in identifying single score points below which students are in need of remediation, several modifications were made to the questionnaire in response to these findings. For example, bulleted notes were added to questions 2 and 6 that instruct respondents to report the highest score used if a variable score system is used.
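To make this instruction concrete, the short sketch below applies the highest-score rule to a hypothetical two-threshold policy of the kind described above; the policy structure and score values are illustrative assumptions, not data from any institution.

    # Illustrative only: a hypothetical variable scoring policy in which
    # remediation is recommended below one score and required below a lower one.
    policy = {
        "recommend_remediation_below": 45,  # hypothetical score
        "require_remediation_below": 38,    # hypothetical score
    }

    # Under the instruction added to questions 2 and 6, the institution
    # reports the highest score used in its placement policy.
    reported_cut_score = max(policy.values())
    print(reported_cut_score)  # 45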
The pilot test will explore whether the current questionnaire instructions adequately direct institutions to report the highest score or whether additional instructions are needed. Westat will use several data sources in making this determination. As with Objective 1 above, Westat will review logs of contacts with respondents and survey comment fields for issues related to institutions’ use of variable scoring systems. For example, communication logs and comment field data will be examined for instances where respondents reported confusion about how to report if more than one score is used for a particular test. The use of variable scoring systems will also be addressed in follow-up interviews with a sample of respondents. Interviews will focus on respondents’ interpretation of the instructions on reporting the highest score and investigate whether their interpretation is accurate and whether there are other types of variable score policies that should be addressed through survey instructions.
Objective 3: Assess completeness of test lists
The current questionnaire allows respondents to report scores for a range of tests, including subtests of the SAT and ACT admissions tests and placement tests such as ACCUPLACER and COMPASS; it also provides write-in fields for respondents to report tests developed internally by the institution or within a state system or other consortium of institutions. These lists were developed through consultation with institutional representatives and content experts, and the pilot test will be an important source of additional information on the adequacy and usefulness of the lists for respondents. Pilot test data for questions 2 and 6 will be analyzed to determine whether the lists should be restructured, reduced in scope, or expanded. For example, data from write-in fields will be reviewed to determine whether additional tests should be added. Selected respondents that report tests not included on the lists will be asked to participate in follow-up interviews to obtain more information on the tests and how they are used.
Objective 4: Evaluate procedures to identify the appropriate survey respondent
Survey development revealed potential hurdles related to identifying the correct survey respondent. For example, communication with institutional representatives during survey development suggested that the appropriate individual to complete the survey could vary significantly from institution to institution. Potential survey respondents could be located in offices of student services, admissions offices, offices of academic deans, or within academic departments. In some cases, multiple appropriate respondents could exist within an institution, as in cases where reading and mathematics testing is handled separately by different academic departments. The pilot test will use a strategy of sending survey materials to the office of the institution’s president or chancellor, followed by a phone call from a Westat interviewer to the president or chancellor’s office to confirm receipt of the survey materials and (1) identify where the materials were sent for completion or (2) provide assistance in identifying the appropriate respondent as needed. Interviewers will summarize the process of identifying the appropriate respondent for each institution using a standardized form (described in more detail in section 1.3 below).
To assess the proposed method of identifying the appropriate survey respondent, project staff will review all interviewer logs describing the respondent identification process. This review will focus on common problems encountered and solutions that could be used to make the process of identifying appropriate survey respondents more efficient in the full-scale survey administration. For example, the review of interviewer logs may result in development of additional instructions on forwarding the materials to the appropriate respondent or how to handle situations in which multiple individuals would be the appropriate survey respondent. Interviewer logs will be examined by institution type (e.g., two-year and four-year institutions) to determine if instructions or other procedures for identifying the appropriate survey respondent should be tailored to the type of institution.
1.3 Pilot test data sources
Westat will use a number of different data sources and monitoring tools to capture information on the pilot test. As indicated in the previous section, most pilot test data sources will be employed in addressing multiple study objectives. This section provides additional information on the sources of pilot test data.
Respondent communication logs
Communication between respondents and Westat will be captured using standardized communication logs. Two types of communication logs will be used. Westat interviewers will use the Interviewer Problem Sheet contained in the interviewer training manual to record problems and questions reported by respondents (Appendix A contains the entire training manual, and the Interviewer Problem Sheet can be found in Attachment 9 of the manual). The second log will be used to record questions or issues reported to the survey help desk, staffed by the Westat survey manager (see survey manager contact log in Appendix B).
Used for Objectives 1 and 2
Interviewer logs of the respondent identification process
As described in Objective 4 in the previous section, a key focus of the pilot test will be the steps needed to identify the appropriate survey respondent. Westat interviewers will work with president’s office contacts on a case-by-case basis to identify a survey respondent and troubleshoot difficult cases, including institutions for which more than one individual could serve as a survey respondent. (Section 3.3 of the interviewer training manual, provided in Appendix A, provides detailed instructions to assist interviewers in identifying the appropriate person to complete the survey.)
Interviewers will document the process of identifying the best survey respondent for each pilot test institution on Attachment 10 of the interviewer training manual (see Appendix A). We will instruct interviewers to provide a complete description of the respondent identification process, including whether the contact in the president’s office was able to quickly identify an appropriate respondent and whether any coordination among multiple respondents was required.
Used for Objective 4
Survey data and comment field responses
We will analyze data collected through the survey website to address several pilot test objectives, including responses to individual survey items (e.g., tests and scores reported) as well as responses provided in comment fields. As discussed in the previous section, survey data will be analyzed to assess the appropriateness of the lists of tests and to identify respondent errors, such as misinterpretation of instructions or reporting of out-of-range values. The comment fields will be an especially useful resource in assessing pilot test results and their implications for the full-scale study. For example, respondents may use the comment fields to describe evaluation policies that do not allow for the reporting of a single test score as requested on the survey.
Used for Objectives 1, 2, and 3
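As one illustration of the kind of automated check described above, the minimal sketch below flags reported cut scores that fall outside a test's valid score range. The response layout and the score ranges shown are assumptions for illustration only and would need to be verified against the actual survey data structure and current test documentation.

    # Minimal sketch of an out-of-range check on reported cut scores.
    # Score ranges are assumptions; verify against current test documentation.
    VALID_RANGES = {
        "SAT_math": (200, 800),
        "ACT_math": (1, 36),
        "ACCUPLACER_arithmetic": (20, 120),
        "COMPASS_prealgebra": (1, 99),
    }

    def flag_out_of_range(responses):
        """Return (institution_id, test, score) triples whose reported
        cut score falls outside the test's valid range."""
        flagged = []
        for resp in responses:
            bounds = VALID_RANGES.get(resp["test"])
            if bounds is None:
                continue  # write-in or unlisted test; review separately
            low, high = bounds
            if not low <= resp["score"] <= high:
                flagged.append((resp["institution_id"], resp["test"], resp["score"]))
        return flagged

    # Example: a reported SAT mathematics cut score of 850 would be flagged.
    sample = [
        {"institution_id": "A001", "test": "SAT_math", "score": 850},
        {"institution_id": "A002", "test": "ACT_math", "score": 19},
    ]
    print(flag_out_of_range(sample))  # [('A001', 'SAT_math', 850)]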
Follow-up interviews
Westat staff will review pilot test survey responses and respondent contact logs completed by interviewers and the survey manager to identify approximately 20 institutions (or 20 percent of responding pilot test institutions) that will be asked to participate in a brief follow-up “debriefing” interview. Institutions will be selected based on the relevance of their responses to the pilot test objectives, including new issues or problems that arise from pilot test data. These interviews will focus on respondents’ general reactions to the survey, the burden associated with responding, and feedback on individual survey questions, especially those that appear problematic based on review of preliminary data. Descriptions of data problems or confusing elements of the questionnaire will be sought, and potential solutions and modifications will be discussed. This rich qualitative information will be a critical resource in developing a complete understanding of potential challenges facing the full-scale survey administration and in improving the content of the questionnaire and survey administration procedures. It is important to note that follow-up interviews will add value above and beyond logs of communication with respondents, as the interview format allows for more focused and strategic communication designed to address key study issues and problems.
Follow-up interviews will begin after approximately 50 percent of cases have responded and initial survey data have been reviewed for potential problems. This approach will allow interviews to proceed at the same time as survey data collection, saving time at the end of the pilot test for analysis and reporting. We believe that 20 is a reasonable number of institutions to contact for several reasons: it will enable us to talk to representatives of all institution types (public, private not-for-profit, private for-profit, two-year, four-year); we can explore problems that occurred in data collection across many different types of institutions and try to find resolutions; and the number is not so great that the qualitative analysis would seriously delay the full-scale survey. The Westat survey manager will conduct the interviews using an interview protocol designed to address data problems identified in the initial responses. A preliminary draft of the protocol to be used in follow-up interviews is included in Appendix C; it will be revised based on the types of problems observed in preliminary pilot test responses.
Used for Objectives 1, 2, and 3
2. Pilot test sample
The survey development process revealed that institutional policies for evaluating student need for developmental or remedial coursework vary significantly from institution to institution and that policies are highly variable across institution types. For example, two-year institutions often reported using single scores from placement tests such as ACCUPLACER to make placement decisions, while some four-year institutions reported more complex processes involving multiple scores and other non-test criteria, such as high school grades. Even within institution types, the recurring issue in survey development was the variable nature of placement policies. For example, some four-year institutions use a single set of evaluation standards for all new students, while others use different standards depending on the academic program of enrollment.
These findings suggested that a pilot test with a diverse sample of institutions would be a prudent means to more fully understand the issues that could affect the full-scale survey administration. Based on the high level of policy variation found in survey development, a pilot test with a sample of 120 two-year and four-year public, private not-for-profit, and private for-profit institutions was recommended to explore issues found in survey development with a range of institution types. These institution types are commonly used in selecting nationally representative survey samples of higher education institutions, and they are especially appropriate to this study given the policy variation seen across institution types in survey development. A sample of 120 institutions will yield approximately 100 completed surveys, assuming an 85 percent response rate. The goal of 100 responses is driven by a need to obtain survey data from a diverse range of institutions, with an adequate number of responses within the institution types that are a key focus of the study. Westat statisticians often use 100 responses as a general rule of thumb for the minimum pilot test sample needed to achieve adequate variability in respondent types across the entire sample.
Table 1 below displays the proposed distribution of institution types within the pilot test sample. The allocation of institutions among the six subgroups was chosen to give greater emphasis to two-year and four-year public institutions and four-year private not-for-profit institutions, where the greatest variation in testing policies and use of test scores is expected to occur. A smaller number of private not-for-profit two-year institutions is sampled due to the small overall number of these institutions in the sampling frame (92). Survey development work indicated that for-profit institutions tend to use internally developed tests to evaluate student need for remediation, rather than the more common “off-the-shelf” tests such as ACCUPLACER that are of primary interest to NAGB; for this reason, fewer for-profit institutions are allocated to the sample. Assuming a response rate of 85 percent, this sample design should yield about 26 responding institutions in each of the main subgroups of interest and at least nine responses in each of the remaining subgroups.
Table 1. Proposed target sample sizes for the pilot test, by control and level
Subgroup                        Number sampled    Respondents*
Total sample                               120             102
Public                                      60              51
    4-year                                  30              26
    2-year                                  30              26
Private, not-for-profit                     40              34
    4-year                                  30              26
    2-year                                  10               9
Private, for-profit                         20              17
    4-year                                  10               9
    2-year                                  10               9
* Assumes 85 percent response rate.
Note: Details may not sum to total due to rounding.
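The respondent counts in Table 1 follow directly from the 85 percent response rate assumption. The short sketch below reproduces the arithmetic, rounding half up as the table does; the subgroup labels are ours and the calculation is purely illustrative.

    # Expected respondents per subgroup at an assumed 85 percent response rate.
    RESPONSE_RATE = 0.85
    allocation = {
        "Public 4-year": 30,
        "Public 2-year": 30,
        "Private not-for-profit 4-year": 30,
        "Private not-for-profit 2-year": 10,
        "Private for-profit 4-year": 10,
        "Private for-profit 2-year": 10,
    }

    for subgroup, n in allocation.items():
        expected = int(n * RESPONSE_RATE + 0.5)  # round half up, as in Table 1
        print(f"{subgroup}: {n} sampled -> ~{expected} respondents")

    total = sum(allocation.values())  # 120
    print(f"Total: {total} sampled -> ~{int(total * RESPONSE_RATE + 0.5)} respondents")  # ~102

    # Note: the rounded subgroup counts (3 x 26 + 3 x 9 = 105) exceed the
    # rounded total (102), which is why the table carries a rounding note.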
The pilot test sample will be selected using a stratified sample design, with institutions selected at random within the six major subgroups described above. To ensure that the resulting sample includes a broad range of institution types, the frame of institutions within the major subgroups will be implicitly substratified by highest level of offering and enrollment size class prior to sampling. The implicit substratification will be accomplished by selecting a systematic random sample from the frame after sorting it on these characteristics. The resulting stratified sample is expected to capture the variation seen in survey development and provide greater insight into potential issues that could face the full-scale survey administration. These strata are commonly employed in selecting samples of higher education institutions and will also be used in selecting the sample for the full-scale survey.
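The sketch below illustrates one way the implicit substratification could be implemented, under the assumptions stated above: within each major subgroup, the frame is sorted by highest level of offering and enrollment size class, and a systematic random sample is drawn from the sorted list. The field names and helper functions are hypothetical and do not reflect the actual sampling program.

    import random

    def systematic_sample(sorted_frame, n, rng):
        """Draw a systematic random sample of n records from a sorted frame."""
        assert n <= len(sorted_frame), "target exceeds stratum size"
        interval = len(sorted_frame) / n      # sampling interval
        start = rng.uniform(0, interval)      # random start in the first interval
        return [sorted_frame[int(start + k * interval)] for k in range(n)]

    def select_pilot_sample(frame, allocation, seed=None):
        """frame: list of dicts with hypothetical keys 'subgroup',
        'offering_level', and 'enrollment_class'.
        allocation: dict mapping each subgroup to its target sample size."""
        rng = random.Random(seed)
        sample = []
        for subgroup, n in allocation.items():
            stratum = [inst for inst in frame if inst["subgroup"] == subgroup]
            # Sorting induces the implicit substratification: a systematic
            # draw from the sorted list spreads the sample across offering
            # levels and enrollment size classes.
            stratum.sort(key=lambda r: (r["offering_level"], r["enrollment_class"]))
            sample.extend(systematic_sample(stratum, n, rng))
        return sample

Because the random start and fixed interval walk through the entire sorted list, each offering level and size class is represented roughly in proportion to its share of the stratum, which is the purpose of the implicit substratification.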
3. Analysis of pilot test data and transition to full-scale survey
As discussed in sections 1.2 and 1.3 above, Westat will use a number of data sources to analyze pilot test results and inform development of plans, procedures, and materials for the full-scale survey administration. The analysis will focus both on issues related to survey content and on issues related to administration of the questionnaire. For example, logs of communication with respondents will be reviewed for comments or problems related to the pilot test approach of fielding different questionnaires for two-year and four-year institutions. Westat will also explore this issue through follow-up interviews with selected survey respondents. Pilot test data, including responses in comment fields, will be examined in aggregate and by institution type, with particular attention paid to responses that suggest potential data problems for the full-scale survey. For example, responses to questions 2 and 6 will be analyzed for errors or comments pointing to the presence of variable score policies that prevent reporting of a single test score. Respondents with problematic data on these survey items will be represented in follow-up interviews to obtain a more detailed picture of the issues.
Westat will review interviewer logs to assess the performance of survey administration procedures. Interviewer summaries of how appropriate respondents were identified for each institution will be especially useful in assessing this key administration procedure. Interviewer logs will be analyzed in aggregate as well as across institution types to determine if special survey administration procedures need to be developed for a particular group of institutions. The logs will also assist in providing more thorough instructions and guidance for president’s office contacts for the full-scale survey administration.
Upon completion of analysis, Westat will produce a pilot test report laying out key findings on survey content and questionnaire administration procedures, as well as recommendations to NAGB about the full-scale survey. Westat and NAGB will confer on the pilot test findings and recommendations for the full-scale survey administration, after which Westat will produce a final decision memo. We anticipate that these activities will be completed 4 to 5 weeks after the pilot test data collection has ended.
Appendices provided as attachments
Appendix A: Survey interviewer training manual
Appendix B: Survey manager contact log
Appendix C: Draft protocol for follow-up interviews with 15-20 pilot test respondents
¹ Individual data sources are described in detail in section 1.3.