
2011-12 National Postsecondary Student Aid Study (NPSAS:12) Full Scale Lists and Contacting

Passback to OMB Comments

OMB: 1850-0666


Memorandum United States Department of Education

Institute of Education Sciences

National Center for Education Statistics



DATE: January 9, 2012

TO: Shelly Martinez, OMB

THROUGH: Kashka Kubzdela, Office of the Deputy Commissioner, NCES

FROM: Tracy Hunt White, NPSAS:12 Project Officer, NCES

SUBJECT: Responses to OMB Passback for the 2011-12 National Postsecondary Student Aid Study (NPSAS:12) Full-Scale Student Data Collection (OMB# 1850-0666 v.10)



Provided here are responses to OMB's passback regarding the 2011-12 National Postsecondary Student Aid Study (NPSAS:12) Full-Scale Student Data Collection clearance package (OMB# 1850-0666 v.10).


1. The memo summarizing changes for full scale indicates that "To contain costs, the process of matching potential FTBs to administrative records....will be streamlined to target only students over 18 enrolled in public 2-year institutions or in any of the for-profit institutions." Supporting Statement B (page 34) suggests that this limitation is only for the step of comparing to the National Clearinghouse versus all administrative records. Can you clarify which is correct? And what are the practical implications of this change?



Revisions to both the memorandum and to Supporting Statement B have been made and highlighted in the accompanying files using tracked changes. Prior to full-scale sampling, all potential first-time beginning (FTB) students will be matched to the National Student Loan Data System (NSLDS), as was done in the field test, and to the Central Processing System (CPS), which was also matched in the field test but on a limited scale, as a pilot to determine the usefulness of the aid application data for this purpose. In the field test, any remaining potential FTBs were then sent to the National Student Clearinghouse (NSC) for matching.

Because of the cost per case of NSC matches, it will not be feasible to match all potential FTBs to the NSC during full-scale data collection. Instead, a subset of the remaining students, specifically those in sectors that have had high false-positive rates in prior NPSAS rounds, will be sent to the NSC. At this time, we anticipate sending potential FTBs over the age of 18 in the public 2-year and for-profit sectors, because those sectors had high false-positive rates in the field test and in NPSAS:04 and have large full-scale sample sizes. Additional targeting and subsetting may be needed depending on budget.
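To make the targeting rule concrete, the sketch below illustrates the kind of subsetting described above. It is only a rough illustration under assumed inputs: the record fields (birth_date, sector) and sector labels are hypothetical and are not the actual NPSAS:12 sampling system variables.

```python
from datetime import date

# Hypothetical sector labels; the actual NPSAS sector codes differ.
NSC_TARGET_SECTORS = {"public 2-year", "for-profit"}

def select_for_nsc_matching(remaining_ftbs, as_of=date(2012, 1, 1)):
    """Return the subset of remaining potential FTBs to send to the NSC:
    students over age 18 who are enrolled in the targeted sectors."""
    selected = []
    for student in remaining_ftbs:
        age_years = (as_of - student["birth_date"]).days / 365.25
        if age_years > 18 and student["sector"] in NSC_TARGET_SECTORS:
            selected.append(student)
    return selected
```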

We expect that there could be an increase in the observed false-positive rates as a result of this change, but only in those sectors that have historically experienced relatively low false-positive rates; the impact of the reduced matching will therefore be limited.



2. We appreciate the efforts to use the lessons learned from the propensity modeling work in the field test during full scale. However, it seems rather hastily inserted and we aren't sure what, if anything, we will learn and whether there could be any unintended consequences. We also remind NCES that while the stated purpose of the enhanced procedures is to address concerns about bias, the targeting is based on response rates. Given the limitations of the model's predictive power, we can envision a scenario where bringing in the higher propensity students at these lower propensity schools could actually increase bias in estimates. One idea might be to roll out the 'enhanced' field procedures for only half of the low propensity schools, for example, so that comparisons can be made later.


NPSAS key estimates are inherently less subject to nonresponse bias than those of other studies. At its core, NPSAS is a study of who is participating in postsecondary education, where they are participating, and how that participation is being financed. Virtually all of the study's key estimates, then, are demographic, institutional, or financial, and with few exceptions those estimates are derived from administrative data sources. Moreover, the use of the study member definition rather than an interview-response-based definition, which has been the practice for the last two NPSAS rounds, further mitigates nonresponse bias in key estimates. Because almost all sampled NPSAS students become study members, there is little bias at the unit level. This further protects against bias in key estimates, because key estimates are derived for virtually all units (study members) using sources of data other than the student interview. For NPSAS, then, the problem is specifically item-level nonresponse bias for interview-only items resulting from unit nonresponse.

We recognize that there is substantial risk to both NPSAS:12 and its longitudinal follow-up if interview response is low in certain sectors. Because of increased interest in both for-profit and sub-baccalaureate education, NPSAS:12 has increased sample in these sectors. Yet these are the same sectors that have traditionally been the lowest responding. This may be a function of the students themselves, but it is most certainly influenced by an idiosyncratic component of our data collection: because these institutions often do not follow an academic-year calendar, we receive most of their enrollment lists when data collection is nearly two-thirds complete.

Irrespective of the cause, we know that enhanced efforts like those proposed in the clearance package will be required to garner students' participation. Failure to do so may not only seriously lower the precision of estimates we are able to make using NPSAS data but may also have consequences for our longitudinal follow-up studies: if first-time beginning students in these sectors cannot be induced to respond, we lose the ability for years to come to understand persistence outcomes via the Beginning Postsecondary Students Longitudinal Study (BPS). This apparent variance-bias trade-off must be carefully considered.

NCES is committed to improving quality and advancing the state of the art in sample surveys; our work to robustly implement responsive design (RD) in the Baccalaureate and Beyond (B&B) study is one current example. We considered the recommendation that only half of the low propensity cases be exposed to the enhanced data collection procedures so that comparisons could be made. However, because NPSAS cases are released on a flow basis, it is risky to implement any sort of experiment during full-scale data collection: there simply will not be sufficient time to correct the course of data collection for decreases in response rates resulting from a treatment, or from the absence of a treatment. Nevertheless, NPSAS:12 does provide other ways to learn more about bias in the various sectors. Throughout full-scale data collection, RTI can monitor bias among respondents and nonrespondents to evaluate how much bias exists within a sector at any particular time point. While RTI would not be able to implement a responsive design to assess the impact of the NPSAS:12 data collection methodologies on bias over time, such an analysis will inform the design of NPSAS:16 and allow time for experimentation in the NPSAS:16 field test.
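As a rough illustration of the kind of within-sector monitoring described above, the sketch below compares the weighted respondent mean of a variable known for all sampled cases (for example, a frame or administrative variable) with the full-sample mean. The function name and inputs are hypothetical and are not part of the RTI systems referenced here.

```python
def estimated_nonresponse_bias(values, responded, weights=None):
    """Estimate nonresponse bias for a variable known for all sampled cases:
    respondent mean minus full-sample mean. `values`, `responded`, and
    `weights` are parallel sequences; weights default to 1."""
    if weights is None:
        weights = [1.0] * len(values)
    total_weight = sum(weights)
    full_sample_mean = sum(w * v for w, v in zip(weights, values)) / total_weight
    resp_weight = sum(w for w, r in zip(weights, responded) if r)
    respondent_mean = sum(
        w * v for w, v, r in zip(weights, values, responded) if r
    ) / resp_weight
    return respondent_mean - full_sample_mean
```

Tracking this difference within each sector at successive points during data collection would show whether the respondent pool is drifting away from the full sample on characteristics that are known for every sampled student.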


3. The cost analysis for the incentive experiment is not very useful without average cost per case or other data to allow us to interpret the overall effectiveness or cost of one approach over another. Is there more written on this that we can see?


RTI has calculated more detailed information on the costs. In the field test, sample members in the low and high propensity control groups were offered a $30 incentive, those in the low propensity experimental group were offered $45, and those in the high propensity experimental group were offered $15. The observed response rates and average costs per completed interview for each of the groups are shown in table 1. Across the groups, the costs to obtain a completed interview, with and without incentive payments included, followed a consistent pattern: when incentive payments were excluded, the higher incentive offers resulted in lower average costs per completed interview; when incentive payments were included, the lower incentive offers resulted in lower average costs.


In terms of response rates observed in the four groups, only the difference between the two high propensity groups was statistically significant, suggesting that the offer of $30 helped to improve response rates over the $15 offer while raising the cost per complete by only $6.89.


Table 1. Response rates and costs per complete, with and without incentives, by experimental group

Propensity level | Group              | Response rate | Cost per complete, with incentives | Cost per complete, without incentives
Low              | Control ($30)      | 57.2%         | $106.36                            | $72.69
Low              | Experimental ($45) | 60.2%         | $111.91                            | $68.02
High             | Control ($30)      | 71.6%*        | $76.23                             | $46.68
High             | Experimental ($15) | 64.6%*        | $69.34                             | $54.83

* Statistically significantly different at p < .05.
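For readers who wish to verify the comparison of the two high propensity groups, the sketch below shows a standard two-proportion z-test alongside the cost difference cited above. The group sizes used here are placeholders, since the field-test counts are not reported in this memo.

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Z statistic for the difference between two independent proportions,
    using the pooled estimate of the common proportion."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Placeholder group sizes; the actual field-test counts are not shown here.
z = two_proportion_z(0.716, 400, 0.646, 400)   # roughly 2.1 with 400 cases per group
extra_cost_per_complete = 76.23 - 69.34        # $6.89, as cited in the response
print(round(z, 2), round(extra_cost_per_complete, 2))
```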



4. Please update SS A 2, "NPSAS12: Research and Policy Issues" as it seems out of date.


As requested, the research and policy issues section of supporting statement A has been updated. Revisions to the text have been highlighted in the accompanying files using tracked changes.
