Memorandum
United States Department of Education

Institute of Education Sciences

National Center for Education Statistics



DATE: October 28, 2011


TO: Shelly Martinez, OMB


THROUGH: Kashka Kubzdela

Office of the Commissioner, NCES


FROM: Tracy Hunt White

Postsecondary, Adult, and Career Education Division, NCES


SUBJECT: Summary of changes to the NPSAS:12 full-scale student interview and student records collection for Office of Management and Budget (OMB) forms clearance

OMB forms clearance (No. 1850-0666 v.8) was received in March 2011 for the field test student interview and student records collection activities of the 2011-12 National Postsecondary Student Aid Study (NPSAS:12). This memorandum summarizes changes between the full-scale and field test submissions and also presents results from field test experiments designed to address declining survey response rates and concerns about nonresponse.

Changes to Full-scale Methodology

We have made very few changes to the NPSAS:12 full-scale methodology relative to the design employed for the field test. The notable changes are as follows:

  • As described in the first OMB submission for NPSAS:12 institution contacting, the sizes of both the institution and student samples in the three for-profit strata will increase. We will also oversample certificate-seeking, first-time beginning college students (FTBs) and graduate students enrolled in science, technology, engineering, and mathematics (STEM) programs.

  • The process of identifying FTBs prior to sampling will be continued for the full-scale data collection with two modifications. First, in addition to matching to the National Student Loan Data System (NSLDS), we will also match to the Central Processing System (CPS), which contains all Free Application for Federal Student Aid (FAFSA) data. Matching to CPS was piloted on a smaller scale during the field test, and the data proved helpful in identifying first-time freshmen. Second, to contain costs, the process of matching potential FTBs to National Student Clearinghouse (NSC) records will be streamlined to target a subset of the remaining potentially eligible FTBs. As the budget allows, students over the age of 18 who are enrolled in public 2-year institutions or in any of the for-profit institutions and who remain potential FTBs after matching to NSLDS and CPS will be sent to NSC (a minimal filtering sketch illustrating this rule follows this list). Further subsetting may be required, depending on the NSC costs per case (which are not fixed, but rather set based on the number of cases in each match).

  • A small number of items in the student interview have been added, modified, or dropped. These changes to the interview were a result of information learned in the field test and in cognitive interviewing, and were also based on recommendations received from the technical review panel during a meeting of the panel held on August 16 and 17, 2011.
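
As a rough illustration only, the sketch below applies the NSC subsetting rule described above (older than 18, enrolled in a public 2-year or for-profit institution, and still a potential FTB after NSLDS and CPS matching) to a hypothetical student frame. The column names and data are assumptions for illustration, not actual NPSAS:12 variables.

import pandas as pd

# Hypothetical student frame; column and value names are illustrative only.
students = pd.DataFrame({
    "age": [17, 22, 30, 19],
    "sector": ["public-2yr", "for-profit", "public-4yr", "public-2yr"],
    "potential_ftb_after_nslds_cps": [True, True, True, False],
})

# Apply the stated rule: over 18, in a public 2-year or for-profit
# institution, and still a potential FTB after NSLDS/CPS matching.
send_to_nsc = students[
    (students["age"] > 18)
    & (students["sector"].isin(["public-2yr", "for-profit"]))
    & (students["potential_ftb_after_nslds_cps"])
]
print(send_to_nsc)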

Changes to NPSAS:12 methodology are highlighted in the accompanying clearance package.

Field Test Experiment Results


The NPSAS:12 field test included two data collection experiments. The first introduced an informational video during the initial contact with sample members to increase their participation in NPSAS and to establish “branding” for sample members selected for participation in the Beginning Postsecondary Students Longitudinal Study (BPS:12). The second experiment tested a new approach that uses incentives to minimize nonresponse bias among low-responding groups rather than simply to increase response rates. Results from the two experiments, which were independent of each other, are summarized below.

Informational Video


A stop-action, informational video was developed using Lego blocks to communicate survey information to sample members. Half of the field test sample, selected at random, served as the control group. The control group received a set of contacting materials prior to and during data collection that included a standard data collection announcement letter, a study brochure, and follow-up emails and postcards to nonrespondents. The half of the sample randomly assigned to the experimental group received contacting materials that invited them to view the study video on YouTube.


Sample members who received the video link were expected to participate in the NPSAS:12 field test at higher rates than sample members who received study information conveyed only through the study brochure and other mailings. However, no statistically significant difference in rates was found between those informed about the video (64.0%) and those who were not (64.4%), either overall or by institution sector.


To test the second “branding” hypothesis, participation rates among BPS:12 field test cohort treatment and control groups will be compared during maintenance and interviewing activities. If no statistically significant differences are noted there, use of the video will be discontinued for the BPS first follow-up full-scale data collection. However, because the results of that test cannot be known a priori, and because BPS eligibility must be confirmed via subsequent interviewing, all NPSAS:12 full-scale cohort members will be exposed to the video in case benefits are found.

Response Propensity

The objective of the response propensity experiment was to reduce nonresponse bias through targeted use of incentives. Using data from NPSAS:04, we identified variables, available prior to data collection, that were predictive of response likelihood. These variables were used to estimate each sample member’s response propensity. Sample members with a low response propensity were sorted at random into either a control group, which was offered the usual $30 incentive for participation, or an experimental group, which was offered $45. High response propensity sample members were sorted at random into a control group that was offered $30 or an experimental group that was offered $15. Following data collection, we evaluated the predictive ability of the response propensity model and determined whether bias was reduced in the experimental cases.
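
The sketch below illustrates, under stated assumptions, how such a propensity-based assignment might be implemented: a logistic response model is fit on a prior-round file, the current sample is scored, and incentive amounts are randomized within propensity groups as in the field test design. The variable names, simulated data, median cutoff between high and low propensity, and use of statsmodels are assumptions for illustration, not the actual NPSAS:12 specification.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical prior-round analysis file with a 0/1 response indicator and
# binary predictors; names and values are illustrative only.
rng = np.random.default_rng(12345)
prior = pd.DataFrame({
    "responded": rng.integers(0, 2, 500),
    "four_year": rng.integers(0, 2, 500),
    "public":    rng.integers(0, 2, 500),
    "full_time": rng.integers(0, 2, 500),
})

# Fit a logistic response model on the prior-round data.
X = sm.add_constant(prior[["four_year", "public", "full_time"]])
model = sm.Logit(prior["responded"], X).fit(disp=0)
print(np.exp(model.params))   # odds ratios, analogous to those in Table 1

# Score the current sample and split at an assumed cutoff (here the median
# predicted probability) into low and high propensity groups.
current = prior[["four_year", "public", "full_time"]].copy()
scores = np.asarray(model.predict(sm.add_constant(current)))
current["propensity_group"] = np.where(scores < np.median(scores), "low", "high")

# Randomize incentive amounts within each propensity group, mirroring the
# field test design ($30/$45 for low propensity; $30/$15 for high propensity).
amounts = {"low": (30, 45), "high": (30, 15)}
current["incentive"] = [int(rng.choice(amounts[g])) for g in current["propensity_group"]]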


Review of the model’s predictive ability


The model used to assign propensity to cases in the NPSAS:12 field test was developed using NPSAS:04 data. As shown in Table 1, the odds ratios from the predictive model used to categorize cases correspond well with the actual values from the NPSAS:12 field test data collection. The model successfully distinguished between high and low propensity cases in terms of response rate: the unweighted low propensity response rate was 57.7% and the unweighted high propensity response rate was 67.7%. This difference was statistically significant (χ2 = 42.003, p < .0001).
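
For reference, this test can be reproduced from a 2x2 table of propensity group by response status; the cell counts below are placeholders chosen only to show the mechanics and to match the reported rates, not the actual field test counts.

from scipy.stats import chi2_contingency

# Placeholder 2x2 table: rows = propensity group (low, high);
# columns = (respondents, nonrespondents). Counts are illustrative only.
table = [
    [1154, 846],   # low propensity: 57.7% response
    [1354, 646],   # high propensity: 67.7% response
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, p = {p:.4g}")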


Table 1. Comparing NPSAS:04 Predictor Variable Estimates to Values in NPSAS:12

Predictor variable                  Odds ratio from NPSAS:04    Odds ratio from NPSAS:12
Four-year institution               1.614                       1.590
Public institution                  1.092                       0.731
Full-time student status            1.051                       1.142
First-time beginner                 1.004                       1.046
Private for-profit institution      0.755                       0.377
Undergraduate status                0.828                       0.893
Doctoral student status             1.727                       1.587
Mother is a college graduate        1.069                       1.342
Father is a college graduate        1.127                       1.033
Case is missing CPS data            0.621                       0.732



Effects on bias of any differential response patterns


The primary goal of our response propensity approach was to reduce bias in key estimates. To determine whether bias was reduced, we examined differences between the low propensity experimental and control groups across a range of variables. An example, focused on grant aid, appears in Table 2.

Table 2. Estimates of Substantive Variables by Low Propensity Group

                                         Low propensity group
Variable                                 Treatment    Control
Percent Receiving an Employer Grant      8.4%         7.9%
Percent Receiving a Private Grant        22.4%        22.8%
Percent Receiving Any Grant              26.4%        26.4%


The results highlighted above are representative of all of the key variables we examined. The weighted estimates in the low propensity control and treatment groups are virtually identical, suggesting that the differential incentives did not have an effect on bias in these variables.


Although this experiment was not designed to increase response rates per se, response rates by incentive amount within propensity groups were tested. Within the low propensity group, no statistically significant difference between the experimental and control groups was noted (χ2 = 2.527, p > .05). However, the difference observed between the high propensity control and treatment groups was statistically significant, with the lower incentive associated with lower response rates (χ2 = 13.576, p < .001).


While paradata could not be included in the response propensity model (the model was developed before data collection began), we investigated four paradata variables after data collection. The goal was to determine how valuable paradata could be in predicting response propensity. The four variables examined were:

  • A positive match to the National Change of Address (NCOA) database, meaning the sample member had an address change

  • The number of valid email addresses on file

  • Whether the sample member ever logged in and then quit at the beginning of the survey

  • Whether the sample member ever progressed far enough in the survey to be considered a partial complete


The last two variables contained little variation, so the analysis focused on NCOA matches and the number of email addresses on file. We investigated whether a match to the NCOA database and the number of valid email addresses could improve the predictive ability of our original model; the odds ratios indicated that both variables would do so. For a match to the NCOA database, the odds ratio predicting response outcome was .817 (confidence interval: .679 to .983), indicating that a case matching to NCOA was significantly less likely to be a respondent. For the number of valid email addresses on file, the odds ratio predicting response outcome was 1.231 (confidence interval: 1.116 to 1.357), indicating that the more email addresses on file, the more likely the case was to be a respondent.
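
A minimal sketch of how such odds ratios and confidence intervals could be computed is shown below, assuming a case-level file with a 0/1 response indicator and the two paradata variables; the variable names and simulated data are illustrative assumptions, not actual NPSAS:12 data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated paradata; names are illustrative, not actual NPSAS:12 variables.
rng = np.random.default_rng(7)
n = 1000
cases = pd.DataFrame({
    "responded":  rng.integers(0, 2, n),
    "ncoa_match": rng.integers(0, 2, n),   # address change found in NCOA
    "n_emails":   rng.integers(0, 4, n),   # count of valid email addresses
})

# Fit the response model with the two paradata predictors.
X = sm.add_constant(cases[["ncoa_match", "n_emails"]])
fit = sm.Logit(cases["responded"], X).fit(disp=0)

# Odds ratios and confidence intervals, analogous to those cited in the text.
odds_ratios = np.exp(fit.params).rename("odds_ratio")
ci = np.exp(fit.conf_int())
print(pd.concat([odds_ratios, ci], axis=1))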


Also of interest is how these same variables affected response among low propensity cases. Among low propensity respondents, 42.4 percent had multiple email addresses on file, compared with 30.7 percent of low propensity nonrespondents. Differences based on NCOA match were not statistically significant. Therefore, if the response propensity approach were proposed for the NPSAS:12 full-scale data collection, the number of email addresses on file could help assign cases to high and low propensity groups.


Cost effectiveness of the tested approaches


In response to OMB concerns expressed during the field test clearance process, we analyzed the costs of different incentive plans relative to the method employed in the field test. The approach used in the field test cost a total of $70,800 in incentives. Had all high propensity cases been offered $30, rather than experimenting with a $15 incentive for cases likely to participate, incentive costs would have been $88,930. Had we offered $50 to all students in private for-profit institutions (the institutions with the lowest response propensity overall) and $30 to students in all other institutions, incentive costs would have been $94,430. Had we offered $30 to all students in four-year institutions (those with the highest response propensity overall) and $50 to everyone else, incentive costs would have been $107,670.
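
These comparisons reduce to simple per-group arithmetic; the helper below sketches the computation with made-up group counts, since the actual field test counts are not reproduced in this memo.

# Hypothetical plan: each entry is (number of cases paid, incentive amount).
# The counts are made up for illustration; the memo's dollar totals come
# from the actual field test counts.
def incentive_cost(plan):
    return sum(n_cases * amount for n_cases, amount in plan)

example_plan = [(2000, 30), (700, 50)]   # e.g., a $30 group and a $50 group
print(incentive_cost(example_plan))      # 95000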


The terms of OMB clearance requested that a cost “analysis should include a separate discussion of differences in both number of cases that would have switched conditions and the cost difference, if the design had included offering the same incentive to all participants within a single institution…” To ensure that all low propensity cases would receive the higher incentive amounts, all cases within an institution were set to “low propensity” if at least one low propensity case was identified at that institution. Under this approach, which assigns entire institutions to either the experimental or control group, 3,897 cases would have been eligible for the low propensity incentive amounts ($30 for the low propensity control group; $45 for the low propensity experimental group), for an approximate cost of $146,145. Of these cases, 2,497 were originally modeled to be high propensity and, therefore, would have switched conditions into the low propensity group. Only 693 cases would have remained high propensity, for a total cost of $15,585. The total cost of this field test incentive plan would have been $161,730.
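
A sketch of the institution-level rollup described above appears below, assuming a case-level file with an institution identifier and a modeled propensity group; the data and column names are hypothetical.

import pandas as pd

# Hypothetical case-level file; column names and values are illustrative only.
cases = pd.DataFrame({
    "institution_id":   [1, 1, 2, 2, 3],
    "propensity_group": ["high", "low", "high", "high", "low"],
})

# If any case at an institution was modeled as low propensity, treat every
# case at that institution as low propensity, so all cases at the same
# institution are offered the same (higher) incentive amounts.
inst_has_low = cases.groupby("institution_id")["propensity_group"].transform(
    lambda g: (g == "low").any()
)
cases["assigned_group"] = inst_has_low.map({True: "low", False: "high"})
print(cases)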


Plans for full-scale data collection


Given the equivocal results of the response propensity experiment and the apparent lack of effectiveness of a monetary treatment, RTI International is proposing to simplify the incentive plan for the NPSAS:12 full-scale data collection: rather than offering different incentive amounts based on modeled propensity, a $30 incentive will be offered to all students. (Note that we did not receive any complaints about different incentive amounts being offered within the same institution during the field test.) As a result, no pre-data collection modeling of either responses or paradata would be required.


Because nonresponse still threatens to introduce bias in study estimates, RTI will customize its data collection approach according to response likelihood, or propensity. A review of prior NPSAS data collections suggests that institution sector (e.g., public 2-year) can serve as a reasonable proxy for response propensity; therefore, RTI will customize its approach by sector. For example, students in public 4-year institutions, which have historically higher response rates, will be approached via the typical data collection plan: three weeks of online-only interviewing followed by outbound calling to nonrespondents. In contrast, students in institutions with historically lower response rates and a lower likelihood of responding online will move immediately to outbound calling, shortening the time to initial contact and, if necessary, to referral for intensive tracing. While all students will be offered the same $30 incentive and receive the same data collection materials (i.e., letters, postcards, emails), the timing and method of contacts after the initial data collection announcement will be determined for each sector separately, rather than applying the same approach to all students in all sectors. Other steps taken over the course of data collection will be similarly applied by sector.
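
The sector-differentiated protocol could be represented as a simple lookup, sketched below with hypothetical sector labels and timings drawn from the examples above; the structure and values are illustrative assumptions, not the actual full-scale specification.

# Hypothetical sector-level protocol table; labels and timings are
# illustrative, based on the examples given above.
SECTOR_PLAN = {
    "public-4yr": {"weeks_web_only": 3, "immediate_calling": False},
    "for-profit": {"weeks_web_only": 0, "immediate_calling": True},
}

def contact_schedule(sector):
    plan = SECTOR_PLAN.get(sector, {"weeks_web_only": 3, "immediate_calling": False})
    if plan["immediate_calling"]:
        return "Begin outbound calling immediately after the announcement mailing."
    return f"Web-only interviewing for {plan['weeks_web_only']} weeks, then outbound calling."

print(contact_schedule("for-profit"))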

