2008/12 Baccalaureate and Beyond Longitudinal Study (B&B:08/12)

Field Test 2011




Supporting Statement Part B

Request for OMB Review

OMB # 1850-0729 v.7









Submitted by

National Center for Education Statistics

U.S. Department of Education



June 23, 2011

Contents



List of Tables

Table 8. Unweighted counts and percentages of sampled, eligible, and participating NPSAS:08 field-test institutions, by sampling stratum: 2007

Table 9. Expected and actual student samples for NPSAS:08 field-test, by student type and level of institution: 2007

Table 10. Distribution of the B&B:08/09 field-test sample by NPSAS:08 field-test response status and B&B eligibility

Table 11. Determination of the B&B:08/12 field-test sample from the B&B:08/09 field-test sample

Table 12. Distribution of the B&B:08/12 field-test sample by field-test interview response status for NPSAS:08 and B&B:08/09

Table 13. Distribution of the B&B:08/09 full-scale sample by interview response status for NPSAS:08 and B&B:08/09

Table 14. Distribution of response propensity scores for the B&B:08/12 field-test sample

Table 15. Descriptive summary of response propensity scores for the B&B:08/12 field-test sample

Table 16. Field-test – estimated number of eligible sample members, by propensity level and incentive amount

Table 17. Summary of Response Propensity Distribution for the B&B:08/09 full-scale sample

Table 18. Student nonresponse bias before nonresponse adjustment and after weight adjustments for selected variables: 2009

Table 19. Student nonresponse bias before nonresponse adjustment for selected variables – low-propensity cases treated as nonrespondents

Table 20. Student nonresponse bias before nonresponse adjustment for selected variables – medium- and low-propensity cases treated as nonrespondents

Table 21. Summary of student interview nonresponse bias analysis, overall and with low- and medium-propensity cases treated as nonrespondents

Table 22. Level of Effort Measures, by Response Propensity, B&B:08/09 Full-scale study

Table 23. Detectable differences for field-test experiment hypothesis


List of Figures

Figure 1. Candidate Variables for Response Propensity Model Building

Figure 2. Predicted Response Propensity Scores – Cumulative Density Function (cdf)

Figure 3. Assignment of sample cases to experimental conditions

  B. Collection of Information Employing Statistical Methods

The following section presents the sample design, data collection procedures, and proposed experiments for the B&B:08/12 field-test study.

    B.1 Sampling Specifications and Study Design

The B&B:08/12 field-test sample design has four stages. The first two stages occurred during the NPSAS:08 field-test study, when samples of NPSAS-eligible institutions and students within institutions were selected. The third stage was in the first follow-up, when all confirmed and potential baccalaureate recipients from NPSAS:08 were included in the B&B:08/09 field-test sample. The fourth stage is for the second follow-up, and we recommend that all eligible sample members from B&B:08/09 (as determined by the B&B:08/09 interview and the transcripts) be included in the B&B:08/12 field-test.

    B.2 Target Population

To define the target population for the B&B field-test, both eligible institutions and eligible students within these institutions need to be defined. Eligible institutions are those that satisfied the NPSAS eligibility criteria in the 2006–07 academic year. An institution must have

  • offered an educational program designed for individuals who had completed secondary education;

  • offered at least one academic, occupational, or vocational program of study lasting at least 3 months or 300 clock hours;

  • offered courses open to more individuals than the employees or members of the company or group (e.g., union) that administered the institution;

  • been located in the 50 states, the District of Columbia, or Puerto Rico;

  • not been a U.S. service academy; and

  • signed a Title IV participation agreement with ED.

Eligible persons for the field-test are individuals who completed requirements for the bachelor’s degree from eligible institutions between July 1, 2006, and June 30, 2007, and who were awarded their bachelor’s degree by the institution from which they were sampled no later than June 30, 2008. This provides theoretically complete coverage of the population of students completing baccalaureate degree requirements during the NPSAS year because every graduate was associated with a 4-year institution on the NPSAS sampling frame. Moreover, it provides a known and well-defined probability of selection for each student in the B&B sample. Each graduate had exactly one linkage to the B&B sampling frame, which was through the institution awarding the degree. Hence, although NPSAS sample weights must include a multiplicity adjustment to account for multiple linkages to the NPSAS sampling frame, the B&B sample weights do not include a multiplicity adjustment because each B&B-eligible student has only one linkage to the B&B sampling frame.

    B.3 NPSAS:08 Field-test Sample

The first stage of the NPSAS:08 sample was a stratified sample of institutions, with a sampling frame derived from 2004–05 Integrated Postsecondary Education Data System files. A total of 1,629 of the 6,610 institutions were initially selected into the full-scale sample.1 Institutions with a high proportion of their baccalaureate degrees awarded in education were oversampled to increase the representation of baccalaureate recipients who go on to teach, an important analysis domain for B&B:08/12. Then, a purposive sample of 300 institutions was selected for the NPSAS field-test from the complement of the institutions initially selected for the full-scale study in order to yield a non-overlapping sample for the field-test and full-scale study and to eliminate the possibility of an institution being burdened with participation in both the field-test and full-scale studies.2 Because of the limited size of the NPSAS:08 field-test institution sample and the need to ensure sufficient baccalaureate recipients for the B&B follow-up field-tests, the NPSAS:08 field-test sample included a higher percentage of 4-year institutions than the full-scale sample. Public 4-year doctorate-granting institutions were designated as certainty institutions and automatically included in the full-scale sample; therefore, they were excluded from the field-test sample. Table 8 shows the counts and percentages of sampled, eligible, and participating NPSAS:08 field-test institutions.

The second stage of the NPSAS:08 field-test sample was a stratified, systematic sample of individuals within sampled institutions. There were seven student strata: baccalaureate business, baccalaureate non-business, other undergraduate, master’s, doctoral, other graduate, and first-professional. The information needed to identify students within these strata was provided by the sampled institutions. The sample included a large number of potential baccalaureate recipients to provide sufficient sample size for the B&B:08/09 and B&B:08/12 field-tests. Given that institutions were asked to identify potential bachelor’s degree recipients before degree completion, the identification of those who would actually complete the degree was expected to be somewhat inaccurate. Therefore, the NPSAS sampling rates for those identified by the sample institutions as potential baccalaureate recipients and other undergraduate students were adjusted to account for expected false-positive rates. Because of the high proportion of business majors, students receiving a baccalaureate degree in business were placed in a separate stratum so that they would be selected at a lower sampling rate than other baccalaureate recipients. Sampling business majors at the same rate as other baccalaureate recipients would have resulted in inclusion of more business majors than desired. Table 9 shows the expected and actual student samples for the NPSAS:08 field-test.
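As a concrete illustration of the rate adjustment described above, the short sketch below inflates a stratum's sampling rate to compensate for an assumed false-positive rate among institution-listed potential bachelor's degree recipients. The target count, frame count, and false-positive rate are hypothetical values chosen for illustration, not figures from the NPSAS:08 design.

    # Hypothetical illustration: inflate the sampling rate so that, after the
    # expected share of false positives proves ineligible, the stratum still
    # yields roughly the desired number of confirmed baccalaureate recipients.

    def adjusted_rate(target_confirmed, frame_count, false_positive_rate):
        """Sampling rate needed when some listed cases will prove ineligible."""
        required_sample = target_confirmed / (1.0 - false_positive_rate)
        return min(1.0, required_sample / frame_count)

    # Assumed values, not taken from the NPSAS:08 sampling specifications.
    rate = adjusted_rate(target_confirmed=2400, frame_count=60000, false_positive_rate=0.15)
    print(f"adjusted sampling rate: {rate:.4f}")           # ~0.047 rather than 0.040
    print(f"listed cases selected:  {rate * 60000:.0f}")   # ~2,824 to net ~2,400 true completers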

Table 8. Unweighted counts and percentages of sampled, eligible, and participating NPSAS:08 field-test institutions, by sampling stratum: 2007

Institutional sampling stratum     Sampled        Eligible institutions   Provided lists         Past NPSAS participant
                                   institutions   Number    Percent1      Number    Percent2     Number    Percent2

All institutions                   300            300       99.3          270       89.7         200       65.2

Public
  Less-than-2-year                 #              #         100.0         #         100.0        #         #
  2-year                           10             10        100.0         10        100.0        10        62.5
  4-year non-doctorate-granting    100            100       100.0         100       93.3         80        76.0
  4-year doctorate-granting3       #              #         #             #         #            #         #

Private not-for-profit
  Less-than-4-year                 #              #         75.0          #         33.3         #         33.3
  4-year non-doctorate-granting    140            130       99.3          120       91.8         80        59.0
  4-year doctorate-granting        30             30        100.0         30        84.8         30        87.9

Private for-profit
  Less-than-2-year                 10             10        100.0         #         57.1         #         #
  2-year-or-more                   10             10        100.0         10        66.7         #         44.4

# Rounds to zero.

1Percentage is based on number of sampled institutions within row.

2Percentage is based on number of eligible institutions within row.

3All institutions in this category are included in the full-scale sample with certainty and not included in the field-test study.

NOTE: Detail may not sum to totals because of rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2007–08 National Postsecondary Student Aid Study (NPSAS:08) Field-test.

Table 9. Expected and actual student samples for NPSAS:08 field-test, by student type and level of institution: 2007

Student type and level of institution      Student sample size
                                           Expected1     Actual

Total                                      3,000         3,000

Potential bachelor’s recipient             2,400         2,460
  Less-than-2-year                         #             #
  2-year                                   #             #
  4-year                                   2,400         2,450

Other undergraduate                        500           430
  Less-than-2-year                         120           80
  2-year                                   40            50
  4-year                                   340           300

4-year                                     100           120
  Master’s                                 50            80
  Doctor’s                                 30            20
  Other graduate                           10            10
  First-professional                       20            10

# Rounds to zero.

1 Based on sampling rates, using the 2004–05 Integrated Postsecondary Education Data System (IPEDS) header, Fall Enrollment, and Completion file counts.

NOTE: Detail may not sum to totals because of rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2007–08 National Postsecondary Student Aid Study (NPSAS:08) Field-test.

    B.4 B&B:08/09 Field-test Sample

The field-test sample for B&B:08/09 consisted of all interview respondents from the NPSAS:08 field-test who completed requirements for their bachelor’s degree at any time between July 1, 2006, and June 30, 2007. Additionally, we included all potentially eligible interview nonrespondents in the field-test sample (i.e., those who were identified as potential baccalaureate recipients by their NPSAS institution on the enrollment list or student record data but did not complete a student interview to confirm their status).

As part of the field-test data collection, to inform the full-scale sample design, we assessed the value of each independent data source in determining cohort eligibility, including the initial institution listing; report of degree receipt in the NPSAS:08 and B&B:08/09 interviews; transcripts; and, when available, data obtained from administrative sources such as ED’s Central Processing System (CPS) and the NSC Tracker database.

The NPSAS:08 field-test yielded 1,220 interview respondents who were confirmed to be bachelor’s degree recipients. The base-year sample also included 599 interview nonrespondents who were either identified as potential bachelor’s degree recipients, according to the initial classification by the NPSAS sample institution at the time of student sampling (prior to base-year data collection), or were classified as such in the institutional student records. Therefore, the total B&B:08/09 field-test sample size was 1,819 sample members. See Table 10 for the distribution of the B&B:08/09 sample by NPSAS:08 response status and B&B eligibility.

Table 10. Distribution of the B&B:08/09 field-test sample by NPSAS:08 field-test response status and B&B eligibility

NPSAS:08 field-test study status   NPSAS:08 field-test interview status   B&B eligibility                                        Count

Total                                                                                                                            1,819

Study respondent                   Interview respondent                   Baccalaureate receipt confirmed in interview           1,220
Study respondent                   Interview nonrespondent                Baccalaureate receipt confirmed in student records       406
Study respondent                   Interview nonrespondent                Listed as potential baccalaureate recipient              159
Study nonrespondent                Interview nonrespondent                Baccalaureate receipt confirmed in student records         8
Study nonrespondent                Interview nonrespondent                Listed as potential baccalaureate recipient               26

NOTE: B&B:08/09 = 2008/09 Baccalaureate and Beyond Longitudinal Study; NPSAS:08 = 2007–08 National Postsecondary Student Aid Study.

    B.5 B&B:08/12 Field-test Sample

The field-test sample will include 1,588 sample members. To determine this sample size, we started with the B&B:08/09 field-test sample and excluded ineligible and deceased cases. Sample members who, during previous contacting attempts, specifically requested never to be called again will not be contacted, but will be counted as B&B:08/12 nonrespondents when computing the response rate. Table 11 shows the determination of the sample size. The distribution of this sample by prior response status is shown in Table 12.

Response rates among sample members who responded to the previous survey are generally fairly high. However, the B&B:08/12 field-test sample includes some sample members who were nonrespondents to the first follow-up and/or the base year study, and experience suggests that the response rates among these sample members will be very low. Due to the limited amount of time to pursue difficult cases in the field-test, the yield is expected to be at least 900 interviews (a response rate of about 57 percent). Our proposed field-test experiments (described in Section B.8) will provide an opportunity to evaluate whether nonresponse among prior-round nonrespondents, and the resulting bias, can be minimized. While the response rate may be as low as 57 percent, a yield of 900 cases will be sufficient for evaluating field-test results and for providing a sufficient sample size for any future follow-up field-test studies of the B&B:08 cohort.

Table 11. Determination of the B&B:08/12 field-test sample from the B&B:08/09 field-test sample

B&B:08/09 status     Count

Total                1,819

Eligible             1,588
Hostile refusal          5
Ineligible             226

NOTE: B&B:08/09 = 2008/09 Baccalaureate and Beyond Longitudinal Study; B&B:08/12 = 2008/12 Baccalaureate and Beyond Longitudinal Study.

Table 12. Distribution of the B&B:08/12 field-test sample by field-test interview response status for NPSAS:08 and B&B:08/09

NPSAS:08 field-test interview status   B&B:08/09 field-test interview status   Count

Total                                                                          1,588

Respondent                             Respondent                                936
Respondent                             Nonrespondent                             216
Nonrespondent                          Respondent                                217
Nonrespondent                          Nonrespondent                             219

NOTE: B&B:08/09 = 2008/09 Baccalaureate and Beyond Longitudinal Study; B&B:08/12 = 2008/12 Baccalaureate and Beyond Longitudinal Study; NPSAS:08 = 2007–08 National Postsecondary Student Aid Study.

Reinterviews.

Reinterviews will be conducted approximately 3 to 4 weeks after the initial interview and will contain a subset of items (either new items or those that have been difficult to administer in the past). Reinterviews will be conducted in the same administration mode as the initial interview. A subsample of about 300 interview respondents will be randomly selected to be reinterviewed[1] to enable analysis of the reliability of selected items in the field-test instrument. Because of budget and schedule constraints, reinterviews have typically been conducted with starting samples of about 300 students, with the expectation that at least 80 percent will participate. Even when the 80 percent response rate was not met, we have in past studies been able to conduct analyses with a 67 percent response rate (NPSAS:04) and a 47 percent response rate (NPSAS:08). The response rate for the B&B:08/09 field-test reinterview was 71 percent.
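To make the planned reliability analysis concrete, the sketch below computes two common test-retest measures, percent agreement and Cohen's kappa, for a single categorical item asked in both the initial interview and the reinterview. The item and the responses are invented; the actual analysis would run over the selected reinterview items.

    # Minimal sketch of a reinterview reliability check for one categorical item:
    # percent agreement and Cohen's kappa. Responses below are invented.

    from collections import Counter

    def percent_agreement(initial, reinterview):
        return sum(a == b for a, b in zip(initial, reinterview)) / len(initial)

    def cohens_kappa(initial, reinterview):
        n = len(initial)
        observed = percent_agreement(initial, reinterview)
        p1, p2 = Counter(initial), Counter(reinterview)
        expected = sum((p1[c] / n) * (p2[c] / n) for c in set(p1) | set(p2))
        return (observed - expected) / (1 - expected)

    # Hypothetical yes/no item from the initial interview and the reinterview.
    first  = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
    second = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes"]
    print(f"agreement = {percent_agreement(first, second):.2f}, "
          f"kappa = {cohens_kappa(first, second):.2f}")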

    B.6 B&B:08/12 Full-Scale Sample

The sample design for the B&B:08/12 full-scale study will be determined after an evaluation of the field-test results. Several sample design options and issues must be weighed, along with the field-test results, in designing the full-scale study; these considerations are described in this section.

The B&B:08/09 full-scale study included 18,497 sample members and consisted of students who were confirmed to be baccalaureate recipients in the NPSAS:08 interview as well as a subsample of potential baccalaureate recipients who were not interviewed in NPSAS:08. More details of the B&B:08/09 and NPSAS:08 full-scale sample designs will be provided in the full-scale sampling specifications. There were three types of nonrespondents in B&B:08/09:

  • a student who responded to the NPSAS:08 interview but did not respond to the B&B:08/09 interview (referred to henceforth as a first follow-up nonrespondent);

  • a student who did not respond to the NPSAS:08 interview but did respond to the B&B:08/09 interview (referred to henceforth as a base-year nonrespondent); and

  • a student who did not respond to either the NPSAS:08 or B&B:08/09 interviews (referred to henceforth as a double nonrespondent).

Table13 shows the distribution of the B&B:08/09 full-scale sample by prior response status.

Table 13. Distribution of the B&B:08/09 full-scale sample by interview response status for NPSAS:08 and B&B:08/09

NPSAS:08 full-scale interview status   B&B:08/09 full-scale interview status   Count

Total                                                                          17,164

Respondent                             Respondent                              14,825
Respondent                             Nonrespondent                            1,883
Nonrespondent                          Respondent                                 223
Nonrespondent                          Nonrespondent                              233

NOTE: Many of the NPSAS:08 interview nonrespondents were study respondents and therefore have some NPSAS data. B&B:08/09 = 2008/09 Baccalaureate and Beyond Longitudinal Study; NPSAS:08 = 2007–08 National Postsecondary Student Aid Study.

Some alternative sample designs to consider for the full-scale study follow:

  • include all B&B:08/09 eligible sample members;

  • include all B&B:08/09 interview respondents, and include a subsample of first follow-up and double nonrespondents;

  • include all B&B:08/09 interview respondents, all first follow-up nonrespondents, and a subsample of double nonrespondents;

  • include all B&B:08/09 interview respondents and a subsample of first follow-up nonrespondents, and exclude all double nonrespondents; and

  • include all B&B:08/09 interview respondents and all first follow-up nonrespondents, and exclude all double nonrespondents.

NCES longitudinal surveys have taken different approaches to sampling nonrespondents in the follow-up studies. For example, the Beginning Postsecondary Students Longitudinal Study (BPS) and previous rounds of B&B have typically included either all nonrespondents or a subsample of the various types of nonrespondents. For the Early Childhood Longitudinal Study–Birth Cohort (ECLS-B) and the Early Childhood Longitudinal Study, Kindergarten Class of 1998–99 (ECLS-K), follow-up sample members had to be base-year respondents, and for the Education Longitudinal Study of 2002 (ELS:2002), nonrespondents to both the base-year and first follow-up studies were excluded from the second follow-up study but counted as nonrespondents.

Interviewing first follow-up nonrespondents and double nonrespondents will likely be difficult and cost more per case than interviewing B&B:08/09 respondents, and the response rate among prior nonrespondents may be low. To decide if it is worth the time, effort, and cost to attempt interviews with these nonrespondents, we will need to look at the effects of subsampling nonrespondents and excluding double nonrespondents on nonresponse bias, design effects, and analysis.

Nonresponse bias can occur when respondents and nonrespondents have different characteristics. As part of the B&B:08/09 weighting process, a student nonresponse bias analysis was conducted, and nonresponse bias was found. Nonresponse weighting adjustments were made, which reduced the bias. While that bias analysis compared all nonrespondents (both first follow-up nonrespondents and double nonrespondents) with respondents, we have also conducted bias analyses comparing the double nonrespondents with B&B:08/09 respondents and with first follow-up nonrespondents. These additional analyses also indicate that bias exists, which means that the double nonrespondents are different from the B&B:08/09 respondents and first follow-up nonrespondents. While weight adjustments in B&B:08/12 could adjust for this bias even if the double nonrespondents are excluded, it may be preferable to include some or all of them in the sample so that those who do respond would provide data to strengthen the nonresponse model.

Any subsampling that is done affects the unequal weighting effect (uwe), which is a component of the design effect (deff). Subsampling would increase the design weights of the subsampled cases and likely cause their weights to be much different from the weights for the other sample members. This would cause the variance to increase. While trimming and smoothing of the weights is frequently done to reduce the uwe, it may be preferable to either not subsample or to instead subsample at a high rate, rather than to introduce a large uwe. For example, subsampling a tenth of the nonrespondents would result in weights for the subsampled cases 10 times higher than their initial weight, but a subsample of half of the nonrespondents would result in weights for the subsampled cases only 2 times higher than their initial weight.
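The sketch below illustrates the trade-off just described. It computes the unequal weighting effect, uwe = n·Σw²/(Σw)², equivalently 1 + CV² of the weights, under two hypothetical nonrespondent subsampling rates; the case counts and base weights are invented and are not the B&B:08/09 weights.

    # Illustrative unequal weighting effect under two nonrespondent subsampling
    # rates: uwe = n * sum(w^2) / (sum(w))^2, equivalently 1 + CV^2 of the weights.
    # Case counts and base weights are invented.

    def uwe(weights):
        n, total = len(weights), sum(weights)
        return n * sum(w * w for w in weights) / (total * total)

    def uwe_after_subsampling(base_weight, n_respondents, n_nonrespondents, rate):
        kept = int(n_nonrespondents * rate)                  # subsampled nonrespondents
        weights = [base_weight] * n_respondents + [base_weight / rate] * kept
        return uwe(weights)

    print(f"subsample 1/2 of nonrespondents:  uwe = {uwe_after_subsampling(1.0, 1500, 400, 0.5):.2f}")
    print(f"subsample 1/10 of nonrespondents: uwe = {uwe_after_subsampling(1.0, 1500, 400, 0.1):.2f}")

The higher subsampling rate keeps the subsampled weights closer to the other weights and therefore produces a smaller unequal weighting effect.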

Another important factor to consider is the analytical use of the data. Including all or a subsample of prior nonrespondents in the sample may provide better data, given the potential bias of the first follow-up nonrespondents and the double nonrespondents. However, these nonrespondents would not be analyzed independently from the other sample members, so weight adjustments could be sufficient.

Additionally, including prior nonrespondents will have implications for imputation. In B&B:08/09, data were imputed for NPSAS:08 variables that were missing for some B&B cases because they

  • were NPSAS study nonrespondents but B&B interview respondents;

  • were identified in NPSAS as graduate students; or

  • were not identified in NPSAS as potential B&B cases.

For the first follow-up nonrespondents, it will need to be decided if any of their missing B&B:08/09 data would need to be imputed, and if double nonrespondents are included in the sample it will need to be decided if any of their NPSAS:08 and B&B:08/09 data would need to be imputed. What data need to be imputed will depend on the planned analyses and what are considered key variables. NCES Statistical Standard 4-1-2 states:

“Key variables in data sets used for cross-sectional estimates must be imputed (beyond overall mean imputation). This applies to cross-sectional data sets and to data from longitudinal data sets that are used to produce cross-sectional estimates (i.e., base year and subsequent freshened samples).”

Another analytical consideration is how the transcript data will be used for B&B:08/12 analyses and what transcript panel weights may be necessary. Some of the first follow-up nonrespondents and double nonrespondents have transcript data and will be included on the B&B:08/09 transcript file but not the interview file.

If it is decided to exclude double nonrespondents, they will still count as nonrespondents in computing the response rate. In ELS:2002 two response rates were computed:

  • a conditional response rate based only on the fielded cases; and

  • an unconditional response rate including the double nonrespondents as nonrespondents.

The first one was used to report data collection results, and the second was used to determine when nonresponse bias analysis was necessary and to compute nonresponse weight adjustments.3
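For illustration, the sketch below computes the two ELS:2002-style rates with invented counts; the actual B&B:08/12 counts would come from the final case dispositions.

    # Hypothetical illustration of the two response rates described above.
    completed = 14000           # completed interviews (invented)
    fielded = 16900             # eligible cases actually fielded (invented)
    excluded_double_nr = 250    # double nonrespondents excluded from fielding (invented)

    conditional = completed / fielded
    unconditional = completed / (fielded + excluded_double_nr)
    print(f"conditional response rate (fielded cases only):  {conditional:.1%}")
    print(f"unconditional response rate (excluded cases counted as nonrespondents): {unconditional:.1%}")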

In conclusion, after the field-test is completed, the results will be used to help evaluate the sample design pros and cons mentioned in this section and to determine a reasonable sample design for the full-scale study. Full-scale sampling specifications will document the agreed-upon sample design.

    B.7 Methods for Maximizing Response Rates

Response rates in the B&B:08/12 field-test and full-scale data collections are a function of success in two basic activities: identifying and locating the sample members involved, and then contacting them and gaining their cooperation.

      B.7.1 Student Locating

One of the main issues for the B&B:08/12 data collection effort will be locating the members of the sample cohort. Panel maintenance activities for the B&B:08/12 field-test sample were conducted in fall and winter 2010. Previously, contacting was attempted for this cohort during the B&B:08/09 study year, but NPSAS:08 and B&B:08/09 nonrespondents may have never been contacted. The sample members’ high mobility rate presents challenges to the B&B:08/12 locating effort.

To maximize our location rate, adequate resources will be devoted to locating efforts, giving careful consideration to identifying and implementing the most effective, yet cost-efficient, locating strategies for this population. The locator database for the cohort includes critical tracing information for most of the sample members, including their previous residences and telephone numbers. Moreover, Social Security numbers will be available for virtually all of the sample members (99 percent), as well as other information useful for tracing.

We propose a multistage locating approach that will capitalize on the availability of previous NPSAS:08 and B&B:08/09 locating data and the continuing cooperation of sample members. This multistage approach will consist of several steps designed to locate the maximum number of sample members with the least expense. During the field-test, we will evaluate the effectiveness of these procedures for the full-scale survey effort. The steps of our multistage locating and tracing plan include the following elements.

  1. Advance Tracing. We propose advance tracing operations prior to field-test and full-scale data collection to update the addresses of sample members, as needed. Included in this activity will be searches of ED’s CPS and the National Student Loan Data System (NSLDS) for information on financial aid recipients. We will also conduct searches of other databases, including the National Change of Address (NCOA), FastData Phone Append, and ComServ’s Death Information System. We will compare all sample member addresses obtained from the B&B:08/09 locator database against the NCOA and Telematch databases to identify sample members who have moved since the previous follow-up.

  2. Advance Interactive Tracing. After the completion of advance tracing, cases without good locating information (primarily cases from the NPSAS:08 and/or B&B:08/09 nonrespondent group) will be directed for additional interactive tracing. Specially trained tracing staff will perform intensive tracing to locate additional contact information for these cases. In many cases, this will involve the use of an interactive database, such as a credit bureau database.

  3. Initial Contact Mailing to Sample Members. In May 2011, we will send an address update mailing to all field-test sample members. Sample members will be asked to update their contact information via the Web. Undeliverable mailings to sample members will be recorded, and the next best address will be used to resend the materials. Once all potential addresses for the sample member are exhausted, we will contact other information sources for the sample member (e.g., a parent, other relative, or a designated contact).

  4. Data Collection Announcement Mailing to Sample Members. With the most current locator information for the sample members, we will mail a package announcing the start of data collection (June 2011). The package will include information about the study and will describe the various ways to complete the interview. The package will also include the website address for the project and the sample member’s unique username and password for the site. Emails and letters providing similar content will be sent throughout data collection to encourage participation. In addition, sample members who request follow-up reminders via text message will receive text message prompts to complete the interview.

  5. Parent Mailing. In mid-September 2011, we will mail a letter to the parents of sample members for whom we have obtained no other contacting information. This letter will inform the parents that their child’s participation will be requested and will include a study brochure, address update form, and a business reply envelope.

  6. Intensive In-house Tracing. The goal of intensive tracing is to obtain a telephone number at which the sample member can be reached so that field interviewing will not be required. Tracing procedures may include (1) Directory Assistance for telephone listings at various addresses, (2) criss-cross directories to identify (and contact) the neighbors of sample members, (3) calling persons with the same unusual surname in small towns or rural areas to see if they are related to or know the sample member, and (4) contacting the current or last known residential sources, such as the neighbors, landlords, and current residents of the last known address. Other, more-intensive tracing activities could include (1) database checks for sample members, parents, and other contact persons; (2) credit database and insurance database searches; (3) drivers’ license searches through the appropriate state department of motor vehicles; (4) calls to colleges, military establishments, and correctional facilities to follow up on leads generated from other sources; (5) calls to alumni offices and associations; and (6) calls to state trade and professional associations, based on information about field of study in school and other leads.

  7. Field Tracing and Interviewing. We will use a two-tiered strategy for cases that could not be completed through self-administered web or telephone interviewing. Using the best available address for the nonresponding sample members, cases will be clustered into geographic areas. Field interviewers will be assigned to areas with a high concentration of sample members (e.g., a major metropolitan area) and will locate and interview the sample members residing in those clusters. Cases in areas without assigned field interviewers (e.g., cases not clustered with other cases) will receive additional intensive tracing. Cases for which additional telephone contact information is collected will be returned for data collection by telephone.

      B.7.2 Student Data Collection: Self-Administered Web, Telephone, and Field Interviewing

Training Procedures

Training will be provided for individuals working in survey data collection and will include critical quality control elements. We will establish thorough selection criteria for help desk operators, telephone interviewers, and field interviewers to ensure that only highly capable persons—those with exceptional computer, problem-solving, and communication skills—are selected to serve on the project and contribute to the quality of the B&B data.

Contractor staff with extensive experience in training interviewers will prepare the B&B Telephone Interviewer Manual, which will provide detailed coverage of the background and purpose of B&B, sample design, questionnaire, and procedures for the telephone interview. This manual will be used in training and as a reference during interviewing. Training staff will also prepare training exercises, mock interviews (specially constructed to highlight the potential of definitional and response problems), and other training aids.

Student Interviews

Student interviews will be conducted using a single web-based survey instrument for self-administered, telephone, and field data collection. The data collection activities will be accomplished through the Case Management System (CMS), which is equipped with the following capabilities:

  • online access to locating information and histories of locating efforts for each case;

  • state-of-the-art questionnaire administration module with full front-end cleaning capabilities (i.e., editing as information is obtained from respondents);

  • sample management module for tracking case progress and status; and

  • automated scheduling module, which delivers cases to interviewers and incorporates the following features:

  • Automatic delivery of appointment and call-back cases at specified times.

  • Sorting of non-appointment cases according to parameters and priorities set by project staff.

  • Restriction on allowable interviewers.

  • Complete records of calls and tracking of all previous outcomes.

  • Flagging of problem cases for supervisor action or supervisor review.

  • Complete reporting capabilities.

The integration of these capabilities reduces the number of discrete stages required in data collection and data preparation activities and increases capabilities for immediate error reconciliation, which results in better data quality and reduced cost. Overall, the scheduler provides a highly efficient case assignment and delivery function by reducing supervisory and clerical time, improving execution on the part of interviewers and supervisors by automatically monitoring appointments and callbacks, and reducing variation in implementing survey priorities and objectives.

In addition to the management aspect of data collection, the survey instrument is another component designed to maximize efficiency and yield high-quality data. Below are some of the basic questionnaire administration features of the web-based instrument:

  • Based on responses to previous questions, the respondent or interviewer is automatically routed to the next appropriate question, according to pre-specified skip patterns.

  • The web-based interview automatically inserts “text substitutions” or “text fills” where alternate wording is appropriate, depending on the characteristics of the respondent or his/her responses to previous questions.

  • The web-based interview can incorporate or preload data about the individual respondent from outside sources (e.g., previous interviews, sample frame files, etc.). Such data are often used to drive skip patterns or define text substitutions. In some cases, the information is presented to the respondent for verification or to reconcile inconsistencies.

  • Numerous question-specific probes may be incorporated to explore unusual responses for reconciliation with the respondent, to probe “don’t know” responses as a way of reducing item nonresponse, or to clarify inconsistencies across questions.

  • Coding of multilevel variables. The web-based instrument uses an assisted coding mechanism to code text strings provided by respondents. Drawing from a database of potential codes, the assisted coder derives a list of options from which the interviewer or respondent can choose an appropriate code (or codes, if it is a multilevel variable with general, specific, and/or detail components) corresponding to the text string.

  • Iterations. When identical sets of questions will be repeated for an unidentified number of entities, such as children, jobs, schools, and so on, the system allows respondents to cycle through these questions as often as is needed.

Refusal Aversion and Conversion

Recognizing and avoiding refusals is important to maximize the response rate. Supervisors will monitor interviewers intensely during the early data collection and provide retraining as necessary. In addition, supervisors will review daily interviewer production reports to identify and retrain any interviewers who are experiencing unacceptable numbers of refusals or other problems.

After a refusal is encountered, the interviewer enters comments into the CMS record that capture all pertinent data regarding the refusal situation, including any unusual circumstances and any reasons given by the sample member for refusing. Supervisors will review these comments to determine what action to take with each refusal. No refusal or partial interview will be coded as final without supervisory review and approval. In completing the review, the supervisor will consider all available information about the case and will initiate appropriate action.

If a follow-up is clearly inappropriate (e.g., there are extenuating circumstances, such as illness or the sample member firmly requested no further contact), the case will be coded as final and no additional contact will be made. If the case appears to be a “soft” refusal, follow-up will be assigned to an interviewer other than the one who received the initial refusal. The case will be assigned to a member of a special refusal conversion team made up of interviewers who have proven especially skilled at converting refusals.

Refusal conversion efforts will be delayed for at least 1 week to give the respondent some time after the initial refusal. Attempts at refusal conversion will not be made with individuals who become verbally aggressive or who threaten to take legal or other action. Refusal conversion efforts will not be conducted to a degree that would constitute harassment. We will respect a sample member’s right to decide not to participate and will not infringe upon this right by carrying conversion efforts beyond the bounds of propriety.

      B.7.3 Quality Control

As a quality control measure throughout the field-test and full-scale data collections, interviewer monitoring will be conducted using RTI’s Quality Evaluation System (QUEST). QUEST was developed by a team of RTI researchers, methodologists, and operations staff, drawing on quality monitoring best practices, to provide standardized monitoring protocols, performance measures, evaluation criteria, reports, and appropriate systems security controls. It is a comprehensive performance quality monitoring system with standard procedures for all phases of quality monitoring: obtaining respondent consent for recording; interviewing respondents who refuse consent and monitoring refusals at the interviewer level; sampling completed interviews by interviewer; evaluating interviewer performance; maintaining an online database of interviewer performance data; and addressing potential problems through supplemental training. These systems and procedures are based on “best practices” identified by RTI in the course of conducting thousands of survey research projects.

RTI will use QUEST to monitor approximately 10 percent of completed B&B:08/12 interviews. Recorded interviews will be reviewed by call center supervisors for key elements such as professionalism and presentation; case management and refusal conversion; and reading, probing, and keying skills. Any problems observed during the interview will be documented on problem reports generated by QUEST. Both positive and constructive feedback will be provided to interviewers, and patterns of poor performance (e.g., failure to use conversational interviewing techniques or failure to probe) will be carefully monitored and addressed in that feedback. As needed, interviewers will receive supplemental training in areas where deficiencies are noted. In all cases, sample members will be notified that the interview may be monitored by supervisory staff.
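A minimal sketch of how such a review sample might be drawn is shown below: roughly 10 percent of completed interviews, selected within interviewer so that every interviewer has at least one case reviewed. The interviewer IDs, case counts, and the within-interviewer selection rule are our own illustrative assumptions, not the QUEST implementation.

    # Illustrative draw of a quality-control review sample: about 10 percent of
    # completed interviews per interviewer, with at least one case per interviewer.
    # Not the QUEST implementation; interviewer IDs and case lists are invented.

    import random

    def qc_sample(cases_by_interviewer, rate=0.10, minimum=1, seed=20110623):
        rng = random.Random(seed)
        selected = {}
        for interviewer, cases in cases_by_interviewer.items():
            k = max(minimum, round(rate * len(cases)))
            selected[interviewer] = sorted(rng.sample(cases, min(k, len(cases))))
        return selected

    completed = {"INT01": list(range(1, 41)), "INT02": list(range(41, 66)), "INT03": [66, 67]}
    for interviewer, cases in qc_sample(completed).items():
        print(interviewer, cases)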

    B.8 Tests of Procedures and Methods

Two experiments are planned for the B&B:08/12 field-test. The first will evaluate whether the use of an informational video to describe the study has any impact on response rates. The second experiment will evaluate the use of an approach designed to model response propensity and target cases with low likelihood of response, with the goal of improving weighted response rates, minimizing nonresponse bias, and improving data quality. Both experiments are described in more detail below.

      B.8.1 Experiment #1: Increasing Survey Participation Using Informational Video

In a prior clearance package, we received permission (approved 8/18/2010) to test whether a short informational Lego video increased a sample member’s likelihood of visiting a website to confirm or update locating information4. We plan to continue this experiment with a second Lego video to encourage sample members to complete the B&B:08/12 survey. Video is an effective and popular form of communication, and viewing short online videos through sites such as YouTube is commonplace among traditionally-aged recent college graduates. The video is intended to be entertaining while explaining why it is important for sample members to complete the survey and how to do so.

We propose to extend the previous experimental design to include a second treatment, which will allow us to evaluate the effectiveness of multiple exposures to informational videos on interview participation rates. Sample members will be randomly assigned to control and treatment groups for the interview invitation video within the control and treatment groups used for the panel maintenance video experiment. The interview treatment group will receive a link to the video with the data collection announcement (described above) and with subsequent reminders. The control group will receive the study materials without the video link. This design will allow examination of the effectiveness of the video for improving interview participation while taking into account the effects of the first video experiment, and it will allow the impact of the interview invitation video to be tested conditionally within the address update video groups; that is, the four cells created by the interaction of the two experiments can be evaluated (e.g., control 1 vs. control 2, control 1 vs. experiment 2, experiment 1 vs. control 2, and experiment 1 vs. experiment 2).
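The sketch below shows one way the nested randomization could be carried out: within each group from the earlier panel-maintenance video experiment, cases are shuffled and split between interview-video treatment and control, yielding the four analysis cells. Group labels and case IDs are illustrative only.

    # Sketch of the nested random assignment described above. Within each prior
    # panel-maintenance video group, cases are split at random into an
    # interview-video treatment half and a control half, producing four cells.

    import random
    from collections import defaultdict

    def assign_within_prior_groups(prior_group, seed=2011):
        rng = random.Random(seed)
        by_group = defaultdict(list)
        for case_id, group in prior_group.items():
            by_group[group].append(case_id)
        assignment = {}
        for group, ids in by_group.items():
            rng.shuffle(ids)
            half = len(ids) // 2
            for case_id in ids[:half]:
                assignment[case_id] = (group, "interview_video")
            for case_id in ids[half:]:
                assignment[case_id] = (group, "control")
        return assignment

    # Hypothetical prior-round assignments from the panel-maintenance experiment.
    prior = {101: "pm_video", 102: "pm_video", 103: "pm_control", 104: "pm_control"}
    print(assign_within_prior_groups(prior))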

      B.8.2 Experiment #2: Response Propensity Approach

Nonresponse bias in sample surveys can lead to inaccurate estimates and compromise data quality. In the B&B:08/12 field-test, we plan to test a new methodology, developed by RTI, that will minimize nonresponse bias by targeting cases that have a low likelihood of responding and a high likelihood of contributing to nonresponse bias5. We describe this methodology, and our plans for conducting an experimental evaluation of its effectiveness, in this section.

Survey organizations commonly address nonresponse bias by attempting to increase the survey response rate, usually by pursuing the nonrespondent cases believed to be most likely to be interviewed. However, this approach may not be successful in reducing nonresponse bias even if higher response rates are achieved; in fact, nonresponse bias could even be increased by adding more cases that are similar to those that have already responded (Merkle and Edelman 2009). If low propensity (to respond) cases are brought into the response pool, we anticipate that this will increase the weighted response rate and result in less biased survey estimates. This is the hypothesis we intend to test with this experiment.
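The two quantities behind this hypothesis can be written down simply: the weighted response rate is the respondents' weight sum over the eligible sample's weight sum, and a basic nonresponse bias estimate for a variable is the difference between the respondent mean and the full-sample mean. The sketch below computes both for invented values; it is not the B&B weighting system.

    # Weighted response rate and a simple nonresponse bias estimate for one
    # variable (respondent mean minus full-sample mean). All values are invented.

    def weighted_response_rate(weights, responded):
        return sum(w for w, r in zip(weights, responded) if r) / sum(weights)

    def bias_estimate(values, weights, responded):
        full_mean = sum(v * w for v, w in zip(values, weights)) / sum(weights)
        resp = [(v, w) for v, w, r in zip(values, weights, responded) if r]
        resp_mean = sum(v * w for v, w in resp) / sum(w for _, w in resp)
        return resp_mean - full_mean

    weights   = [1.0, 1.0, 3.0, 3.0]     # larger weights on the low propensity cases here
    responded = [True, True, False, True]
    employed  = [1, 1, 0, 0]             # hypothetical 0/1 outcome
    print(f"weighted response rate: {weighted_response_rate(weights, responded):.2f}")
    print(f"estimated bias:         {bias_estimate(employed, weights, responded):+.3f}")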

Several student nonresponse bias analyses were conducted using data from B&B:08/096. These analyses compared all B&B:08/09 nonrespondents with respondents, double nonrespondents with B&B:08/09 respondents, and double nonrespondents with B&B:08/09 nonrespondents. In each case bias did exist. The approaches outlined below seek to assess and alleviate the impact of this bias. Additional analyses presented later in this section provide evidence that low propensity cases did have an impact on the amount of bias observed in the B&B:08/09 data, suggesting that the proposed approach may be successful in minimizing nonresponse bias.

RTI is currently undertaking an initiative, modeled on the Responsive Design methodologies developed by Groves (Groves and Heeringa, 2006), to develop new approaches to improve survey outcomes that incorporate different responsive and adaptive features. Although this initiative is still in the development phase, RTI has implemented several of these procedures in recent studies and has published preliminary results (Rosen et al., 2011; Peytchev et al., 2010). RTI’s approach aims to reduce nonresponse bias by using multiple sources of data to produce models that estimate a sample member’s response propensity prior to or during the commencement of data collection. After empirically identifying sample members with the lowest response propensities, we target those cases with interventions (e.g., a higher incentive, prompting, or the use of specially trained interviewers) in an attempt to maximize the average response propensity. In doing so, we minimize bias by targeting the cases that, based on the available data, are expected to have a low response propensity and a high likelihood of contributing to nonresponse bias.

Since B&B:08/12 is the second follow-up in a longitudinal study (and the third contact with the cohort), much is known about the sample members, including data about response patterns in the prior rounds (NPSAS:08 and B&B:08/09). We anticipate that data from the two previous waves will be useful in predicting the likelihood that sample members will respond to B&B:08/12 and that these data will provide information about the types of sample members most likely to contribute to nonresponse bias.

Methodology. The first objective of the response propensity approach is to use information that is known prior to data collection (e.g., frame data, paradata, and indicators of previous response behavior) to develop a predictive model of a given sample member’s propensity to respond. The methodology proceeds as follows:

Step 1: Identify variables which predict propensity to respond and estimate a case’s propensity to respond to an interview.

Step 2: Conduct an experiment that will assist in determining incentive amounts to offer to each propensity group in the full-scale study.

Step 3: Evaluate the predictive ability of the model and whether bias would be significantly impacted if the proposed incentive amounts were offered to the two propensity groups in the full-scale study.

Step 1. Identify variables that predict response propensity and estimate predicted response propensities

As this is the second follow-up of the class of 2008 cohort, much data are available with which to model response propensity. To build a response propensity model for this sample, we estimated logistic regression coefficients using data from the 2008 base year to predict response in the first follow-up (2009). Model building began with a review of potential candidate variables available in the NPSAS:08 data. Based on observations from previous studies at RTI and in the literature (e.g., Fitzgerald, Gottschalk, & Moffitt, 1998; Cominole, Wheeless, Dudley, Franklin, & Wine, 2007; Cataldi et al., 2010; Peress, 2010), the original list of variables was shortened to include only those variables most likely to be key predictors of response, with an emphasis on variables that were likely related to bias. Figure 1 presents the shortened list of candidate variables.

After this initial round of variable selection, zero-order models predicting response in B&B:08/09 were estimated for each of the variables listed in Figure 1. Variables that were significant or near significant at the p < .05 level in the zero-order models were then entered simultaneously into a logistic regression model predicting response to B&B:08/09. Variables that were significant in the full model are denoted by an asterisk in Figure 1. Variables that were not significant in the full model were dropped from the model. The resulting model produced odds ratios ranging from .982 to 3.71 with an R-squared value of .11.
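A hedged sketch of this model-building step is shown below, using the statsmodels package to fit a logistic regression of B&B:08/09 response on a few of the base-year predictors from Figure 1 and to score the sample. The file name, column names, and coding (0/1 response indicator) are placeholders for the actual analysis file, and the variable list is abbreviated.

    # Sketch of the propensity model: logistic regression of B&B:08/09 response
    # on base-year predictors, then predicted propensities for each sample member.
    # "bb_modeling_file.csv" and the column names are hypothetical placeholders.

    import pandas as pd
    import statsmodels.api as sm

    predictors = ["npsas_interview_respondent", "responded_before_prompting",
                  "call_count", "nslds_match", "telematch_match", "other_address_on_file"]

    frame = pd.read_csv("bb_modeling_file.csv")     # one row per sample member
    X = sm.add_constant(frame[predictors])
    y = frame["bb0809_respondent"]                  # 1 = responded to B&B:08/09, 0 = did not

    model = sm.Logit(y, X).fit(disp=False)
    print(model.summary())                          # coefficients; odds ratios via exp(coef)

    frame["propensity"] = model.predict(X)          # predicted response propensity scores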

Figure 1. Candidate Variables for Response Propensity Model Building

Student Characteristics

  • Age

Data from the base year study (NPSAS:08)

  • Interview response status indicator (responded/did not respond) *

  • Responded during early completion period indicator

  • Responded before prompting started indicator *

  • Case received a prompting letter indicator

  • Ever refused indicator

  • Call count *

  • Located for NPSAS:08 indicator

  • NCOA match indicator

  • ACCURINT match indicator

  • NSLDS match indicator *

  • Federal aid amount received

  • CPS match indicator

  • TELEMATCH match indicator *

  • Institution control

  • Parents’ education

Contact data available at the start of the first follow-up (B&B:08/09)

  • Student address on file indicator

  • Parent address on file indicator

  • “Other” address on file indicator *

  • Email address on file indicator

  • Student phone number on file indicator

  • Parent phone number on file indicator

  • “Other” phone number on file indicator


Response propensity scores were then calculated for the B&B:08/12 field test sample using the model developed in the previous step.



Figure 2 shows the cumulative distribution function plot that was used to determine the appropriate cut point between the proposed high and low propensity groups for the predicted B&B:08/12 propensity scores. The horizontal line highlights a jump in propensity scores that, when used to demarcate high and low propensity cases, classifies approximately 39 percent of the sample as “high propensity”. Approximately 61 percent of the sample, those with a predicted propensity score below .885, is classified as “low propensity”.
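The cut point in Figure 2 was chosen by inspecting the plot, but the same idea can be expressed programmatically: sort the predicted scores, find the largest jump between adjacent values, and classify cases on either side of it. The scores in the sketch below are invented, so the resulting cut differs from the .885 value reported above.

    # Automated analog of the visual cut-point choice: locate the largest gap in
    # the sorted propensity scores and split the sample there. Scores are invented.

    def largest_gap_cut(scores):
        ordered = sorted(scores)
        gap, cut = max((b - a, (a + b) / 2) for a, b in zip(ordered, ordered[1:]))
        return cut, gap

    scores = [0.36, 0.41, 0.52, 0.55, 0.60, 0.71, 0.87, 0.89, 0.91, 0.94]
    cut, gap = largest_gap_cut(scores)
    share_low = sum(s < cut for s in scores) / len(scores)
    print(f"cut point ~{cut:.3f} (gap {gap:.2f}); {share_low:.0%} of cases classified as low propensity")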

Figure 2. Predicted Response Propensity Scores – Cumulative Density Function (cdf)

The low propensity group contains those sample cases that we predict will be less likely to complete and most likely to introduce nonresponse bias if they do not respond to the second follow-up. As part of our analyses, we will examine cases around the propensity cut-point to be sure the placements of those cases are not influencing results too heavily.

Table 14 presents the distribution of cases in the high and low propensity groups, by response status to the previous waves of the study. Response status to the first follow-up (B&B:08/09) is the strongest predictor of response propensity: all B&B:08/09 nonrespondents fall into the low propensity group (both “double nonrespondents” and NPSAS respondents who were nonrespondents in B&B:08/09). However, the propensity model should be useful in targeting low propensity cases among B&B:08/09 respondents. Among sample members who responded to both of the prior interviews (“double respondents”), about 60 percent are classified as high propensity. Thus, 40 percent of double respondents are predicted to have a low response propensity. Likewise, about 72 percent of B&B:08/09 respondents who were nonrespondents to NPSAS:08 are predicted to have a low response propensity. With the proposed experiment, we hope to find evidence that identifying the low propensity cases from the set of B&B:08/09 respondents will enable us to implement a targeted intervention to minimize nonresponse among the identified sample members.

Table 15 displays the mean, minimum, and maximum values for propensity scores by prior response status. As expected, the double respondents have the highest average response propensity (0.89) followed by B&B:08/09 respondents who were NPSAS nonrespondents (0.85). Response propensity for both groups of B&B:08/09 nonrespondents is just over 0.50.

Table 14. Distribution of response propensity scores for the B&B:08/12 field-test sample, by response status for NPSAS:08 and B&B:08/09

NPSAS:08 field-test interview status   B&B:08/09 field-test interview status   Propensity group   Percent   Count

Total                                                                                                       1,588

Respondent                             Respondent                              Total                          936
                                                                               High                  60.1     563
                                                                               Low                   39.9     373

Respondent                             Nonrespondent                           Total                          216
                                                                               High                   0.0       0
                                                                               Low                  100.0     216

Nonrespondent                          Respondent                              Total                          217
                                                                               High                  27.6      60
                                                                               Low                   72.4     157

Nonrespondent                          Nonrespondent                           Total                          219
                                                                               High                   0.0       0
                                                                               Low                  100.0     219

NOTE: B&B:08/09 = 2008/09 Baccalaureate and Beyond Longitudinal Study; B&B:08/12 = 2008/12 Baccalaureate and Beyond Longitudinal Study; NPSAS:08 = 2007–08 National Postsecondary Student Aid Study.

Table 15. Descriptive summary of response propensity scores for the B&B:08/12 field-test sample, by response status for NPSAS:08 and B&B:08/09



                                                                                        Propensity score
NPSAS:08 field-test interview status   B&B:08/09 field-test interview status   Count    Mean     Min.     Max.

Total                                                                          1,588    0.782    0.357    0.965

Respondent                             Respondent                                936    0.889    0.669    0.965
Respondent                             Nonrespondent                             216    0.525    0.357    0.705
Nonrespondent                          Respondent                                217    0.846    0.705    0.965
Nonrespondent                          Nonrespondent                             219    0.512    0.365    0.708

NOTE: B&B:08/09 = 2008/09 Baccalaureate and Beyond Longitudinal Study; B&B:08/12 = 2008/12 Baccalaureate and Beyond Longitudinal Study; NPSAS:08 = 2007–08 National Postsecondary Student Aid Study.

Step 2. Conduct an experiment to determine the impact of incentives on bias

We considered alternative experimental designs to address concerns about lack of power (see the section on detectable differences below). However, most of the alternative designs actually increased the detectable differences. If the sample cases are divided into two groups, with half of the cases being low propensity and the other half high propensity, the detectable difference increases by 0.4 percent. If we divide the sample into thirds, with two-thirds of the cases being low propensity and the other third high propensity, the detectable difference decreases by 0.2 percent. Another alternative considered was to categorize all sample members who were nonrespondents to both NPSAS:08 (base year) and B&B:08/09 (first follow-up) as low propensity and exclude them from the model. Doing this and then splitting the remaining cases in half to form the low and high propensity groups increases the detectable difference by 0.8 percent. We also considered categorizing all sample members who were nonrespondents to B&B:08/09 as low propensity and excluding them from the model. Doing this and then splitting the remaining cases in half to form the low and high propensity groups increases the detectable difference by 1.2 percent.
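For reference, a standard detectable-difference calculation for comparing two response proportions is sketched below (two-sided alpha = .05, power = .80, and a conservative base rate of .5). The group sizes shown are the control and treatment totals for the low propensity group in Table 16; the Table 23 figures referenced above may rest on different assumptions.

    # Minimum detectable difference for a two-group comparison of proportions,
    # assuming a two-sided test at alpha = .05 with 80 percent power.

    from math import sqrt
    from statistics import NormalDist

    def detectable_difference(n1, n2, p=0.5, alpha=0.05, power=0.80):
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
        z_power = NormalDist().inv_cdf(power)
        return (z_alpha + z_power) * sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

    # Control vs. treatment within the low propensity group (475 cases each).
    print(f"detectable difference: {detectable_difference(475, 475):.3f}")   # about 9 percentage points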

Incentive amounts offered will vary across treatment and control groups and by predicted propensity level. The following sections describe the history of incentive offers for the B&B:08 cohort, and then present our proposed incentive plan for the B&B:08/12 study.

In NPSAS:08 (the base year study for the B&B:08 cohort), all sample members were offered a $30 check upon interview completion7. In the first follow-up study (B&B:08/09), sample members were offered a $5 prepaid cash incentive along with the promise of a check upon interview completion. The check amount was dependent upon base-year response status — base-year respondents (1,150 members of the B&B:08/12 field-test sample) were offered $30, and base-year nonrespondents (440 members of the B&B:08/12 field-test sample) were offered $50 (the original $30 plus a $20 differential for prior-round nonrespondents)8. Hence, the total incentive amount was $35 for base-year respondents and $55 for base-year nonrespondents. However, only about 49% of base-year nonrespondents responded to the $55 incentive offered in the first follow-up, suggesting that a larger incentive offer may be warranted.

Our proposed incentive plan for B&B:08/12 will focus attention on cases with low predicted propensity to respond. A comparison of treatment and control groups will allow examination of the effectiveness of increasing incentive amounts while taking into consideration the incentives offered in prior rounds. We do not know whether sample members remember being offered or receiving an incentive previously, and, if they do remember, whether they recall the dollar amount. Another question is whether sample members expect a higher incentive amount this time, given increases in the cost of living or their own economic or employment circumstances. These issues all have to be considered in evaluating the results of this experiment.

It is possible that sample members with high predicted response propensity would be willing to accept a lower incentive amount for B&B:08/12 than they received previously. To test this, the high propensity treatment group will receive $15 less than they received in the previous round and the control group will be offered the same amount they were offered in the previous round.

Within both propensity levels, cases will be randomly assigned into control and treatment groups. The control group will be offered the same incentive that they were offered in the prior round, and incentive amounts offered for the treatment group will vary by response propensity relative to the incentive offered in the last round. Given that about 220 people who were offered $35 in the first follow-up did not respond and another 220 who were offered $55 in the first follow-up did not respond, larger incentive amounts may be required to incentivize these individuals. The incentive amount for the low propensity treatment groups will be based on the amount offered previously, and will be $15 more than they were offered in the prior round. For the high propensity treatment group the incentive will be $15 less than they were offered in the prior round.
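To illustrate the assignment rules described above, the following minimal sketch shows one way the random assignment and incentive offers could be computed. The data frame, column names, and seed are hypothetical and are not taken from the study's systems.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2011)  # hypothetical seed

    # Assumed input: one row per eligible sample member, with the predicted
    # propensity group and the total incentive offered in the prior round.
    sample = pd.DataFrame({
        "case_id": range(1, 9),
        "propensity_group": ["high", "high", "high", "high", "low", "low", "low", "low"],
        "prior_incentive": [35, 55, 35, 55, 35, 55, 35, 55],
    })

    def assign_incentives(df):
        df = df.copy()
        df["arm"] = "control"
        # Randomly assign half of the cases in each propensity group to treatment.
        for _, idx in df.groupby("propensity_group").groups.items():
            treated = rng.choice(idx, size=len(idx) // 2, replace=False)
            df.loc[treated, "arm"] = "treatment"
        # Control: same offer as the prior round. Treatment: $15 less for high
        # propensity cases, $15 more for low propensity cases.
        adjustment = np.where(df["propensity_group"] == "high", -15, 15)
        df["offer"] = np.where(df["arm"] == "treatment",
                               df["prior_incentive"] + adjustment,
                               df["prior_incentive"])
        return df

    print(assign_incentives(sample))

Under these rules, treatment offers work out to $20 or $40 for high propensity cases and $50 or $70 for low propensity cases, consistent with table 16.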

The maximum amount offered under this design would be $70, and this would be offered to about 190 people. Table 16 shows the estimated number of eligible sample members in each propensity level and incentive level for the field-test sample.

Table 16. Field-test – estimated number of eligible sample members, by propensity level and incentive amount

                                                              Response propensity group
Incentive amount                                              High        Low

Total eligible sample¹                                        610         950

Control group (incentive amount is the same as they were offered in the previous round)
    Total                                                     305         475
    $35                                                       275         290
    $55                                                        30         185

Treatment group (incentive amount is relative to the amount they were offered in the previous round: $15 lower for high propensity and $15 higher for low propensity)
    Total                                                     305         475
    $20                                                       275           0
    $40                                                        30           0
    $50                                                         0         290
    $70                                                         0         185

Note that some of the totals are rounded.

¹ The field-test sample size is 1,588. Of those, 1,560 are expected to be eligible.

The incentive approach will be targeted in the full-scale study based on field-test results and on the best use of available resources. For example, if an increased incentive is found in the field-test to reduce nonresponse bias for the low propensity group, we may decide to use such an incentive only for the subset of low propensity cases most likely to contribute to nonresponse bias. The incentive could also be targeted to the low propensity cases with the largest weights. The incentive will be used as one tool in our toolkit rather than as a blanket approach to reducing nonresponse bias. The percentage of the sample in each propensity group for the full-scale study will be determined after the field-test.

There is a concern with increasing incentives for sample members who have been difficult to locate in prior rounds. For B&B:08/12, while some of the sample members have not been located previously, most have been located. Although two variables related to locating information are significant predictors of response in B&B:08/09, the strongest predictors are behavioral measures, suggesting that unlocatable cases are not the key source of nonresponse.

It should be noted that incentives are not the only way to encourage response and reduce nonresponse bias. We are proposing the use of incentives as part of an overall data collection design that includes multiple strategies to encourage response. The use of an abbreviated interview as an additional “treatment” for low propensity cases who are nonrespondents during the early response period was also considered. However, such a test would require that the propensity experiment be based only on early response period respondents. Such an experiment will not be conducted, but instead, as is frequently done for the NCES postsecondary sample surveys, an abbreviated interview may be offered to sample members who are still nonrespondents in the latest phase of data collection. The costs and benefits of offering the abbreviated interview will be evaluated to help inform whether the abbreviated interview should be offered earlier for low propensity cases in the full-scale study.

B&B:08/12 will make use of field interviewers to conduct interviews either in person or by phone locally. We considered expanding the use of field interviewers as an experiment for low propensity cases. However, in the field-test, the small sample is not conducive to a large amount of field interviewing given that the sample is spread across the country with few clusters of significant size. Experimenting with field interviewers for low propensity cases would likely be cost prohibitive.

Step 3. Evaluating the Results

To assess the results of the field-test experiment, we will attempt to answer the following questions:

  1. Can we predict the likelihood of responding to the B&B:08/12 survey by using information on the cohort from earlier rounds of data collection?

  2. Can we improve response among cases with low predicted response propensity with the use of increased incentive amounts above what was offered in the past study? Likewise, can we maintain response rates among high propensity cases with a lower incentive offer?

  3. Are we able to reduce nonresponse bias by targeting and converting cases with low response propensity?

Can we predict the likelihood of responding to the B&B:08/12 survey by using information on the cohort from earlier rounds of data collection?

We will examine response rates for each propensity group to determine how well our assigned response propensities actually predicted sample members’ response behavior. Response rates will also be examined to ensure that the overall response rate for the low propensity experimental group is equal to or better than the control group response rate. While the goal of this approach is to minimize bias (and not necessarily to increase response rates), we want to be sure that there is no negative impact on response rates as a result of the response propensity approach. Response rates will also be examined to determine whether the overall response rate for the high propensity experimental group is equal to the control group response rate. Additionally, we will evaluate cases near the thresholds of each propensity group to determine how those cases are similar to, or different from, those nearer the midpoints of each group.
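As a simple illustration of these response-rate comparisons, a two-proportion test of the treatment and control groups within a propensity level might look like the following sketch. The counts are placeholders, not field-test results.

    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical counts of respondents and eligible cases in the low propensity
    # treatment and control groups (about 475 eligible cases are expected per group).
    respondents = [285, 260]   # treatment, control
    eligible = [475, 475]

    # alternative="larger" tests whether the first (treatment) response rate
    # exceeds the second (control) response rate.
    stat, p_value = proportions_ztest(respondents, eligible, alternative="larger")
    print(f"z = {stat:.2f}, p = {p_value:.3f}")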

Can we improve response among cases with low predicted response propensity with the use of increased incentive amounts above what was offered in the past study? Likewise, can we maintain response rates among high propensity cases with a lower incentive offer?

We will work with OMB to determine the optimal incentive amount for full-scale data collection. The incentive amounts, nonresponse bias, and response rates within propensity group will also be examined qualitatively to help inform plans for the full-scale implementation.

Are we able to reduce nonresponse bias by targeting and converting cases with low response propensity?

The purpose of the proposed design is to allow us to test empirically whether nonresponse bias can be reduced by identifying and targeting cases with predicted low response propensity. While the B&B:08/09 study did not include a response propensity design, it can inform whether bias was reduced by including low propensity cases. Thus, an additional nonresponse bias analysis was conducted using data from B&B:08/09 to understand how nonresponse bias was affected by lower propensity cases. First, we calculated response propensity scores (as described above) for the B&B:08/09 full-scale sample. Once response propensity scores had been calculated, the sample was divided into low, medium, and high propensity groups.⁹ Table 17 presents the range of propensity scores and the response rate for each propensity level.


Table 17. Summary of Response Propensity Distribution for the B&B:08/09 full-scale sample

Response propensity level      Range of propensity scores      Response rate
Overall                        .32–.96                         87.9
Low                            .32–.86                         79.0
Medium                         .86–.93                         90.7
High                           .93–.96                         94.4
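For illustration only, the following sketch shows one way response propensity scores could be estimated with a logistic regression and split into the three groups summarized in table 17. The predictor names and synthetic data are assumptions; the actual model drew on the candidate variables identified for propensity modeling (see figure 1).

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def propensity_groups(frame, predictors, outcome="responded_0809"):
        """Fit a logistic response model and assign low/medium/high propensity groups."""
        X = sm.add_constant(frame[predictors].astype(float))
        model = sm.Logit(frame[outcome].astype(float), X).fit(disp=0)
        scores = model.predict(X)
        # Tercile split mirrors the low/medium/high classification used above.
        groups = pd.qcut(scores, q=3, labels=["low", "medium", "high"])
        return scores, groups

    # Synthetic demonstration data with hypothetical predictors.
    rng = np.random.default_rng(0)
    demo = pd.DataFrame({
        "prior_round_respondent": rng.integers(0, 2, 300),
        "days_to_prior_response": rng.normal(30.0, 10.0, 300),
    })
    p = 1 / (1 + np.exp(-(1.0 + 1.5 * demo["prior_round_respondent"]
                          - 0.05 * demo["days_to_prior_response"])))
    demo["responded_0809"] = rng.binomial(1, p)

    scores, groups = propensity_groups(
        demo, ["prior_round_respondent", "days_to_prior_response"])
    print(groups.value_counts())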

As part of the methodological analyses for the B&B:08/09 study, we conducted a student-level nonresponse bias analysis. Nonresponse bias was estimated for variables known for most respondents and nonrespondents. Despite the high response rate to B&B:08/09, our analysis suggests that bias does exist between survey respondents and nonrespondents. For each of the variables listed below, the nonresponse bias was estimated and tested to determine whether it was significant at the p < .05 level (a simplified sketch of the bias computation follows the list):

  • institution type;

  • region;

  • institution enrollment from IPEDS file (categorical);

  • Pell grant receipt (yes/no);

  • Pell Grant amount (categorical);

  • Stafford Loan receipt (yes/no);

  • Stafford Loan amount (categorical);

  • Parent Loan for Undergraduate Students (PLUS);

  • federal aid receipt (yes/no);

  • institutional aid receipt (yes/no);

  • state aid receipt (yes/no); and

  • any aid receipt (yes/no).
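The following minimal sketch illustrates the bias computation, assuming a file with a base weight, a boolean respondent flag, and the categorical variable of interest. The column names are hypothetical, and the design-based standard errors and significance tests used in the actual analysis are not reproduced.

    import pandas as pd

    def nonresponse_bias(df, category_col, weight_col="base_weight",
                         respondent_col="respondent"):
        """Estimated and relative bias (in percent) for each category of a variable.

        Bias is the weighted respondent mean minus the weighted full-sample mean;
        relative bias expresses that difference as a percentage of the full-sample mean.
        """
        results = {}
        total_weight = df[weight_col].sum()
        resp = df[df[respondent_col]]
        resp_weight = resp[weight_col].sum()
        for cat in df[category_col].dropna().unique():
            overall = df.loc[df[category_col] == cat, weight_col].sum() / total_weight
            respondent = resp.loc[resp[category_col] == cat, weight_col].sum() / resp_weight
            bias = respondent - overall
            results[cat] = {"estimated_bias": 100 * bias,
                            "relative_bias": 100 * bias / overall}
        return pd.DataFrame(results).T

    # To mimic table 19 or 20, the respondent flag would simply be set to False
    # for low (or low and medium) propensity cases before calling the function.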

Table 18 presents the results of the original nonresponse bias analysis. To evaluate the effect of the propensity model on bias reduction, we evaluated how much bias would have existed if low and medium propensity cases had been nonrespondents in B&B:08/09. We reran the bias analysis two additional times: first treating all low propensity cases as nonrespondents (table 19), and second treating all low and medium propensity cases as nonrespondents (table 20).¹⁰ We then compared the level of bias in each scenario to the original bias analysis that compared all actual respondents and nonrespondents.



Table 18. Student nonresponse bias before nonresponse adjustment and after weight adjustments for selected variables: 2009

                                            Before nonresponse adjustment
                                            Respondent       Non-respondent   Estimated   Relative
Variable                                    weighted mean    weighted mean    bias        bias

Institution Type
  Public Schools                            63.54            60.53             0.65        1.04
  Private Nonprofit Schools                 32.50            33.24            -0.16       -0.49
  For Profit Schools                         3.96             6.22            -0.49      -11.06
Bureau of Economic Analysis Code (Office of Business Economics [OBE]) Region¹
  New England                                6.75             7.74            -0.22       -3.09
  Mid East                                  16.69            20.38            -0.8        -4.58
  Great Lakes                               16.41            13.60             0.61        3.86
  Plains                                     8.26             9.01            -0.16       -1.93
  Southeast                                 24.68            24.27             0.09        0.36
  Southwest                                  9.25             9.94            -0.15       -1.58
  Rocky Mountains                            4.12             3.14             0.21        5.47
  Far West                                  12.36            10.83             0.33        2.77
  Outlying areas                             1.48             1.11             0.08        5.83
Institution total enrollment³
  <=4,743                                   20.82            21.32            -0.11       -0.52
  >4,743, <=13,042                          21.45            20.18             0.28        1.31
  >13,042, <=27,210                         26.45            28.77            -0.5        -1.87
  >27,210                                   31.28            29.73             0.34        1.09
Pell Grant status
  Received                                  25.81            22.64             0.69        2.74
  Did not receive                           74.19            77.36            -0.69       -0.92
Total Pell amount received³
  <=$1,580                                  27.16            27.37            -0.05       -0.17
  >1,580, <=2,695                           25.07            29.33            -0.93       -3.56
  >2,695, <=4,310                           22.24            17.34             1.06        5.02
  >4,310                                    25.53            25.96            -0.09       -0.36
Stafford Loan status
  Received                                  50.32            43.34             1.51²       3.10
  Did not receive                           49.68            56.66            -1.51²      -2.96
Total Stafford amount received³
  <=$4,410                                  23.09            26.57            -0.76       -3.17
  >4,410, <=5,500                           48.86            39.25             2.09²       4.46
  >5,500, <=6,500                            2.01             3.49            -0.32      -13.72
  >6,500                                    26.04            30.70            -1.01       -3.74
Total PLUS amount received³
  <=$5,000                                  22.21            21.70             0.11        0.50
  >5,000, <=9,396                           23.54            26.70            -0.69       -2.84
  >9,396, <=14,000                          25.97            32.52            -1.42       -5.19
  >14,000                                   28.28            19.08             2.00        7.60
Federal Aid Status
  Received                                  58.99            49.51             2.06²       3.62
  Did not receive                           41.01            50.49            -2.06²      -4.78
Institutional Aid Status
  Received                                  42.45            29.02             2.92²       7.38
  Did not receive                           57.55            70.98            -2.92²      -4.82
State Aid Status
  Received                                  29.52            19.16             2.25²       8.25
  Did not receive                           70.48            80.84            -2.25²      -3.09
Any Aid Status
  Received                                  77.93            63.17             3.21²       4.29
  Did not receive                           22.07            36.83            -3.21²     -12.68

¹ New England = Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont; Mid East = Delaware, District of Columbia, Maryland, New Jersey, New York, Pennsylvania; Great Lakes = Illinois, Indiana, Michigan, Ohio, Wisconsin; Plains = Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota, South Dakota; Southeast = Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Virginia, West Virginia; Southwest = Arizona, New Mexico, Oklahoma, Texas; Rocky Mountains = Colorado, Idaho, Montana, Utah, Wyoming; Far West = Alaska, California, Hawaii, Nevada, Oregon, Washington; Outlying Areas = American Samoa, Federated States of Micronesia, Guam, Marshall Islands, Northern Mariana Islands, Puerto Rico, Palau, Virgin Islands.

² Bias is significant at the 0.05 level.

³ Enrollment, Pell grant amount, PLUS amount, and Stafford loan amount categories were defined by quartiles.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2008 National Postsecondary Student Aid Study (NPSAS:08).


Table 19. Student nonresponse bias before nonresponse adjustment for selected variables, with low-propensity cases treated as nonrespondents: 2009

                                            Before nonresponse adjustment
                                            Respondent       Non-respondent   Estimated   Relative
Variable                                    weighted mean    weighted mean    bias        bias

Institution Type
  Public Schools                            62.79            62.55             0.12        0.20
  Private Nonprofit Schools                 33.98            31.74             1.17        3.57
  For Profit Schools                         3.23             5.71            -1.32      -28.68
Bureau of Economic Analysis Code (Office of Business Economics [OBE]) Region¹
  New England                                7.07             7.01             0.03        0.43
  Mid East                                  17.01            18.18            -0.61       -3.45
  Great Lakes                               17.29            14.65             1.38²       8.65
  Plains                                     9.00             7.81             0.62        7.44
  Southeast                                 24.04            24.64            -0.31       -1.29
  Southwest                                  8.18            10.77            -1.35      -14.18
  Rocky Mountains                            4.62             3.24             0.72²      18.42
  Far West                                  11.68            11.97            -0.15       -1.29
  Outlying areas                             1.10             1.72            -0.32      -22.50
Institution total enrollment³
  <=4,760                                   22.00            20.27             0.9         4.28
  >4,760, <=13,042                          21.25            21.09             0.08        0.40
  >13,042, <=27,210                         25.79            27.65            -0.97       -3.62
  >27,210                                   30.96            30.99            -0.02       -0.06
Pell Grant status
  Received                                  26.17            23.64             1.32        5.32
  Did not receive                           73.83            76.36            -1.32       -1.76
Total Pell amount received³
  <=$1,580                                  26.47            28.37            -0.99       -3.59
  >1,580, <=2,695                           25.16            26.74            -0.82       -3.16
  >2,695, <=4,310                           22.92            18.72             2.19       10.58
  >4,310                                    25.44            26.18            -0.39       -1.49
Stafford Loan status
  Received                                  54.05            43.17             5.67²      11.72
  Did not receive                           45.95            56.83            -5.67²     -10.99
Total Stafford amount received³
  <=$4,400                                  22.50            25.36            -1.49       -6.23
  >4,400, <=5,500                           52.49            41.64             5.66²      12.08
  >5,500, <=6,417                            1.87             2.42            -0.29      -13.41
  >6,417                                    23.14            30.57            -3.88²     -14.34
Total PLUS amount received³
  <=$5,000                                  20.90            22.30            -0.73       -3.38
  >5,000, <=9,396                           23.11            26.73            -1.89       -7.55
  >9,396, <=14,000                          27.09            27.58            -0.25       -0.93
  >14,000                                   28.90            23.39             2.87       11.03
Federal Aid Status
  Received                                  62.25            50.98             5.88²      10.43
  Did not receive                           37.75            49.02            -5.88²     -13.48
Institutional Aid Status
  Received                                  49.63            30.84             9.8²       24.62
  Did not receive                           50.37            69.16            -9.8²      -16.29
State Aid Status
  Received                                  33.71            21.60             6.32²      23.06
  Did not receive                           66.29            78.40            -6.32²      -8.70
Any Aid Status
  Received                                  82.67            66.86             8.25²      11.08
  Did not receive                           17.33            33.14            -8.25²     -32.25

¹ New England = Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont; Mid East = Delaware, District of Columbia, Maryland, New Jersey, New York, Pennsylvania; Great Lakes = Illinois, Indiana, Michigan, Ohio, Wisconsin; Plains = Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota, South Dakota; Southeast = Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Virginia, West Virginia; Southwest = Arizona, New Mexico, Oklahoma, Texas; Rocky Mountains = Colorado, Idaho, Montana, Utah, Wyoming; Far West = Alaska, California, Hawaii, Nevada, Oregon, Washington; Outlying Areas = American Samoa, Federated States of Micronesia, Guam, Marshall Islands, Northern Mariana Islands, Puerto Rico, Palau, Virgin Islands.

² Bias is significant at the 0.05 level.

³ Enrollment, Pell grant amount, PLUS amount, and Stafford loan amount categories were defined by quartiles.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2008 National Postsecondary Student Aid Study (NPSAS:08).





Table 20. Student nonresponse bias before nonresponse adjustment for selected variables, with medium- and low-propensity cases treated as nonrespondents: 2009

                                            Before nonresponse adjustment
                                            Respondent       Non-respondent   Estimated   Relative
Variable                                    weighted mean    weighted mean    bias        bias

Institution Type
  Public Schools                            60.60            63.25            -2.06       -3.29
  Private Nonprofit Schools                 37.08            31.61             4.27²      13.02
  For Profit Schools                         2.31             5.14            -2.21²     -48.89
Bureau of Economic Analysis Code (Office of Business Economics [OBE]) Region¹
  New England                                8.96             6.50             1.92       27.33
  Mid East                                  16.97            17.81            -0.65       -3.69
  Great Lakes                               17.39            15.50             1.47        9.27
  Plains                                     9.66             8.02             1.29       15.34
  Southeast                                 24.26            24.38            -0.1        -0.40
  Southwest                                  7.64            10.07            -1.9       -19.89
  Rocky Mountains                            4.64             3.70             0.73       18.81
  Far West                                  10.01            12.35            -1.82      -15.41
  Outlying areas                             0.47             1.69            -0.95²     -66.74
Institution total enrollment³
  <=4,760                                   24.92            20.03             3.82²      18.08
  >4,760, <=13,042                          21.26            21.14             0.09        0.43
  >13,042, <=27,210                         24.75            27.32            -2.01       -7.51
  >27,210                                   29.08            31.51            -1.9        -6.12
Pell Grant status
  Received                                  25.09            24.79             0.24        0.96
  Did not receive                           74.91            75.21            -0.24       -0.32
Total Pell amount received³
  <=$1,580                                  24.29            28.30            -3.13      -11.40
  >1,580, <=2,695                           26.42            25.81             0.48        1.83
  >2,695, <=4,310                           23.41            20.10             2.58       12.38
  >4,310                                    25.88            25.79             0.07        0.28
Stafford Loan status
  Received                                  58.35            45.56             9.98²      20.63
  Did not receive                           41.65            54.44            -9.98²     -19.33
Total Stafford amount received³
  <=$4,400                                  22.46            24.33            -1.46       -6.10
  >4,400, <=5,500                           55.39            44.57             8.44²      17.97
  >5,500, <=6,417                            1.25             2.44            -0.93²     -42.75
  >6,417                                    20.91            28.65            -6.05²     -22.44
Total PLUS amount received³
  <=$5,000                                  20.36            22.02            -1.3        -6.00
  >5,000, <=9,396                           19.79            26.77            -5.45      -21.58
  >9,396, <=14,000                          23.35            28.92            -4.34      -15.67
  >14,000                                   36.50            22.29            11.09²      43.62
Federal Aid Status
  Received                                  65.24            53.87             8.87²      15.73
  Did not receive                           34.76            46.13            -8.87²     -20.32
Institutional Aid Status
  Received                                  54.23            35.78            14.4²       36.15
  Did not receive                           45.77            64.22           -14.4²      -23.93
State Aid Status
  Received                                  36.32            24.88             8.93²      32.60
  Did not receive                           63.68            75.12            -8.93²     -12.30
Any Aid Status
  Received                                  85.30            71.37            10.87²      14.61
  Did not receive                           14.70            28.63           -10.87²     -42.51

¹ New England = Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont; Mid East = Delaware, District of Columbia, Maryland, New Jersey, New York, Pennsylvania; Great Lakes = Illinois, Indiana, Michigan, Ohio, Wisconsin; Plains = Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota, South Dakota; Southeast = Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Virginia, West Virginia; Southwest = Arizona, New Mexico, Oklahoma, Texas; Rocky Mountains = Colorado, Idaho, Montana, Utah, Wyoming; Far West = Alaska, California, Hawaii, Nevada, Oregon, Washington; Outlying Areas = American Samoa, Federated States of Micronesia, Guam, Marshall Islands, Northern Mariana Islands, Puerto Rico, Palau, Virgin Islands.

² Bias is significant at the 0.05 level.

³ Enrollment, Pell grant amount, PLUS amount, and Stafford loan amount categories were defined by quartiles.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2008 National Postsecondary Student Aid Study (NPSAS:08).


As shown in Table 21, the amount of bias changes with the treatment of low propensity cases. When the comparison was made between actual respondents and nonrespondents, 27.5% of variable categories were significantly biased. The proportion of significantly biased categories increased to 37.5% when low propensity cases were reclassified as nonrespondents, and to 45.0% when both low and medium propensity cases were reclassified as nonrespondents. In effect, this comparison demonstrates that much of the potential nonresponse bias is concentrated in the low propensity cases, and that obtaining responses from them reduces it.

Table 21. Summary of student interview nonresponse bias analysis, overall and with low and medium propensity cases treated as nonrespondents

                                                              Mean estimated    Median estimated    Percent of variable categories
Nonresponse bias statistics                                   relative bias     relative bias       significantly biased
Actual respondents and nonrespondents                          3.90              3.14               27.50
Low propensity cases treated as nonrespondents                 9.40              8.10               37.50
Low and medium propensity cases treated as nonrespondents     17.89             15.54               45.00

Given that the purpose of the propensity experiment is to reduce bias by obtaining more interviews from low and medium propensity cases, this evaluation demonstrates that our proposed approach should curtail nonresponse bias by identifying cases with lower response propensity for targeted intervention during data collection.

Evaluation of nonresponse bias in the B&B:08/12 field test. From the beginning of data collection, we will monitor variables of particular analytic interest and evaluate to what extent the overall estimates may change with the completion of additional low propensity cases. For B&B:08/12, we will pay particular attention to items related to postbaccalaureate enrollment, employment status, and teaching. In this way, we can assess whether the strategy to minimize potential nonresponse bias is working as expected.

As part of our field-test analyses, a nonresponse bias analysis will be conducted to determine whether bias has been reduced by the response propensity approach. The respondents and nonrespondents in the low propensity control group will be evaluated to estimate the bias due to nonresponse, and the respondents and nonrespondents in the low propensity treatment group will be evaluated in the same way. These two sets of bias estimates will then be compared and tested to determine whether there is more or less bias in the treatment group than in the control group. A significant reduction in bias in the treatment group would suggest that a higher incentive amount may be warranted for the full-scale study. Unlike the NPSAS:12 field-test sample, however, the B&B:08/12 field-test sample is not nationally representative, so the results will need to be interpreted and generalized with caution.

Level of Effort by Propensity Level

While incentives are critical to successful data collection efforts, RTI is mindful that any strategy to improve survey response should be beneficial for both overall cost and data quality. The ultimate goal of the response propensity approach is not to increase response rates overall, but rather to improve response among more difficult cases as a way to minimize nonresponse bias. As such, an incentive (or any other intervention aimed at increasing response among difficult cases) should be cost-effective in that it reduces the level of effort required to obtain completed interviews, especially for targeted subgroups.

With this in mind, we examined the level of effort expended in B&B:08/09 across response propensity levels for key metrics as a way to approximate the unit-level cost for completed interviews. The results presented in Table 22 indicate that the level of effort required, and thus the cost associated with obtaining a completed interview, is greater for cases with lower response propensity.

With the proposed experimental design for the B&B:08/12 field test, we will be able to compare the level of effort required for each propensity level, and for the treatment and control groups within propensity levels. The field test will allow us to determine not only whether a change in incentive amount improves response among low propensity cases, and thus reduces the associated nonresponse bias, but also whether a change in incentive amount affects the level of effort required.

Table 22. Level of Effort Measures, by Response Propensity, B&B:08/09 Full-scale study

Response Propensity Level    Response Rate    % Early Completions    Average Call Count    % that required tracing
Overall                      87.9             59.0                    9.0                   7.1
Low Propensity               79.0             41.0                   13.8                  11.7
Medium Propensity            90.7             63.4                    7.6                   6.1
High Propensity              94.4             73.9                    5.3                   3.0

  1. Experimental Design

The video experiment and the incentive experiment could be confounded, so the treatment and control groups for the interview video experiment will be randomly assigned both within the treatment and control groups of the first (address update) video experiment and within the incentive experiment and propensity groups. See figure 3 for a diagram of the experimental groups.

  1. Null Hypotheses

  1. Response rates will not be lower among the low propensity treatment groups than in the low propensity control groups.

  2. Response rates will not be lower among the high propensity treatment group (which receives $15 less than the control group) than in the high propensity control group.

  3. There will be no difference in unit nonresponse bias between the low propensity treatment and control groups.

  4. There will be no difference in response rates between sample members who receive the link to the interview invitation video and those who do not, regardless of whether they received the video during the address update.

  5. There will be no difference in response rates between cases who receive the link to the interview invitation video and those who do not, conditional on whether they received the video during the address update.

  1. Detectable Differences

As part of the planning process for developing the field-test experiment design, the differences necessary for statistical significance have been estimated. That is, we estimated how large a difference in response rates between the control and treatment groups is necessary to determine that the rates differ in hypotheses 1, 2, 4, and 5, and how large a difference in nonresponse bias estimates between the control and treatment groups is necessary to determine that the bias estimates differ in hypothesis 3.

Figure 3 – Assignment of sample cases to experimental conditions

Table 23 shows the expected sample sizes and the statistically detectable differences for the five hypotheses. Several assumptions were made regarding response and participation rates and sample sizes. In general, the closer a rate is to 50 percent (from either direction), the larger the detectable difference. Likewise, smaller sample sizes require larger detectable differences.

Assumptions:

  1. Detectable differences with 95 percent confidence were calculated as follows:

    1. Hypotheses 3, 4, and 5 assume a two-tailed test.

    2. Hypotheses 1 and 2 assume a one-tailed test.

  2. The sample will be equally distributed across experimental cells.

  3. Only eligible cases will be included in the analyses of hypotheses 1, 2, and 3.

  4. All ineligible cases will be included in the analyses of hypotheses 4 and 5 because ineligibility will be determined after the interview begins.

  5. The response rate for the control group for hypothesis 1 will be 54 percent.

  6. The response rate for the control group for hypothesis 2 will be 94 percent.

  7. The participation rate for the control group for hypotheses 4 and 5 will be 58 percent.

  8. Unit nonresponse bias for the control group for hypothesis 3 will be ten percent.¹¹

  9. The statistical tests will have 80 percent power with an alpha of 0.05.
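Under these assumptions, the detectable differences in table 23 can be approximated with a standard two-proportion power calculation. The following sketch is an approximation and may differ slightly from the exact formula used by the study statisticians; the function and its arguments are illustrative.

    from scipy.stats import norm

    def detectable_difference(p_control, n_per_group, alpha=0.05, power=0.80,
                              two_tailed=True):
        """Smallest difference between two proportions detectable with the given power."""
        z_alpha = norm.ppf(1 - alpha / 2) if two_tailed else norm.ppf(1 - alpha)
        z_beta = norm.ppf(power)
        # Search upward for the smallest treatment-group rate whose difference
        # from the control rate reaches the required critical value.
        for step in range(1, 10000):
            delta = step / 10000
            p_treat = min(p_control + delta, 0.9999)
            se = (p_control * (1 - p_control) / n_per_group
                  + p_treat * (1 - p_treat) / n_per_group) ** 0.5
            if (p_treat - p_control) / se >= z_alpha + z_beta:
                return delta
        return None

    # Hypothesis 1: one-tailed test, 54 percent control response rate, 475 cases
    # per group (assumptions 2, 5, and 9 above).
    d = detectable_difference(0.54, 475, two_tailed=False)
    print(f"{100 * d:.1f} percentage points")  # close to the 8.0 shown in table 23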



In preparation for the full-scale data collection, we will notify OMB of the experiment results and recommend incentive and informational (Lego) video plans that are in line with those findings.

Table 23. Detectable differences for field-test experiment hypotheses

Hypothesis    Control group (sample size)                                      Treatment group (sample size)                                     Detectable difference with 95 percent confidence
1             No additional incentive for the low propensity cases (475)       Additional incentive for the low propensity cases (475)           8.0
2             No additional incentive for the high propensity cases (305)      Lower incentive for the high propensity cases (305)               5.7
3             No additional incentive for the low propensity cases (475)       Additional incentive for the low propensity cases (475)           4.8
4             No Lego video for survey (794)                                   Lego video for survey (794)                                       6.9
5             No Lego video for address update or survey (397)                 No Lego video for address update, Lego video for survey (397)     9.7
              Lego video for address update, no Lego video for survey (397)    Lego video for both address update and survey (397)               9.7
              No Lego video for address update or survey (397)                 Lego video for address update, no Lego video for survey (397)     9.7
              No Lego video for address update, Lego video for survey (397)    Lego video for both address update and survey (397)               9.7

    1. Reviewing Statisticians and Individuals Responsible for Designing and Conducting the Study

Names of individuals consulted on statistical aspects of study design, along with their affiliation and telephone numbers, are provided below.

Name                    Affiliation    Telephone
Dr. Susan Choy          MPR            (510) 849-4942
Dr. Robin Henke         MPR            (510) 849-4942
Dr. Jennie Woo          MPR            (510) 849-4942
Dr. John Riccobono      RTI            (919) 541-7006
Dr. Jennifer Wine       RTI            (919) 541-6870
Dr. James Chromy        RTI            (919) 541-7019
Mr. Peter Siegel        RTI            (919) 541-6348

In addition to these statisticians and survey design experts, the following statisticians at NCES have also reviewed and approved the statistical aspects of the study: Dr. Tracy Hunt-White, Ted Socha, Linda Zimbler, Matt Soldner, Dr. Sean Simone, and Dr. Tom Weko.

    1. Other Contractors’ Staff Responsible for Conducting the Study

The study is being conducted by the Postsecondary Longitudinal and Sample Studies (PLSS) Program within the PACE Division of NCES in ED. NCES’s prime contractor is RTI. RTI is being assisted through subcontracted activities by MPR Associates. Principal professional staff of the contractors, not listed above, who are assigned to the study are provided below:

Name                        Affiliation    Telephone
Ms. Vicky Dingler           MPR            (510) 849-4942
Ms. Emily Forrest-Cataldi   MPR            (510) 849-4942
Ms. Stephanie Nevill        MPR            (510) 849-4942
Dr. Bryan Shepherd          RTI            (919) 316-3482
Mr. Jeff Franklin           RTI            (919) 485-2614
Mr. Joe Simpson             RTI            (919) 541-5941
Ms. Melissa Cominole        RTI            (919) 990-8456
Ms. Donna Anderson          RTI            (919) 990-8399
Mr. Mike Bryan              RTI            (919) 541-7498



  1. Overview of Analysis Topics and Survey Items

The B&B:08/12 data collection instrument is presented in Appendix G. Many of the data elements to be used in B&B:08/12 appeared in the previously approved B&B:08/09. Additional items will also be included in B&B:08/12. These items will be tested in cognitive interviews prior to field-test data collection.

References

Alt, M.N., and Henke, R.R. (2007). To Teach or Not to Teach? Teaching Experience and Preparation Among 1992–93 Bachelor’s Degree Recipients 10 Years After College (NCES 2007-163). U.S. Department of Education. Washington, DC: National Center for Education Statistics.

Bradburn, E., and Berger, R. (2002). Beyond 9 to 5: The Diversity of Employment Among 1992–93 College Graduates in 1997 (NCES 2003–152). U.S. Department of Education. Washington, DC: National Center for Education Statistics.

Bradburn, E.M., Berger, R., Li, X., Peter, K., and Rooney, K. (2003). A Descriptive Summary of 1999–2000 Bachelor’s Degree Recipients 1 Year Later, With an Analysis of Time to Degree (NCES 2003–165). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Bradburn, E.M., Nevill, S., and Cataldi, E.F. (2006). Where Are They Now? A Description of 1992–93 Bachelor’s Degree Recipients 10 Years Later (NCES 2007–159). U.S. Department of Education. Washington, DC: National Center for Education Statistics.

Clune, M.S., Nuñez, A.-M., and Choy, S.P. (2001). Competing Choices: Men’s and Women’s Paths After Earning a Bachelor’s Degree (NCES 2001–154). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Breiman, L., Friedman, J., Stone, C. J., & Olshen, R. A. (1984). Classification and Regression Trees (1st ed.). Chapman and Hall/CRC.

Cataldi, E., Green, C., Henke, R., Lew, T., Woo, J., Shepherd, B., & Siegel, P. (2010). 2008–09 Baccalaureate and Beyond Longitudinal Study (B&B:08/09): First Look (NCES 2011-236). U.S. Department of Education, National Center for Education Statistics. Washington, DC.

Choy, S.P. (2000). Low-Income Students: Who They Are and How They Pay for Their Education (NCES 2000-169). U.S. Department of Education. Washington, DC: National Center for Education Statistics.

Choy, S.P., and Li, X. (2006). Dealing With Debt: 1992–93 Bachelor’s Degree Recipients 10 Years Later (NCES 2006-156). U.S. Department of Education. Washington, DC: National Center for Education Statistics.

Henke, R.R., Chen, X., and Geis, S. (2000). Progress Through the Teacher Pipeline: 1992–93 College Graduates and Elementary/Secondary School Teaching as of 1997 (NCES 2000–152). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Cominole, M., Wheeless, S., Dudley, K., Franklin, J., & Wine, J. (2007, December 11). Beginning Postsecondary Students Longitudinal Study 2004-2006 (BPS:2004/2006) Methodology Report (NCES 2008-184). U.S. Department of Education, National Center for Education Statistics. Washington, DC.

Fitzgerald, J., Gottschalk, P., & Moffitt, R. (1998). An Analysis of Sample Attrition in Panel Data: The Michigan Panel Study of Income Dynamics. The Journal of Human Resources, 33(2), 251-299.

Groves, R. M., & Heeringa, S. (2006). Responsive design for household surveys: tools for actively controlling survey errors and costs. Journal of the Royal Statistical Society Series A: Statistics in Society, 169(Part 3), 439-457.

Henke, R.R., Geis, S., and Giambattista, J. (1996). Out of the Lecture Hall and Into the Classroom: 1992–93 College Graduates and Elementary/Secondary School Teaching (NCES 96–899). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Henke, R.R., Peter, K., Li, X., and Geis, S. (2005). Elementary/Secondary School Teaching Among Recent College Graduates: 1994 and 2001 (NCES 2005–161). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Henke, R.R., and Zahn, L. (2001). Attrition of New Teachers Among Recent College Graduates: Comparing Occupational Stability Among 1992–93 Graduates Who Taught and Those Who Worked in Other Occupations (NCES 2001–189). U.S. Department of Education. Washington, DC: National Center for Education Statistics.

Horn, L.J., and Zahn, L. (2001). From Bachelor’s Degree To Work: Major Field of Study and Employment Outcomes of 1992–93 Bachelor’s Degree Recipients Who Did Not Enroll in Graduate Education by 1997 (NCES 2001–165). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

McCormick, A.C., Nuñez, A.-M., Shah, V., and Choy, S.P. (1999). Life After College: A Descriptive Summary of 1992–93 Bachelor’s Degree Recipients in 1997, With an Essay on Participation in Graduate and First-Professional Education (NCES 1999–155). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

McCormick, A., and Horn, L.J. (1996). A Descriptive Summary of 1992–93 Bachelor’s Degree Recipients: 1 Year Later, With Essay on Time to Degree (NCES 96–158). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Nevill, S.C., and Chen, X. (2007). The Path Through Graduate School: A Longitudinal Examination 10 Years After Bachelor’s Degree (NCES 2007-162). U.S. Department of Education. Washington, DC: National Center for Education Statistics.

Peress, M. (2010). Correcting for Survey Nonresponse Using Variable Response Propensity. Journal of the American Statistical Association, 105(492), 1418-1430.

Peytchev, A., S. Riley, J.A. Rosen, J.J. Murphy, and M. Lindblad. (2010). Reduction of Nonresponse Bias in Surveys through Case Prioritization. Survey Research Methods, 4(1), 21-29

Rosen, J., Murphy, J.J., Peytchev, A., Riley, S., & Lindblad, M. (2011). The Effects of Differential Interviewer Incentives on a Field Data Collection Effort. Field Methods.


1 The institution sample was later freshened from the 2005–06 Integrated Postsecondary Education Data System files and also supplemented to include a sufficient number of institutions to have state-representative undergraduate student samples in six states: California, Georgia, Illinois, Minnesota, New York, and Texas.

2 The additional institutions later selected for the state augmentation caused 20 institutions that participated in the NPSAS:08 field test also to be in the NPSAS:08 full-scale study.

[1] We are proposing to conduct a response propensity experiment in the field test, which is described in detail in section B.8.b. Cases for the reinterview will be randomly selected across the experimental groups for both propensity levels. We will, first, select potential reinterview cases from within a portion of the completed interviews separately for the high and low propensity cases (one potential reinterview case from every five completed high propensity interviews, and one potential reinterview case from every five completed low propensity interviews). We will monitor counts of reinterview cases and adjust the reinterview sampling rates, as necessary, to ensure that cases are selected across the levels of response propensity.

3 For more information on the ELS response rates, see the ELS:2002 Base-Year to Second Follow-up Data File Documentation (pp. 104–105, 162, 191).

4 Results of this experiment showed no significant difference in the rate of address update completions between the group that saw the video and the group that did not. Approximately 10.3% of the sample provided an address update, regardless of video condition.

5 A similar experiment has been approved for the NPSAS:12 field test.

6 Nonresponse bias analyses will be documented in the forthcoming First Look and Methodology report for the B&B:08/09 study.

7 Both NPSAS:08 and B&B:08/09 data collection plans consisted of an early response phase, a production phase, and a nonresponse conversion phase. Incentives were not offered for interview completions during the production phase. For more detail about the phases of data collection, see http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=200801

and http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=201002.

8 An experiment conducted in the B&B:08/09 field test indicated that a $5 prepaid cash incentive was more effective than no incentive or a prepaid check incentive in encouraging response. Based on field test results, the $5 cash prepaid incentive was offered to all sample members in the full-scale study. We propose to offer the prepaid incentive to all sample members in the B&B:08/12 field test.

9 When this analysis was conducted, the proposed experiment was based on three, rather than two, propensity levels. Thus, the B&B:08/09 sample was divided into three propensity levels.

10 Tables 20 and 21 are based on an earlier classification of propensity scores that used three propensity levels; however, we do not expect the conclusions regarding the relationship between propensity scoring and bias to differ with the current two-level approach. These tables are included to illustrate that low propensity cases do affect bias if they do not respond.

11 Ten percent is generally considered the maximum acceptable value for unit nonresponse bias analysis.
