

Appendix H
Responses to OMB Questions
Concerning B&B:08/09



Memorandum United States Department of Education

Institute of Education Sciences

National Center for Education Statistics



TO: Edie McArthur
Assistant to the Commissioner, NCES

DATE: January 18, 2008

FROM: Kristin Perry
Postsecondary Studies Division, NCES

Jennifer Wine, RTI International

SUBJECT: Responses to OMB Questions concerning the 2008/09 Baccalaureate and Beyond Study (B&B:08/09) (ED-05-CO-0033)


Questions dated 12/17/2007


1. Please provide a brief conceptual distinction between the approach taken by HSLS and B&B.  Is NCES planning to continue to sponsor surveys taking both approaches or is one preferred?   Please answer in light of the discussion in Part A 1d of the B&B:08 package which mentions some weaknesses of past high school study approaches.


HSLS is a longitudinal study of a 9th grade cohort of students enrolled in algebra I in the fall of 2009. Data collection will include: a student survey and assessment; parent, teacher, counselor, and administrator surveys; and high school transcripts. To the extent possible, data collection will use web-based methodologies. Follow-up studies with the cohort are expected 2 and 8 years following high school.


B&B is a longitudinal study of students who completed a bachelor’s degree during the 2007-2008 academic year. Data collection will include: a student survey; transcripts from the institution awarding the bachelor’s degree; and extant data from sources such as the National Student Loan Data System. The student survey will be multimode (a self-administered web interview, a telephone interview, and a field interview). The B&B base year study will be conducted in 2008 (NPSAS:08), and one follow-up is planned for 2012.


There are advantages and limitations to both the HSLS and B&B designs, depending on the intended goals. As discussed in section A1d of the clearance package, the elementary/secondary school cohorts are age cohorts. All students sampled will be in the same grade and, therefore, of approximately the same developmental age. When students are surveyed following high school, many will be pursuing postsecondary education, but enrollment past the base year will not be a qualification for continued participation in the study; all students from the original sample will be followed.


In contrast, the postsecondary studies sample from among students enrolled during a particular academic year (2007-08 for B&B), irrespective of age. Students selected for the B&B sample will have completed requirements for a bachelor’s degree during the 2007-08 academic year. However, they can hold prior degrees or be completing the degree at a much later age than the traditional college student (21 or 22 years of age). For the Beginning Postsecondary Students (BPS) Longitudinal Study, again, age is irrelevant. The key qualification for inclusion in the BPS:04 cohort is that, at some time during the 2003-04 academic year, students were enrolled in a postsecondary program for the first time since completing high school requirements. A key difference between these postsecondary studies and HSLS is that students are selected once they have enrolled for postsecondary education whereas, with HSLS, students do not necessarily enroll beyond high school.


2. Please provide (as soon as possible but doesn't need to be before the call) a high level matrix of NCES's student collections with information such as cross sectional/longitudinal, universe, sample size, groups for whom separate estimates will be available, purpose, main topics, etc.  This big picture overview would be useful going forward.  Below is a partial example of what would be useful.


The attached table (“Q2_Overview_NCES_Surveys.xls”) provides an overview of all of the NCES surveys and their relationships to one another.


3. To what extent are the graduate school questions in B&B informed by NSF/SRS's work on the National Survey of Recent College Graduates? With whom at NSF has NCES consulted on these studies?


NSF staff have participated and continue to serve on the technical review panels (TRPs) for both of NCES’s postsecondary longitudinal studies, B&B and the Beginning Postsecondary Students Longitudinal Study. NSF TRP members have included Mary Golladay who, upon her retirement, was replaced by Joan Burelli; John Tsapogas, former NSRCG project officer, who attended the last meeting of the B&B:93/03 TRP; and, currently, Nancy Leach, who represents NSF on both the B&B and BPS panels.


RTI has worked to ensure that items asked in each of the NCES postsecondary longitudinal studies, including questions on graduate school enrollment, are consistent with other NCES postsecondary studies and, in particular, their respective base year studies. Consistency with prior NCES studies would be expected to take precedence over consistency with SESTAT to ensure that data users have the opportunity for trend analysis. However, for new items, RTI will consult the SESTAT guidelines to ensure that, to the extent possible, questions asked for the B&B:08/09 first follow-up study are consistent with those already in use by NSF.


4. What is the informed consent process in NPSAS:08 for students who may be sampled for B&B?


Appropriate language for obtaining informed consent for participation in the B&B longitudinal study is included in the initial screens of the interview for both web and telephone administration.


The attachment, “Q4_Consent_Wording.doc,” contains the text relevant to informed consent.


RTI’s IRB has reviewed and approved the attached wording.


5. Please provide a copy of the 2003 B&B incentive experiment documentation.


In brief, the B&B:93/03 field test incentive experiment compared response rates between two groups of sample members: (1) those offered $20 to complete a web, self-administered interview (WSAI) within the first 10 days of data collection; and (2) those offered no money to complete the web interview. The difference in response rates between the two groups was statistically significant: more of those offered the incentive participated during the first 10 days (12.7 percent versus 8.7 percent; z = 1.9; p < .05).
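For reference, the significance test reported above is a standard two-proportion z-test. The Python sketch below shows the computation; the group sizes used here are hypothetical placeholders (the actual field test group sizes appear in the attached working paper), so the statistic it prints will not exactly reproduce the reported z = 1.9.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """One-sided z-test for the difference between two independent proportions."""
    # Pooled proportion under the null hypothesis of equal response rates
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # One-sided p-value from the standard normal survival function
    p_value = 0.5 * (1 - math.erf(z / math.sqrt(2)))
    return z, p_value

# Response rates from the B&B:93/03 field test experiment; the group sizes
# below are hypothetical and used only for illustration.
z, p = two_proportion_z(p1=0.127, n1=500, p2=0.087, n2=500)
print(f"z = {z:.2f}, one-sided p = {p:.3f}")
```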


The attachment, “Q5_BB03_FT_Exp_Results,” provides more detailed results from the B&B:93/03 field test incentive experiment, excerpted from the working paper for the study.


Questions dated 1/14/08


  1. Student Locating


    a. Please provide more information about what is planned for the Field Test in "assessing the quality, completeness, and effectiveness of various types of locating data obtained during the base year."


B&B base year locating data will include addresses provided by the institution, as well as addresses and other contact information provided by the respondent for parents, another contact, and for his/her expected address one year following NPSAS data collection. These data will be evaluated by determining the extent to which the addresses were accurate and successful in locating and interviewing the sample member.


Table 3-7 from the BPS:04/06 field test methodology report (nces.ed.gov/pubs2006/200601.pdf) and table 15 from the NPSAS:08 field test methodology report (in preparation) provide examples of analyses conducted on locating data. These tables are included below for convenience. Because the BPS field test sample was smaller than needed, it was supplemented with student samples taken from lists not used during the NPSAS:04 base year data collection. Consequently, these locating results are lower than might be expected for B&B.

Table 3-7. Interview completion rates, by address update reply: 2005

                                     Provided update         Located         Interviewed, given located
Type of address update              Number    Percent    Number    Percent      Number    Percent
Total                                  420      100.0       410       96.9         310       75.9
Parent mailing                          70       15.5        60       95.4          60       88.7
Advance notification mailing           190       44.9       180       95.2         130       72.6
Website reply                          170       39.6       170       99.4         120       74.5

NOTE: Detail may not sum to totals because of rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2004/06 Beginning Postsecondary Student Longitudinal Study (BPS:04/06) Field Test.


Table 15. Batch processing record match rates, by tracing source: 2007

Method of tracing¹      Number of records sent    Number of records matched    Percent matched
Total                                    9,390                        3,990               42.5
CPS                                      2,950                        1,920               65.3
NCOA                                     3,000                          190                6.3
Telematch                                3,000                        1,790               59.6
Accurint                                   450                           80                2.7

1 These rows are not mutually exclusive. If a student could not be located, the case was sent to one or more of the tracing sources.

NOTE: Detail may not sum to totals because of rounding. All percentages are unweighted and based on the number of eligible students within the row under consideration. CPS = Central Processing System; NCOA = National Change of Address.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2008 National Postsecondary Student Aid Study (NPSAS:08) Field Test.


    b. Please provide more information about what is planned for the Field Test in "evaluating the utility of pre-CATI submission of sample members to Telematch, NCOA and CPS as a mechanism for obtaining updated locating information."


Prior to the B&B field test, RTI will update the most recent locating information for potential B&B sample members using parent- and student-reported address updates, as well as updates received from NCOA and other batch sources. New and old locating information will be loaded into the case management system for data collection, and any additional locating information obtained by interviewers during production interviewing will be loaded as well. These addresses will be retained until information obtained during data collection indicates that an address is no longer viable for a particular sample member. Addresses will be marked as obsolete based on returned mailings, dead-end telephone lines, and information provided by parents, other contacts, and other individuals now living at an address or holding the telephone number of a sample member. A typical analysis of batch tracing results is provided below in table 15 from the BPS:04/06 full-scale methodology report (nces.ed.gov/pubs2008/2008184.pdf). Each source was evaluated for the percentage of records matched relative to records sent.


Table 15. Batch processing record match rates, by tracing source: 2006

Tracing source         Number of records sent    Number of records matched    Percent matched¹,²
Total                                   98,240                       45,450               46.3
NCOA - Round 1³                         23,080                        5,690               24.7
NCOA - Round 2                           3,200                          150                4.6
CPS - 2004–05                           22,500                       12,440               55.3
CPS - 2005–06                           22,510                        9,440               41.9
Telematch                               22,960                       16,460               71.7
Accurint                                 4,000                        1,270               31.8

1 Match rate includes instances when sample member contact information was confirmed and when new information was provided.

2 Percent is based on the number of records sent for batch tracing. Because records were sent to multiple tracing sources, multiple record matches were possible.

3 The entire sample was sent to the NCOA in the first round, excluding approximately 15 cases that did not have mailing addresses.

NOTE: Detail may not sum to totals because of rounding. CPS = Central Processing System. NCOA = National Change of Address.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2004/06 Beginning Postsecondary Students Longitudinal Study (BPS:04/06).
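The percent-matched column in these batch tracing tables is simply the number of records matched divided by the number of records sent to that source; because cases were sent to multiple sources (see footnote 2 above), the rows cannot be summed. A minimal Python sketch of that calculation, using the rounded counts from table 15 above:

```python
# Per-source batch-tracing match rates (matched / sent), using the rounded
# counts published in table 15 of the BPS:04/06 report; rates are unweighted.
batch_results = {
    "NCOA - Round 1": (23_080, 5_690),
    "NCOA - Round 2": (3_200, 150),
    "CPS - 2004-05": (22_500, 12_440),
    "CPS - 2005-06": (22_510, 9_440),
    "Telematch": (22_960, 16_460),
    "Accurint": (4_000, 1_270),
}

for source, (sent, matched) in batch_results.items():
    print(f"{source:<15} {100 * matched / sent:5.1f} percent matched")
```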



    c. For what percentage of students do you anticipate requiring the assistance of locating companies or other similar tools? How does this compare to the NPSAS percentages?


The NPSAS:08 data collection begins with student locating information provided by institutions. Since that information is typically obtained early in the academic year, it tends to be out of date. In contrast, the longitudinal studies obtain locating information directly from the student and, therefore, that information tends to be more accurate than the NPSAS locating information. For the B&B sample, addresses will have been obtained directly from students within the 12 months prior to data collection and will be quite current.


The BPS:04 cohort provides an example of the differences in rates that can be expected between the NPSAS base year data collection and a first follow-up interview which, for BPS, was conducted 2 calendar years following the base year interview. During NPSAS:04, 26 percent of sample members were sent for intensive tracing before data collection ended. For BPS:04/06, 23 percent of cases were sent for intensive tracing. Again, because the BPS:04/06 field test required the supplemental sample, this rate was probably higher than would be expected for B&B.


  2. Experiments


    a. How were these four experiments chosen, particularly as a set? Were there other experiments deemed a lower priority for this study?


Four experiments have been proposed for the B&B:08/09 field test data collection: (1) design of the data collection announcement; (2) use of text messaging to announce the start of data collection; (3) use of prepaid and promised incentives to increase participation during the early response phase; and (4) use of incentives during production interviewing when interviewers are making outbound calls. These particular experiments were chosen because they expand on prior findings from the set of postsecondary studies RTI has conducted or is conducting for NCES and contribute to the survey methodology literature. A more detailed explanation of the reasons for choosing these four experiments may be found in the answers to questions 2b through 2f below. In the meantime, we would like to discuss with OMB the possibility of adding two additional experiments to the B&B field test.


The first additional experiment compares participation rates between prior round nonrespondents who receive reminder (prompting) calls halfway through the early response period to participation rates among prior round nonrespondents who do not receive such calls. The same experiment was conducted as part of the BPS:04/06 field test and showed that nonrespondents who were prompted participated at a higher rate than those not prompted (21.5 percent compared to 9 percent; z = 5.57; p < .01).


Prompting Calls


The more important finding from the BPS prompting experiment could benefit longitudinal studies in general, which face the challenge of getting prior round nonrespondents to participate in a current data collection. BPS showed that base year nonrespondents who were prompted during the early response period were as likely to participate during that phase of interviewing as were base year respondents, negating the usual response rate differences seen between prior round respondents and nonrespondents.¹ However, because a large portion of the BPS field test sample came from the supplemental sample, many of these nonrespondents were not “traditional” longitudinal nonrespondents who had previously been contacted by study staff and refused, or who had been difficult to locate. If the same outcome is observed when the experiment is repeated, the results will make an important contribution to the literature for longitudinal studies.


Layout of Response Options


The second additional experiment that RTI would like to conduct manipulates the layout of selected items in the interview and then compares responses for quality, completeness, and burden. When the self-administered web survey was added as a mode for the postsecondary surveys, the advantage of interviewers being able to code responses to open-ended questions was lost. As an example, consider the question: “Which of the following types of community service or volunteer work did you perform?” When administered by an interviewer, the question could be asked as an open-ended question, and interviewers would select appropriate response categories. That approach is not always practical when administered by web; providing open text boxes for responses that would be coded post hoc by project staff would be cost- and time-prohibitive when multiple items are administered to large samples.


In earlier web designs, items like the community service question were displayed in a check-all format in which respondents and interviewers could select as many options as necessary to provide a response. However, some researchers (e.g., Dillman and Christian, SAPOR presentation, October 4, 2007; Smyth, Dillman, Christian, and Stern, 2006²; Rasinski, Mingay, and Bradburn, 1994³) have indicated that data completeness is improved when question options each require an explicit yes/no response, rather than a box being checked only if the option applies. The NPSAS:08 field test included an experiment comparing the check-all and radio button formats. The average number of affirmative responses per form, by format and completion mode, was compared. Overall, the radio button format produced a higher average number of affirmative responses than the checkbox format. The differences by format were statistically significant: “reasons for attending NPSAS institution” (t = 12.69, p < .001), “job affects school experiences” (t = 3.14, p < .01), “reasons for applying to graduate school” (t = 3.79, p < .001), and “reasons for not applying to graduate school” (t = 6.89, p < .001).

But such an administration adds significantly to the burden on respondents who have to check “yes” or “no” for every item in a list. In the NPSAS experiment, on average, respondents took 33.1 seconds to complete “reasons for attending NPSAS institution” in the radio button format, compared with 26.1 seconds in the checkbox format (t = 8.9, p < .001). Respondents also took more time to complete the radio button format than the checkbox format for “job affects school experiences” (t = 2.11, p < .05) and “reasons for applying to graduate school” (t = 4.31, p < .001). “Reasons for not applying to graduate school” was the only set of items that yielded no significant time difference for the two formats overall or by administration mode. The burden imposed on respondents should be weighed against the benefit gained by having explicit yes/no responses for each item.
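The timing comparisons above are two-sample t-tests on per-respondent completion times. A minimal sketch of that kind of comparison is shown below; the timing arrays are hypothetical placeholders for illustration, not NPSAS:08 field test data.

```python
from scipy import stats

# Hypothetical per-respondent completion times (seconds) for one item set,
# shown only to illustrate the comparison; actual field test values are
# reported in the text above.
radio_button_times = [35.0, 31.2, 29.8, 36.4, 33.5, 30.9, 34.1, 32.6]
checkbox_times = [27.1, 24.8, 26.5, 25.9, 28.2, 23.7, 26.8, 25.4]

# Welch's t-test, which does not assume equal variances across formats
t_stat, p_value = stats.ttest_ind(radio_button_times, checkbox_times, equal_var=False)
print(f"t = {t_stat:.2f}, two-sided p = {p_value:.4f}")
```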


The experiment designed for the B&B field test will compare different item designs to determine if there is an optimal design, measured in terms of completeness, quality, and burden, for items that were once administered as open-ended questions. Three formats will be compared in both self-administered web and CATI modes. The first two borrow from the NPSAS design: one will display the explicit yes/no format requiring respondents to answer “yes” or “no” for every item in a list. The second format will allow respondents to select only applicable options using a checkbox format. The third format will first ask respondents the question in an open-ended format and provide text boxes in which to record responses. The open-ended question screen will be followed immediately by a second coding screen that re-displays respondents’ text responses and asks them to select the most appropriate option describing their response from a dropdown list provided.


In each case, respondents will be working with the same set of response categories.

Which design is displayed will be randomized, and respondents may be administered different designs within the same interview. Results from the three groups will then be compared on completeness (proportion of “yes” responses), quality (nature of the open-ended responses; information lost when coding takes the place of text strings), and burden on the respondent (time to administer each item).
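A minimal sketch of that randomization logic is shown below; the item names, format labels, and function are hypothetical placeholders used only to illustrate independent, per-item assignment of the three formats within a single interview.

```python
import random

# Hypothetical format labels and item names; the actual B&B instrument items
# and experimental conditions may differ.
FORMATS = ["explicit_yes_no", "checkbox", "open_ended_then_code"]
ELIGIBLE_ITEMS = [
    "community_service_types",
    "reasons_attended_institution",
    "reasons_applied_grad_school",
]

def assign_formats(case_id):
    """Randomly assign one of the three display formats to each eligible item.

    Assignment is independent across items, so a single respondent can be
    administered different formats within the same interview.
    """
    rng = random.Random(case_id)  # seeded per case for reproducibility
    return {item: rng.choice(FORMATS) for item in ELIGIBLE_ITEMS}

print(assign_formats(case_id=10234))
```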


    b. What level of difference does the literature suggest that you might see between the mail treatments? Will this be within the study's detectable difference?

The NPSAS:08 field test compared participation rates among sample members who received the data collection announcement in a Priority Mail envelope with rates among sample members who received the announcement in a regular-sized (#10) envelope sent by First-Class Mail. Results showed a significant difference in early interview completion between the two groups: 39 percent of those sent the materials via Priority Mail completed the interview during the early response phase, compared with 33 percent of those sent the materials via First-Class Mail (χ² = 9.22, p < .01).


Before an investment is made to send all B&B data collection announcements via Priority Mail, which is more expensive than First-Class Mail, the B&B field test will compare participation rates between announcements sent in Priority Mail envelopes and announcements sent via First-Class Mail in envelopes of the same size as the Priority envelopes (10 x 13 inches). If the higher participation rates observed in the NPSAS field test were due to the use of Priority Mail rather than to envelope size, about the same difference can be expected in B&B (roughly 6 percentage points for Priority Mail compared with regular First-Class Mail), which is greater than the detectable difference calculated for the experiment (see table 8 of the original forms clearance package).


    c. What outcomes of interest were found to be improved by the use of text messaging?


If successful, text messaging could be used as a quick, inexpensive method for notifying sample members of the start of data collection, and prompting them to respond before the early response period expires and/or data collection ends. In addition, as indicated in the literature, text messages that bounce back will help to identify nonworking cell numbers, saving locating efforts.



    d. What level of difference do you think is reasonable to expect from the text messaging experiment?


As indicated in table 8 of the OMB forms clearance package, the design of the text messaging experiment and the anticipated sample size are expected to allow detection of a difference of at least 5.5 percentage points. To add to the information about text messaging, a companion experiment is being included in the BPS:04/09 field test data collection as well. Although the results of either experiment alone may not provide sufficient information, the two combined could provide strong support for the use of text messaging in future data collections.
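For context, a detectable difference of this kind comes from a standard two-proportion power calculation. The sketch below shows the usual approximation; the group size and baseline rate are placeholder values for illustration (table 8 of the clearance package documents the actual design assumptions).

```python
import math

def normal_quantile(q):
    """Inverse standard normal CDF via bisection on erf (no external libraries)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def min_detectable_difference(n_per_group, p_baseline, alpha=0.05, power=0.80):
    """Approximate minimum detectable difference between two proportions."""
    z_alpha = normal_quantile(1 - alpha / 2)
    z_power = normal_quantile(power)
    return (z_alpha + z_power) * math.sqrt(2 * p_baseline * (1 - p_baseline) / n_per_group)

# Placeholder inputs, for illustration only
mdd = min_detectable_difference(n_per_group=500, p_baseline=0.35)
print(f"Minimum detectable difference: {100 * mdd:.1f} percentage points")
```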


    e. What is the rationale for this experiment comparing a $5 cash and $5 check prepayment when the NPSAS experiment was between a prepayment and no prepayment? What will be different in B&B to make additional experimentation potentially fruitful?


The NPSAS:08 field test prepayment experiment sent a check for $5 to sample members in the prepayment condition. The idea of sending $5 cash instead of a check arose out of the BPS:04/06 data collection. Some sample members reported not maintaining any sort of bank account and, as a result, had difficulty cashing an incentive check. While a check is more efficient and requires less security prior to mailing, cash may be more convenient for sample members and, therefore, more likely to result in interview participation. The goal is to encourage participation in the interview, not to burden respondents with finding a bank willing to cash a $5 check.


The B&B design differs from NPSAS in another important respect. Rather than offering the prepayment during the nonresponse conversion phase of data collection, as was done in NPSAS, B&B will offer it during the early response phase, at the start of data collection. Because B&B addresses will have been collected directly from sample members within the past year, B&B will begin with more reliable locating information. The prepayment is therefore more likely to reach sample members and, consequently, more likely to have an effect if prepayment does indeed increase the likelihood of a response.


    f. What will be different in the proposed production incentive experiment that might produce a larger effect than was seen in the BPS:04/06 experiment?


The BPS:04/06 field test incentive experiment⁴ was designed to evaluate whether an incentive offered during the production interviewing phase affected the rate at which sample members participated. At the end of the early response period, interviewers began contacting the remaining sample members (n = 1,700) in an effort to have them complete a telephone interview. Prior to data collection, sample members were assigned to a $0 or a $20 incentive group. Excluding all cases that participated during the early response period and all CAPI cases, 18.5 percent of sample members eligible for the $20 response incentive completed the interview. By contrast, a 13.2 percent response rate was attained for sample members who were not eligible for an incentive (z = 2.97; p < .05).


However, the effect observed during BPS may not reflect what can be expected from a longitudinal data collection, because the large majority of the sample was new to the study. The original sample design for the NPSAS:04 field test did not yield enough eligible first-time beginners to sustain two BPS follow-up data collections. RTI supplemented the sample with over 2,100 students (comprising 81 percent of the field test sample) from institution lists that were not used during the NPSAS:04 field test data collection and who, therefore, had not been contacted or interviewed during the base year. Locating information for students in the supplemental sample was out of date (66 percent were located) and, once located, the students were less likely to be interviewed (68 percent of those located were interviewed).


Production interviewing incentives can be costly to projects. Repeating the experiment with the B&B field test data collection may provide more definitive results on the effectiveness of such incentives, or lack thereof.

1 Wine, J., Cominole, M., Wheeless, S., Bryant, A., Gilligan, T., Dudley, K., and Franklin, J. (2006). 2004/06 Beginning Postsecondary Students Longitudinal Study (BPS:04/06) Field Test Methodology Report (NCES 2006–01). U.S. Department of Education. Washington, DC: National Center for Education Statistics.


2 Smyth, J., Dillman, D., Christian, L., and Stern, M. (2006). Comparing Check-All and Forced-Choice Question Formats in Web Surveys. Public Opinion Quarterly, 70(1): 66–77.

3 Rasinski, K., Mingay, D., and Bradburn, N. (1994). Do Respondents Really “Mark All That Apply” On Self-Administered Questions? The Public Opinion Quarterly, 58(3): 400–408.

4 Wine, J., Cominole, M., Wheeless, S., Bryant, A., Gilligan, T., Dudley, K., and Franklin, J. (2006). 2004/06 Beginning Postsecondary Students Longitudinal Study (BPS:04/06) Field Test Methodology Report (NCES 2006–01). U.S. Department of Education. Washington, DC: National Center for Education Statistics.
