Responses to OMB

Program Review of the Science and Technology Centers (STC): Integrative Partnerships Program

OMB Control Number: 3145-0212


Responses to OMB questions about the Program Review of NSF’s STC Program

1. OMB still thinks that 1,000 is too high given the low utility and that NSF should and could go much lower.

Surveys will be sent to 667 former graduate students. With the anticipated response rate of 75%, this will produce a total of 500 completed surveys. A sample size of 500 will result in a margin of error of plus or minus 3.7 percentage points at the 95% confidence level.



The sample size for the study was based on the requirement to have a sample selected from each of the 17 centers and to have a margin of error for sample percentages of some characteristics of interest (such as yes/no participation in specific STC activities) of about plus or minus 3.5 percentage points at the 95% confidence level.
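
For reference, these figures follow the standard margin-of-error formula for a sample proportion. The Python sketch below is illustrative only: the simple infinite-population calculation gives about plus or minus 4.4 points for 500 completes, and the stated 3.7 points is consistent with applying a finite population correction; the frame size of roughly 1,750 former graduate students used below is a placeholder assumption, not a figure from the study.

    import math

    def moe(n, p=0.5, z=1.96, N=None):
        # Margin of error for a sample proportion at 95% confidence,
        # using the conservative p = 0.5. If a frame size N is given,
        # apply the finite population correction sqrt((N - n) / (N - 1)).
        se = math.sqrt(p * (1 - p) / n)
        if N is not None:
            se *= math.sqrt((N - n) / (N - 1))
        return z * se

    print(round(100 * moe(500), 1))          # ~4.4 points, no correction
    print(round(100 * moe(500, N=1750), 1))  # ~3.7 points with the assumed frame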

2. Okay, except NSF needs to decide whether to consider their contact of former faculty advisors as a separate collection.

NSF does not consider the contact of former faculty advisors a separate collection, since this contact is made only in cases where other methods of tracking down a former student have not worked. We would like to keep this as a potential source of contact information: the more accurate the contact information for former graduate students, the more accurate the survey results will be. However, if this determination is not acceptable to OMB, NSF will exclude this avenue for tracking down former graduate students.


3. NSF has shown that the frame has enough on it to be useful, but lacks commitment to use it or conduct bias analysis (OMB guidance requires that agencies give us their plan to do it, not just a possibility that they might think about doing it).

We include the plan to conduct bias analysis below, as well as in the supporting statement.


We will examine bias in the estimates due to nonresponse by some graduate students in the sample, following the four steps described below. Based on this analysis, we will adjust the sampling weights of responding students to account for student nonresponse.


1. Examination of Response Rates

The first step will be to monitor the overall response rate, the response rate in each stratum (center), and the response rates for subgroups such as gender and race. High response rates (over 80 percent), both for the entire sample and for subgroups, may indicate that no further analysis of nonresponse bias is needed. Large differences in response rates across strata and subgroups indicate that potential bias may exist: for example, if the response rate for an important subgroup is very low, then any difference in a characteristic of interest between that subgroup and the others would bias the estimates. Using the survey results, we will examine whether subgroup characteristics differ, especially within strata where the response rate is low.
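
A minimal sketch of this monitoring step, assuming a sample file with one row per sampled student; the file name and column names (center, gender, race, and a 0/1 responded flag) are hypothetical:

    import pandas as pd

    # Hypothetical sample file: one row per sampled student, with the
    # frame variables used for subgroup checks and a 0/1 response flag.
    sample = pd.read_csv("stc_sample.csv")

    print(f"Overall response rate: {sample['responded'].mean():.1%}")

    # Response rates by stratum (center) and by subgroup; rates well
    # below the overall figure flag groups for further bias analysis.
    for group in ["center", "gender", "race"]:
        print(sample.groupby(group)["responded"].mean().sort_values())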


2. Comparison of Sample and Frame Estimates

We will use the sampling weights based on the probability of selection of responding students, without any nonresponse adjustment, together with the data from these students to compute population estimates of characteristics that are available on the sampling frame but were not used for stratification at the time of sample selection. These estimates will be compared with the known population values. If there is a large difference between an estimate and the population value after accounting for sampling error, this may indicate bias in estimates based only on respondents.
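
A sketch of this comparison, again with hypothetical file and column names: base_weight is the inverse of the selection probability, and phd stands in for a 0/1 frame characteristic not used in stratification.

    import numpy as np
    import pandas as pd

    sample = pd.read_csv("stc_sample.csv")
    frame = pd.read_csv("stc_frame.csv")
    respondents = sample[sample["responded"] == 1]

    # Respondent-only estimate using base weights with no nonresponse
    # adjustment, for a characteristic available on the frame.
    estimate = np.average(respondents["phd"], weights=respondents["base_weight"])

    # Known population value computed directly from the full frame;
    # a gap well beyond sampling error suggests nonresponse bias.
    population_value = frame["phd"].mean()
    print(f"Respondent estimate: {estimate:.3f}  Frame value: {population_value:.3f}")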

3. Comparison of Respondent-Based Estimates to Estimates from External Sources

For questions where data on a characteristic of interest are available from an external source (e.g., graduation rate), we will compare the estimates from our survey responses to those from nationally available data. A large difference may indicate bias in the survey estimates, assuming the external source provides an unbiased estimate.
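
The same kind of check can be expressed as a simple significance test against the external figure; every number below is a placeholder, not a real estimate.

    # Survey-based estimate of, e.g., the graduation rate and its
    # standard error, plus an external national benchmark (placeholders).
    survey_est, survey_se = 0.62, 0.022
    external_benchmark = 0.58

    # Treating the external source as approximately unbiased, a gap of
    # more than ~2 standard errors flags a potentially biased estimate.
    z = (survey_est - external_benchmark) / survey_se
    print(f"z = {z:.2f}")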


4. Nonresponse Propensity Model

Finally, should the response rate fall below 80 percent, we will construct a propensity model to estimate, for both responding and nonresponding students, the probability that a sampled student responds to the survey; this estimate is called a propensity score. The propensity scores will be estimated with a logistic regression model based on variables that are available for both nonresponding and responding students. Students will then be grouped using the estimated propensity scores, and within each group we will compare the frame characteristics of responding and nonresponding students. In addition to assessing bias, this grouping provides a method of forming weighting classes for adjusting the weights of responding students to reduce nonresponse bias.
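
A sketch of this step, using logistic regression via scikit-learn and quintile grouping of the propensity scores; the predictor columns, the base_weight column, and the choice of five weighting classes are illustrative assumptions.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    sample = pd.read_csv("stc_sample.csv")

    # Predictors must be known for respondents AND nonrespondents,
    # i.e. frame variables (column names hypothetical).
    X = pd.get_dummies(sample[["center", "gender", "race"]], drop_first=True)
    model = LogisticRegression(max_iter=1000).fit(X, sample["responded"])
    sample["propensity"] = model.predict_proba(X)[:, 1]

    # Group students into quintiles of the propensity score; within each
    # weighting class, inflate respondent base weights by the inverse of
    # the class response rate so respondents stand in for nonrespondents.
    sample["pclass"] = pd.qcut(sample["propensity"], 5, labels=False)
    class_rate = sample.groupby("pclass")["responded"].transform("mean")
    sample["adj_weight"] = sample["base_weight"] / class_rate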

4. Did the grant come through the PRA? Our concern is that AAAS is also contacting the 1,000 students. Can you clarify whether or not that is the case? We would not want there to be any duplicative efforts.

The grant is not subject to the Paperwork Reduction Act. AAAS, independent from NSF, makes its own decisions regarding the design of its review of the STC program. The grantee has informed NSF that it will not be contacting former graduate students; therefore, there will be no duplication of effort.

