Annual report for 2009 Activities under SRS Generic Clearance

NCSES Generic Clearance of Survey Improvement Projects for the National Center for Science and Engineering Statistics

OMB: 3145-0174

June 1, 2010



Ms. Shelly Wilkie Martinez

Office of Management and Budget

New Executive Office Building

Room 10201

Washington, DC 20503



Dear Shelly:


In December 2006, the National Science Foundation’s Division of Science Resources Statistics (SRS) generic clearance for survey improvement projects (OMB Number 3145-0174) was extended for three years, with the stipulation that annual reports be submitted describing the work conducted under the clearance. Between January 1, 2009 and December 31, 2009, SRS conducted six studies under projects approved by OMB for the SRS generic clearance for this three-year period. The projects covered redesign issues for current surveys as well as exploratory work for the Postdoc Data Project and the Microbusiness R&D Survey. The report that follows describes the work conducted for these studies during calendar year 2009.


The methods employed in these generic clearance activities included face-to-face interviews, web surveys, and telephone interviews. The interviews used cognitive testing methods to elicit unbiased information about how respondents perceive items such as definitions, instructions, questions, and response categories.


The number of respondent burden hours used during 2009 in the six studies described below totaled 478 hours: 198 hours for face-to-face interviews, 274 hours for web surveys, and 6 hours for telephone interviews. These hours cover the 670 individuals who participated in these studies. Participation time ranged from as little as four minutes in some studies to as much as 2.5 hours in others; typical participation ranged from 30 minutes for a web survey to two hours for a face-to-face cognitive interview. Except for the Postdoc Data Project, we did not track the additional time required to screen respondents during 2009. However, we estimate about 10 minutes per respondent for a recruitment phone call for 139 face-to-face and telephone interviews, or 23 additional burden hours. This estimate includes the time spent with both respondents and potential respondents who were screened out of the process. For the Postdoc Data Project, approximately 130 burden hours were spent screening potentially eligible organizations and gaining their cooperation. Including these 153 hours of screening burden brings our total burden for the SRS generic clearance to 631 hours for 2009.


During calendar year 2009, SRS concentrated on advancing projects begun in earlier years. Through a variety of methodological tasks, SRS staff continued to pursue improvements in the quality of current and potential SRS data collections. SRS expects that future activities under the generic clearance will continue to increase data quality and improve the survey experience for respondents.


Brief descriptions and findings of the 2009 generic clearance activities by survey or initiative follow.


Higher Education Research and Development (R&D) Survey


1. Cognitive testing of redesigned questionnaire


In January and February of 2009, SRS and Westat staff visited 13 institutions to conduct three rounds of cognitive testing for the revised questionnaire. They interviewed the current survey respondents and the staff who provide survey data to them, including staff in accounting or budget offices, sponsored programs offices, and institutional research offices. The survey questionnaire for the FY 2009 pilot test incorporated findings from these interviews.


The cognitive interviews averaged 2 hours in duration for 70 individuals, for an estimated 140 burden hours during calendar year 2009.


2. Web usability testing of survey website


In the summer of 2009, SRS and Westat staff conducted 15 web usability tests, either in person or over the phone using screen-sharing software. These tests were conducted with the survey respondents and others closely involved in collecting and reporting the survey data. The web survey questionnaire for the FY 2009 pilot test incorporated findings from these usability tests.


The usability tests averaged 2.5 hours in duration for 30 individuals, for an estimated 75 burden hours during calendar year 2009.


Survey of Graduate Students and Postdoctorates in Science and Engineering


In July and August 2009, RTI conducted two rounds of in-person interviews for cognitive/usability testing related to the development of an expanded postdoc component for the GSS. The purposes of these interviews were to explore respondents’ understanding of possible additional items about postdocs and to gain insights into strategies for identifying an appropriate person within an institution to be the coordinator for data about postdocs.


Participants were recruited from 11 universities and colleges (institutions) in the Houston and Detroit areas. Institutions were selected to provide variation on key characteristics likely to be related to 1) the number of postdocs at an institution and 2) how records on postdocs are maintained within the institution. Participants included individuals with different roles in providing data to the GSS: postdoc coordinators, who provide only postdoc data; school coordinators, who oversee both graduate student and postdoc data; student coordinators, who provide only graduate student data; and unit respondents.

There were 14 participants in the interviews, and a total of 26 burden hours.


The results of the interviews informed the preparation of a web survey and contact strategies for a pilot of an expanded postdoc data collection that was fielded in spring 2010.


Postdoc Data Project (PDP)


1. Pretest of the PDP Questionnaire

In the fall of 2009, SRS conducted a pretest of a questionnaire to test content with early-career researchers, including foreign-degreed postdocs and MD/PhD postdocs. The split-panel test contained alternative formulations of questions. The results of the split-panel tests will be used to design the appropriate measures for the next phase of the Postdoc Data Project.


The sample consisted of 1,000 individuals funded by NSF (n=500) in 2006 and NIH (n=500) in 2006 and 2008. These samples had the advantage of including a number of foreign-degreed postdocs across a range of disciplines in the science, health, and engineering fields. Of the 1,000 respondents sampled, 375 completed the pretest (37.5 percent). There were 218 completions from the NSF sample (43.6 percent) and 157 from the NIH sample (31.4 percent).


A key finding for both the NSF and NIH samples was the poor quality of available contact information after three years. In many cases, postdocs took new positions or moved out of the country, so that it was impossible to find email or postal addresses. Even NSF contact information gathered in March of 2009 was outdated by the fall when respondents were contacted.


These pretests averaged 31 minutes for 375 respondents, for an estimated 194 burden hours.

2. Nonprofit List Assessment

In January 2009, SRS conducted a second list assessment study to test the usefulness of the IRS Form 990 (tax form for nonprofit organizations) as a sampling frame for surveying postdocs. For the first study (conducted in fall 2008), IRS data were purchased from the Urban Institute’s database. The first study showed that the contact names on the list (board members, trust managers, etc.) were persons who did not have sufficient access to postdoc information. For the second study, staff used the same sample of organizations in an effort to find an appropriate contact in human resources or personnel functions. After an initial contact by mail, the new respondent was surveyed either by web or by telephone interview between late January and late April 2009.


In total, 1,309 organizations received invitations to participate in the second study in 2009. Eighteen percent (239 organizations) were found ineligible early in the contact protocol. Of the remaining 1,070 organizations, 156 responded to the study (15 percent).

The conclusion from the two studies is that the IRS list maintained by the Urban Institute lacks the minimum quality needed for a postdoc sampling frame. The original contact information on the file is insufficient: it does not identify the appropriate contact person in human resources, nor does it provide any email address to streamline contact efforts. Incorrect and outdated information compounded these problems, producing many failed attempts to reach organizations. Further efforts to develop a strategy for locating appropriate contacts using this list were unsuccessful.


The list assessment interviews averaged four minutes to complete for 156 respondents, for an estimated 11 burden hours. However, the screening to identify and recruit the 156 participants involved extensive phone calls. An estimated 130 burden hours were used in the recruitment process.


Microbusiness R&D Survey


NSF initiated a study in 2007 to determine the feasibility of surveying U.S. companies with fewer than five employees about their R&D and other innovation-related activities. During Phase 1, NSF conducted 25 presurvey interviews from February 2008 through August 2008 to determine whether small companies could answer the types of questions on NSF’s RD-1 form. In addition, the interviews covered a set of innovation questions. Using the findings from Phase 1 and incorporating questions from NSF’s newly designed 2008 Business R&D and Innovation Survey, NSF drafted a questionnaire for cognitive testing with 20 small companies during the spring of 2009. The interviews were conducted in the Washington, DC, and Miami, Florida, metropolitan areas.


Respondents in the first and second rounds had problems with the more detailed scales, with certain skip patterns, and with some of the questionnaire’s content. The respondents estimated that they could complete the survey in twenty to thirty minutes, and they would prefer to complete the survey online if that were offered as a choice. The revisions to the second-round questionnaire improved the instrument significantly, but respondents still exhibited problems with a quarter of the survey questions. Additional testing will be needed before finalizing the questionnaire for this survey.


The interviews averaged 1.25 hours per interview for 25 respondents, for an estimated 32 burden hours.


Sincerely yours,

Fran Featherston

Division of Science Resources Statistics

National Science Foundation


cc:

S. Plimpton (IRM/DAS)

L. Carlson (SBE/SRS)

M. Frase ( ‘’ )

J. Jankowski ( ‘’ )

E. Rivers ( ‘’ )

S. Cohen ( ‘’ )


