Report on 2006 Activities under SRS Generic Clearance with OMB


OMB: 3145-0174



June 25, 2007



Ms. Shelly Wilkie Martinez

Office of Management and Budget

New Executive Office Building

Room 10201

Washington, DC 20503


Dear Shelly:


In December 2003, the National Science Foundation’s Division of Science Resources Statistics (SRS) generic clearance for survey improvement projects (OMB Number 3145-0174) was extended for three years, with the stipulation that reports describing the work conducted under the clearance be submitted annually. Between January 1, 2006 and December 31, 2006, work took place on four surveys under projects approved by OMB for this three-year period. The report that follows describes the work conducted for six studies during calendar year 2006. In addition, we have included a section at the end of the report describing analytic results completed during 2006 for two projects whose data collection was conducted under the generic clearance in prior years.


The methods employed in these generic clearance activities included face-to-face interviews, telephone interviews, and email surveys. The interviews for several of the projects used cognitive testing methods to elicit unbiased information about how respondents perceive our definitions, instructions, questions, and response categories. In addition to the results of the methods described above, we are reporting on analytic work completed during 2006 for two earlier data collections. As promised in our 2005 activity report, we report on the analytic results from eye-movement data collected in 2005. We also report on analytic results from a salary question wording experiment embedded in a large-scale field test conducted in 2006 under the survey’s regular OMB clearance.


The number of respondent burden hours used during 2006 in the six studies described below totaled 1,072 hours. A breakdown by method shows the following: 54 hours for face-to-face cognitive interviews, 45 hours for cognitive interviews by telephone, and 973 hours for the two email surveys. These hours cover the 2,426 individuals who participated in these studies. Typical participation ranged from 15 to 30 minutes for the email surveys and from 30 to 90 minutes for the interviews.
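
For reference, these totals derive from the project-level figures reported in the sections that follow:

Face-to-face interviews: 24 hours (Industrial R&D) + 30 hours (GSS site visits) = 54 hours
Telephone interviews: 26 hours (Academic R&D debriefings) + 19 hours (Facilities Survey debriefings) = 45 hours
Email surveys: 122 hours (Academic R&D hospital follow-up) + 851 hours (RBS pilot) = 973 hours
Participants: 18 + 52 + 486 + 30 + 55 + 1,785 = 2,426 individuals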


A brief description of the generic clearance activities undertaken along with their findings is reported below by survey, collection of surveys, or initiative.



1. Survey of Industrial Research and Development (R&D)


As a continuation of the recordkeeping study started in 2005, SRS visited nine companies in the second round of this effort. The first round (conducted during 2005) explored general topics in recordkeeping for R&D activities. This round of interviews focused more narrowly on the financial records that companies keep on their research and development activities. SRS changed the interview protocol for this round to reflect that focus; we targeted larger companies and individuals familiar with the financial records within each company. As in the first round, SRS conducted this research independently of the U.S. Census Bureau, the current survey administrator. That is, rather than specifically contacting respondents to the current survey, we contacted officials at the top levels of each company who were responsible for the financial aspects of its R&D activities.


We found that companies were more inclined to speak in generalities than to provide specific details about their records. Respondents never showed us their recordkeeping systems, even though we asked specifically about the structure of the systems and the types of information kept.


We did learn some useful information that has allowed us to proceed with a redesign of the survey questionnaire into a modular format. In particular, we learned that R&D managers rely on project-level information more than on company financial reports to monitor and track the progress of R&D. Companies often do not separate research from development because the lines between the R and the D of R&D are blurred. Companies can produce a total amount for R&D performed, but the types of costs included vary from company to company. Many companies have collaborative agreements with other entities to perform R&D, especially agreements with academic institutions to conduct basic research. Companies are able to separate money spent domestically on R&D from money spent in foreign countries.


A third round of site visits with companies will be conducted in 2007 to discuss company records on human resource data and data on intellectual property and patents. We will report results of those interviews in the next annual report.


We estimated the burden for this project using 80 minutes per interview in Round 2. This includes an interview lasting 60 to 75 minutes plus respondent preparation for the interview. On average, two company representatives attended each interview. We estimated the total burden to be 1,440 minutes by multiplying 9 interviews times 2 company representatives times 80 minutes, for a total burden of 24 hours.



2. Survey of R&D Expenditures at Universities and Colleges (Academic R&D Survey)


SRS conducted two projects for the Academic R&D Survey during 2006, described below. The first was a series of telephone interviews asking respondents how they reported expenditures by field of science and engineering (S&E) and how they used the “Crosswalk” appendix to the survey. The second was an email survey asking respondents how they reported R&D expenditures for hospitals.


  1. In the spring of 2006, SRS held a series of telephone debriefings to determine how survey respondents classify their research expenditures into the fields of science and engineering. For example, respondents may use an institutional database that identifies fields of science, the National Center for Education Statistics’ Classification of Instructional Programs (CIP), a guide to fields of science provided with the survey, or a combination of methods. Specifically, we asked respondents to describe the classification method they use, the extent to which they use the guide provided with the survey, and if they use the guide, how they use it.


We used several criteria to select institutions for the debriefing. We divided the institutions by size (large or small) in terms of research expenditures and by when (early or late) they responded to the survey. We selected 15 academic institutions in each of the four resulting categories for a total of 60 academic institutions. We used these two institutional characteristics (time of response and size) because we believed they might be related to the method the institutions use to classify expenditures.


We used a semi-structured telephone guide to collect data. Once institutions submitted their completed fiscal year 2005 surveys, we telephoned the institutional coordinator at each selected institution and explained the purpose of the call. If the coordinator was not the individual who actually provided the data for the survey, we asked the coordinator to identify the staff at the institution who prepared the data. We then contacted these staff and conducted the interview. Data collection activities took place between February and May 2006.

The debriefings highlighted that most respondents categorized expenditures by department rather than by CIP codes, and a significant portion of them did not use the NSF crosswalk at all. These findings will be used during our current project to redesign the Survey of R&D Expenditures at Universities and Colleges.


We obtained an 87 percent response rate to the debriefing, or 52 academic institutions. Each interview required no more than 30 minutes. Therefore, we used a total of approximately 26 hours for the academic telephone debriefings (i.e., 52*30 minutes = 1,560 minutes).


  2. SRS conducted a brief follow-up email survey with respondents to the Academic R&D Survey in the summer of 2006. We used the results of this data collection to determine the extent to which respondents include research expenditures from all affiliated hospitals in their survey responses. This information will help SRS improve the assessment of possible coverage deficiencies. Specifically, we asked respondents if they conducted any of their research at hospitals and, if so, the extent to which they report those expenditures on the survey.


We used an email form to collect data. The email asked all respondents with life science expenditures in FY 2005 whether they conducted that research at affiliated hospitals. If so, we asked them for the names of those hospitals. We also asked how much of the research expenditures they reported for FY 2005 were expenditures incurred for research at the hospitals. Data collection activities took place during July 2006. The survey results showed that most respondents reported all of the R&D expenditures occurring at the hospitals they owned. However, we found considerable variation in reporting expenditures for R&D performed at non-owned hospitals. SRS will also use these findings during the current project to redesign the Academic R&D Survey.


SRS obtained an 87 percent response rate to the email questionnaire, reflecting responses from 486 academic institutions. We expected that each institution’s response would require no more than 15 minutes. Therefore, we used a total of approximately 122 hours of respondent burden for this project (i.e., 486*15 minutes = 7,290 minutes).



3. Survey of Graduate Students and Postdoctorates in Science and Engineering (GSS)


SRS planned a series of redesign activities for the GSS, including improving data quality and minimizing the response burden imposed by the department/program listing and updating activities on Form 811. Over the summer of 2006, SRS conducted a series of site visits with 30 staff members at eight universities in four states. We interviewed GSS coordinators, department respondents, and others knowledgeable about the data collected for the GSS. Some of the individuals interviewed during the site visits were familiar with the types of data we collect but were not familiar with the GSS itself. We designed the interview protocol to obtain information from the respondents on such issues as institutional definitions of key terms used on the GSS, institutional recordkeeping practices, and the usability of the GSS web survey. In addition, the protocol included questions about the data sources, timelines, and definitions used in preparing the GSS response for the institution. We used a summary report of the site visit findings in preparing two alternative versions of the listing function on the web survey. We estimate the total burden as one hour per respondent, for a total of 30 burden hours.



4. NSF Survey of Science and Engineering Research Facilities (Facilities Survey)


Part 1 of the Facilities Survey requests information on research space. For most Part 1 questions, respondents are asked to report their data by eleven specified fields of science and engineering. SRS conducted debriefings with 55 respondents from the FY 2005 survey cycle to determine the process they used to classify their research space into these fields when responding to the survey. For example, respondents may use (1) an institutional space inventory that identifies fields of science, (2) the National Center for Education Statistics’ (NCES) Classification of Instructional Programs (CIP), (3) a guide to fields of science provided with the survey, or (4) a combination of methods. Specifically, we asked respondents to describe the classification method they use, the extent to which they use the guide provided with the survey, and if they use the guide, how they use it. SRS coordinated the methodology and data collection for this project with the similar project for the Academic R&D Survey, described above.


Respondents reported using several tools in determining their responses. Approximately 75% used an institutional database to retrieve information on space. Generally, the space in the database corresponded to either the institution’s organizational structure (e.g., academic departments) or the NCES classification system. About half of the respondents indicated that, to some extent, they used the fields of science and engineering guide (a classification of fields corresponding to the NCES classification system) provided with the survey. Approximately 80% used their own personal judgment to classify their institution’s space; some used only their personal judgment, while others combined it with one of the other methods.


The debriefing telephone interviews required an average of 21 minutes for each of the 55 interviews. Therefore, the total burden for this project was 1,155 minutes or about 19 hours.


5. Project on Postdoctorates in Science and Engineering


SRS fielded the pilot Response Behavior Study (RBS) in 2006 with a sample of departmental respondents from the 2005 Survey of Graduate Students and Postdoctorates in Science and Engineering (GSS). Our goals were to determine (1) whether the GSS is collecting data from the person in the institution most knowledgeable about postdocs and about graduate students, and (2) whether the GSS definition of postdocs differs from institutions’ definitions in ways that lead to reporting errors on the GSS. For the pilot RBS, we used results from our pretest RBS (conducted under generic clearance in 2005) to modify both the study design and the questionnaire content. For example, we had learned that sampling departments did not work well: when the same respondent was sampled for more than one department, the respondent’s answers rarely differed between the two departments. Hence, we sampled respondents rather than departments for this pilot test. In addition, we implemented an address lookup task for the pilot.


To meet our goals, we collected data about (1) respondent, departmental, and institutional characteristics, (2) organizational culture, and (3) self-assessed quality of data provided to the GSS. Among the many results, we found that the current GSS respondent might not be the most knowledgeable person to answer questions about postdocs. We also discovered that the postdoc definition provided by the GSS had very limited influence on how respondents actually counted their postdocs. We found that many institutional characteristics, such as institution size, had no effect on self-assessed data quality. This generalization held across types of institutions (medical school or not), geographic regions, and levels of urbanization. However, the respondent’s familiarity with the data entry process and the availability of information on data source updates were related to respondents’ assessments of data quality. Given these findings, possible next steps include developing and testing an improved postdoc definition and improved screening methods to identify the correct respondent for postdoc data.


We targeted GSS departmental respondents, yielding a sample of 2,496 unique respondents and achieving an overall response rate of 71.5%, or 1,785 individuals. The questionnaire had a mean completion time of 28.6 minutes, for a total burden of approximately 851 hours (i.e., 1,785*28.6 minutes = 51,051 minutes).



Results from generic clearance activities conducted in prior years


This report also includes 2006 analytic activities from two projects conducted under SRS generic clearance in prior years. There is no associated burden to include in this report because one project’s 2006 field work was conducted under the survey’s OMB clearance and the other project’s field work was completed in 2005.


  1. Under generic clearance, SRS planned a three-phase study for the Survey of Earned Doctorates (SED) to determine how best to ask a salary question of new PhD recipients. In previous years, SRS reported to OMB that we conducted six focus groups and 17 cognitive interviews to elicit respondents’ reactions to adding salary questions to the SED. The key recommendations were:

a) The proposed salary question should ask for base salary, with instructions to exclude summer-time research and bonuses, and should provide categorized response options in $10,000 ranges.

b) An additional question is needed (within the proposed question) to determine whether the salary is a 12-month salary, and if not, the number of months the salary covers.

NSF and the other survey sponsors reviewed these recommendations in 2006. Subsequently, SRS obtained approval through the survey’s regular OMB clearance to conduct a field test in 2006. SRS designed this work to test which question and ranges would provide the best data on salary for new PhD recipients. We used four different versions that varied the question itself and the location of the question within the survey instrument. We concluded from the field test results that it would be best to use an open-ended question first, followed by a question with ranges for respondents who do not wish to provide a specific salary. An additional question on the length of the salary contract will be needed for faculty members.

  2. In the 2005 generic clearance report to OMB, SRS summarized field work conducted under the generic clearance using eye-movement studies for the Survey of Graduate Students and Postdoctorates in Science and Engineering (GSS). The eye-movement study that SRS conducted in 2005 tested a new prototype against the existing version of a series of web pages used for the GSS. We recruited twenty-four administrative assistants from the kinds of university offices that typically answer the GSS. We sought to represent real respondents to the GSS as closely as possible without burdening actual respondents. We asked subjects to role-play as though they were the GSS institutional coordinator of a small fictitious institution. Half of the subjects answered the original version of the survey and half answered the redesigned version.


The analysis of the eye-movement data revealed that subjects navigated more successfully through the redesigned survey module. However, the analysis also revealed a need for further improvement in the redesigned module. SRS is incorporating the results of this analysis on what worked well and on what needed further improvement into the GSS redesign activities.


We feel that we have been successful in accomplishing methodological tasks that improve the quality of SRS surveys, and that future activities under the generic clearance will continue to be instrumental in these improvements.


Sincerely yours,

Fran Featherston

Division of Science Resources Statistics

National Science Foundation


cc:

S. Plimpton (IRM/DAS)

L. Carlson (SBE/SRS)

M. Frase (SBE/SRS)

J. Jankowski (SBE/SRS)

N. Leach (SBE/SRS)

S. Cohen (SBE/SRS)


