
ATTACHMENT 9

FY 2009 Higher Education R&D Survey pilot test debriefing report




Findings from the Higher Education Research & Development (HERD) Survey Pilot Test Debriefing Interviews



June 14, 2010

National Science Foundation

Arlington, Virginia


Prepared by:

Westat

1600 Research Boulevard

Rockville, Maryland 20850-3129

(301) 251-1500








Introduction

Overview of the Pilot Test and Debriefing Interviews

NSF selected 40 academic institutions for participation in the pilot test of the Higher Education R&D (HERD) Survey. Respondents from these institutions were contacted in June 2009, informed of their selection, and asked to participate in the pilot test (in lieu of their regular submission of the FY 2009 version of the Academic R&D Expenditures Survey). The pilot test survey was sent to institutions in mid-November 2009, and respondents were asked to submit their responses by February 26, 2010.


After submitting their responses, respondents were contacted and asked to complete a 2-hour debriefing interview about their experiences collecting the data, preparing their response, and submitting the data using the redesigned survey.


Goals for the Debriefing Interviews

The goals for conducting the debriefing interviews were to assess respondents’ reactions to and feedback about the following:


  • Content of the new questions;

  • Revisions to existing questions;

  • Burden in preparing the response;

  • Issues with question wording;

  • Ease/difficulty of data retrieval; and

  • Questions for which respondents reported that data were not available, rather than zero, on the survey.


In most interviews, respondents were asked about any difficulties they encountered with the web application. All respondents were asked to describe what they liked most and least about the revised survey.





Debriefing Content and Procedures


Debriefing Content


Westat developed a protocol which served as both:


  • A tool to review each institution’s submitted data and prepare for the interview – to identify patterns of response, data trends over previous years, and idiosyncratic issues to address during the course of the interview, and

  • An interviewer’s guide for the interview flow – with skip patterns based on which topics applied to the institution (e.g., presence/absence of medical school expenditures).


The protocol was ordered to reflect the following priorities for coverage of topics during the interviews:


  • New survey questions

  • Revised survey questions

  • Web application (lowest priority due to recent round of usability testing)

  • Summary experience of completing the new survey


NSF reviewed several draft versions of the protocol and a final version was approved before the start of the interviews in mid-February 2010.


Debriefing Procedures


Westat staff contacted respondents after they submitted their survey, explained the purpose of the interview and any need for additional participants, and scheduled a 2-hour time block at the convenience of the respondent. At the scheduled interview time, Westat staff members (interviewer and note-taker), one or more NSF representatives, and the respondent called into a conference number. The Westat interviewer explained the objectives for the interview and administered the consent procedure.


The Westat interviewer administered the protocol as tailored for each institution. Respondents were asked to refer to their copy of the completed questionnaire as they responded to interview questions. When responding to questions about the web application, respondents were asked to log into their completed survey. (Westat staff provided log-in access just prior to the scheduled interview time.)


Westat conducted debriefing interviews with 39 of the 40 institutions. Not all questions were covered with all respondents, due to the tailored protocols or, in a few cases, respondents’ time constraints. One respondent was scheduled for an interview but then postponed several times. The NSF HERD survey manager contacted the respondent and covered several of the main topics of interest in a short interview, then forwarded the information to Westat so that the responses could be integrated with data from all other institutions.


This report summarizes the results of the debriefing interviews. The initial sections describe respondents’ overall reactions to the revised survey and changes that affected reporting of total R&D expenditures. Subsequent sections summarize the results for each survey question addressed during the interviews.


General Reactions to the Survey

Respondents were asked for their general impressions of the revised HERD survey at the beginning of the debriefing interview. Comments ranged from overall feedback about completing the survey to impressions and issues about specific survey questions. In a number of cases, respondents brought up issues that were slated to be covered later in the interview.


The most prevalent general comment was that the revised questionnaire was considerably expanded (compared with the current Academic R&D Survey). Many respondents also said that the revised survey generally required more time and effort to complete. A few respondents commented that while the survey seemed to ask for an overwhelming amount of information at first glance, the effort involved in completing it was less than they expected. One of these respondents (from a small public institution) said that while her first reaction was: “I thought I would never complete it,” the organization of the questionnaire made it easier to complete compared to the previous version of the survey. A few respondents said that the revised survey required about the same level of effort as the previous version. One respondent said that the revised version was much easier to fill out.


Roughly a dozen respondents said that they had most of the information needed to complete the survey readily at hand. About half as many said that responding to the survey required culling information from other university administrative offices. In at least one case (a large public institution), information had to be retrieved from individual academic departments. Several respondents noted that they had to write new database queries or access information stored in databases maintained by other offices to respond to certain items. Approximately a quarter of the respondents stated at some point during their interviews that responding to one or more of the survey questions would be easier in future years because they had established methods to automatically pull the data or because they were in the process of implementing new financial database software that would allow them to respond more efficiently. However, a few smaller institutions indicated that responding to the survey would likely remain a manual task for them, as they did not have enough R&D expenditures to justify setting up complex data systems.


Respondents cited a number of questions as especially difficult or problematic. New questions, such as those on personnel, proposals/awards, and cost elements of R&D were the most frequently mentioned; respondents said that their systems were not prepared to provide these data. Several respondents reported difficulty with Question 6 (basic research, applied research, and development). One respondent mentioned that including clinical trials caused them a great deal of trouble. One respondent said that the separation of questions 11 and 13 was confusing, while another said that it was initially unclear that Question 9 asked for only federal sources of R&D expenditures. This respondent suggested labeling the question more clearly as federal sources only. At least 4 respondents said that reporting institutional funds was challenging. One respondent said the most difficult thing was to determine what institutional funds to include. Several institutions noted that they made estimates for difficult items or left items unanswered.


At least 1 respondent commented that it would be useful to have more information about why the revised survey requests more detail than the previous version. A couple of respondents wondered how comparable the data are from institution to institution. One of these respondents (from a large public institution) was especially concerned about the consistency of institutional funds data across institutions, and suggested that NSF publish a ranking table that excludes institutional funds. At least 4 respondents commented that the additional information requested in the revised survey would be useful to them for their own internal monitoring and reporting purposes.



Reaction to Including Non-S&E Fields

Respondents reported few problems with the instruction to report for all fields of R&D, including non-S&E fields, in all survey items. Most respondents said that expenditures in non-S&E fields were readily available in their data systems and easy to report, although a few noted that they had to modify the queries used to extract the data. Several noted that their queries for the previous survey already included non-S&E fields, which they previously had to “back out” before reporting for just S&E fields. One respondent said that the non-S&E fields were already included in internal reports of R&D activity, so it was easy to include them on the survey. This respondent also noted that it was valuable to include non-S&E fields, as it provides a more complete picture of R&D at the institution. This positive reaction was voiced by approximately a half dozen respondents.


One respondent commented that the list of non-S&E fields was useful in responding for all fields of R&D. A medium-sized public institution indicated that they currently could not break out R&D by field, so the figures reported included non-S&E fields by default.



Reaction to Including Clinical Trials

Most institutions that conduct clinical trials research indicated that including clinical trials did not pose significant reporting challenges. Six respondents commented that including clinical trials in counts of R&D expenditures was fairly easy, and some noted that clinical trials are easily identifiable in their financial database through codes or account numbers. Two respondents indicated in the interview that they had included clinical trial expenditures in previous survey submissions. Another 2 respondents said that they had to obtain information from another office in the university to report clinical trials data. One respondent said that having to report clinical trials for the NSF survey will help the institution to track them. A respondent from an institution which is starting to expand clinical trials work said that the tracking will be set up and housed within his office.


A few respondents reported problems or concerns with reporting clinical trials. One respondent (from a large private institution) said that it would be helpful to clarify if clinical trials included “clinical studies,” which typically “occur in a lab setting rather than a patient care setting.” The respondent also indicated that a more precise definition of clinical trials would be helpful, particularly how to classify whether a study with human subjects is a clinical trial. Two respondents believed that clinical trials do not fall under the A-21 definition of organized research. One of these respondents was unwilling to report clinical trials on the survey for this reason. Another respondent was unable to report on clinical trials separately because they are not separately tracked in the financial system.


One respondent indicated that clinical trials were separated by investigator-initiated and externally commissioned trials in their records. The respondent said that in past survey responses, she had included only investigator-initiated trials, but had added externally commissioned trials for this year’s response. Another respondent indicated that they had mistakenly reported awards for clinical trials instead of expenditures. A respondent from a small private medical institution noted that it can be difficult to determine how much revenue is received from a sponsored clinical trial; trials are typically funded according to the number of patients. This respondent said that while capturing revenue was difficult, reporting expenditures for clinical trials was not difficult.



Reaction to Including Research Training Grants

Most respondents stated that including research training grants was easy to do and they supported including them on the survey. Eight respondents indicated that they had included research training grants in previous survey responses, so they did not need to change their reporting for the revised survey. Seven said that research training grants are separately coded in their data system, making them easy to include or exclude in reports. Two respondents said that they did not have any research training grants in their records. One of these respondents plans to pursue research training grants in the future.


Eight respondents were unaware of the instruction to include research training grants. After the instruction was explained to them, a few indicated that they were unsure whether they had any research training grants or whether they had been included in previous years. One respondent from a large public institution thought that research training grants were likely included in NIH funds.


Some respondents seemed confused about what to count as a research training grant. One respondent indicated that some education and other non-research training grants had been included in the survey response. This respondent said that the institution has a code for separately classifying research and other training grants, but coding is not uniformly applied, so the respondent reviewed all training grants in the institution’s database to determine which to include. The respondent thought that an additional instruction on what types of training grants to exclude would have helped. Another respondent said that the institution’s training grants were tracked by another office and classified as public service, and was unsure whether the grants were for research training or not.




Question 1: How much of your total current fund expenditures for separately budgeted research and development (R&D) came from the following sources in FY 2009?

Question 1d. Nonprofit organizations

Data Availability

Of the 40 institutions, 39 answered Question 1d. Only 1 respondent did not provide an answer and said that they could not disaggregate nonprofit from private. Two institutions provided a response of zero and said that they could produce the requested data, but simply did not have any grants from nonprofits in fiscal year 2009.


Ease/Difficulty of Obtaining and Reporting Data

Fourteen institutions reported having a code for nonprofit. For the institutions that had codes, the data retrieval was easy. The following are paraphrases of comments that respondents made about ease of reporting.


  • Easy to report.

  • Not difficult because the sponsors are categorized.

  • This was easy to get.

  • Very easy. We already track that.

  • It is fine. We have a source code for federal, state, nonprofit; it’s simple and easy to get.

Ten of the institutions reported that it was a manual process to tease the nonprofits out of other data categories. Manual processes were usually considered to be difficult, inexact, and/or time-consuming, as described below.


  • One institution said that their system identified 501(c)(3) organizations only, so their number represented only part of their total “nonprofits.”

  • Another institution said that while they could produce the information, it was a time-consuming, tedious process.

  • Another institution said that they had to go back to the paper files to break out the nonprofit grants.

  • Another institution said that they reviewed the “private” sources, and then made a determination as to which ones were nonprofits.

  • Another institution said that while it was a manual task, it was still not difficult to make the determination. This institution had only 28 grants and only 2 of these were private. It was easy to review 2 grants to determine whether they were from nonprofit sources.

  • Another institution had a “foundation” code in the database and reported using that.


Data Repository

The larger institutions seemed to have codes for type of sponsor, whereas the smaller institutions did not; for them, providing the information was a completely manual task.


Match Between Actual and Desired Response

All of the institutions seemed to define the term “nonprofit” in a way identical to the intent of the question. There was no confusion or ambiguity about “nonprofits.” None of the institutions commented that they did not know what a nonprofit was or that they used the term nonprofit in a different way from the survey.


Other Issues

Thirty-one of the 40 respondents were asked whether they used the term “private” in their records. Of the 31 respondents who commented, only 8 said that they used the term “private” in their records. Sixteen respondents said that they used the term “industry” or “business.”

Twenty-four of the 40 institutions provided comments on how they reported nonprofits in previous years. Of the 24, 7 respondents said that they had reported nonprofits in the “other” category. Two further explained that they had reported nonprofit with business/industry and 1 said that they had not reported nonprofit at all in previous years. Another two respondents made the additional comment that they could not remember or did not know how they had reported nonprofit in previous years.




Question 1e2: Cost sharing

Twenty-eight of the institutions reported a non-zero figure for Question 1e2, Cost Sharing. Of the 12 institutions that reported a zero for this question, 4 were true zeros in that the institution can track cost sharing, but they simply did not have any cost sharing for the reference year. Five institutions said that they do not track cost sharing and that the data were not available. Three of the institutions that reported a zero for cost sharing did not provide a comment that explained their zero.


Data Availability

Of the 28 institutions that reported a figure for Question 1e2, Cost Sharing, 11 said that they retrieved the figure from a database based on an attribute or some type of code. The following are paraphrased comments made by respondents from the 11 institutions.


  • Attribute in the database – tracked separately. Easy to report.

  • We have subaccounts in the system where we separately report cost sharing.

  • This is easy to do. We have a code for cost sharing in the system.

  • When awards are set up, there is a companion award set up to track cost sharing; easy to get.

  • There is an account code for cost sharing. We pulled the information from this code; it wasn’t hard.

  • Cost sharing is something that we track.

Another 11 respondents said that the figure they reported was the result of a manual exercise. Three respondents said that their figure was either an estimate or an under-report. These respondents made the following types of comments:


  • We had to build a couple of Excel spreadsheets for that.

  • A little more manual work to this one. … I pulled all cost sharing accounts for everything and then tied back to the original grant so I could make sure I got only research function amounts.

  • We probably underreported this.

  • This was a manual process, somewhat difficult.

  • We have a manual cost sharing system; we have an Excel spreadsheet where we track it.

  • This was a little bit difficult to arrive at, and it’s probably an underestimation.


Ease/Difficulty of Obtaining and Reporting Data

Thirteen respondents said that cost sharing was easy or relatively easy to report. Seven respondents said that cost sharing was difficult to report. The seven smallest institutions (in terms of R&D) either reported a genuine zero or said that cost sharing was not something they tracked. There was a clear division between the large institutions and the small institutions: the large institutions had IT systems in place that could produce the numbers automatically, while the small institutions either used a manual process, did not track cost sharing at all, or neither tracked it nor had any cost sharing to report.


Match Between Actual and Desired Response

All institutions seemed to have a clear understanding of what cost sharing was. No institutions seemed to have an idiosyncratic interpretation of cost sharing.



Question 1e3: Unrecovered indirect costs

Of the 40 institutions, 27 provided a figure for unrecovered indirect costs. Five of the respondents who did not provide a figure said that they did not track unrecovered indirect costs and that this number was not available.


Data Availability

Of the 40 institutions, 24 commented on unrecovered indirect costs. Most respondents said that unrecovered indirect costs are calculated based on the financial agreements for a particular grant or contract. There was some variation in the process that respondents described. The paraphrased statements shown below provide some flavor of the comments about this question.


  • We followed the instructions and looked at how much recovery we had against our negotiated rate. For subcontracts, we only collect on the first $25,000.

  • We take all expenditures, figure the indirect costs recovered, take modified direct costs and multiply by the highest overhead rate. The difference between that product and the recovered indirect costs is unrecovered.

  • We did the calculation – what was recovered versus what could have been recovered.

  • For each grant, we took the negotiated rate, multiplied by modified direct costs, subtracted indirect costs recovered. This is fairly easy to do.

  • It was optional in previous years … due to time constraints we weren’t able to do it.

  • Unrecovered indirect costs is an object in our database; we just pulled it.

Ease/Difficulty of Obtaining and Reporting Data

Of the 24 institutions that commented on unrecovered indirect costs on line 1e3, 4 respondents said that they tracked unrecovered indirect costs, 15 said that they had to calculate unrecovered indirect costs, and 5 did not track it and did not provide an estimate. Of the 15 who said that they calculated unrecovered indirect costs, two respondents mentioned explicitly that they considered the number that they reported to be an estimate.
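The calculations these respondents described generally followed the same arithmetic: apply the institution’s negotiated indirect cost rate to the modified total direct costs (MTDC) of an award and subtract the indirect costs actually recovered. The sketch below restates that arithmetic with hypothetical figures; the rate and dollar amounts are assumptions for illustration only, not values reported by any pilot institution.

\[ \text{unrecovered indirect costs} = (\text{negotiated rate} \times \text{MTDC}) - \text{indirect costs recovered} \]

For example, with an assumed negotiated rate of 50 percent, MTDC of \$200{,}000, and \$70{,}000 in recovered indirect costs, the unrecovered amount would be \( (0.50 \times \$200{,}000) - \$70{,}000 = \$30{,}000 \).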


Data Repository

There did not seem to be a clear “data repository” for unrecovered indirect costs since it was a derived number for most respondents. Most respondents either said or suggested that they used the accounting systems data to calculate the unrecovered indirect costs.


Match Between Actual and Desired Response

Almost all of the respondents who commented understood what unrecovered indirect costs were. The respondent for one small institution expressed some confusion about the actual meaning: they thought unrecovered indirect cost meant something that they had billed for but not received.





Question 1f. Other sources not reported above, such as funds from foreign governments

Thirty institutions commented on the types of R&D expenditure sources reported in Question 1f “All other sources.” Most institutions described a process where they start with their total R&D expenditures and then divide them into the various reporting “buckets” like federal, state, and local governments. Once those large categories are determined, they work their way through the other sources of funds listed in Question 1: business, nonprofit, and institutional. Once all of the other categories are completed (i.e., Question 1a through Question 1e4), any remainder is reported in Question 1f. This category was used as a catch-all for expenditure funds that did not fit elsewhere.
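In effect, the Question 1f amount is computed as a residual. A minimal sketch of that arithmetic, assuming the Question 1 row letters referenced elsewhere in this report (rows a through e4 are the named source categories and row g is the total), is:

\[ \text{row f (all other sources)} = \text{row g (total R\&D)} - \sum \text{rows a through e4} \]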


When asked what was reported in this category, respondents made the following types of (paraphrased) comments:


  • We pulled out the federal, state, and local. Then everything else falls into a bucket. After we pull out institution, foundations, and industry, we ended up with $22 million. We’re not sure what that is comprised of.

  • Ended up being the foreign sources; that was the only thing left that wasn’t broken out.

  • Not sure. Backed out for this category. Whatever was left over.

  • $7.1 million. Don’t know exactly what is in there. We had to break out nonprofit from “other.”

  • Foreign government would be there. I don’t believe there’s anything else, but I’m not positive. Gifts are not included.

  • Anything that didn’t fit in other categories. About 30 different sources in this category.

  • Drug company funding for clinical trials.

  • Some of what was reported here was “local funds” – department expenditures. Those might have been better reported under institutional funds, but the respondent was unsure whether they qualified as organized research.

Fourteen respondents commented on the expenditure sources that had been reported in All other sources in previous years. Seven of these respondents said that they were not able to say what had been reported in All other sources in previous years without first reviewing those data. Most respondents who commented said that in the past, nonprofits and foreign sources were reported in All other sources. One respondent had erroneously reported federal flow-through in All other sources. The respondents’ comments included the following paraphrased statements:


  • That might’ve been where our nonprofits were in the past.

  • I think in previous years I was putting federal flow-through sponsors into this. I didn’t notice the note that said to please put the federal flow-through into the agencies. … I think we may have put foreign into All other sources, now foreign business would go into line c. The clearer definition of federal flow-through I think made a big difference.

  • Another institution said that they previously reported nonprofit in All other sources. Now they report the nonprofit on line 1d.

  • In the past we reported nonprofit organizations in All other sources.




Question 2: How much of the total R&D expenditures reported in Question 1 came from foreign sources?

Data Availability

Of the 40 institutions, only 1 reported that data were not available to report foreign sources. All other institutions reported either an actual number or a true zero. The institutions that reported zeros said that they were sure that they had no foreign sources to report.


Ease/Difficulty of Obtaining and Reporting Data

Fourteen of the 40 institutions said that they had a code that indicated a foreign funding source. Some had very intricate codes for foreign and could identify foreign governments, foreign corporations, foreign foundations, foreign charities, and so forth. Others could only identify foreign government sources and would not be able to identify foreign corporations or industry. Other institutions reviewed the award letter or the contract to determine foreign source. Ten institutions said that they used the address to determine foreign source. One institution said that the name of the grant was reviewed to determine foreign status. A few said that foreign status was something that they would just know because they have so few foreign sponsors.


Data Repository

Some institutions said that they ran queries in the financial system to produce the data. Other institutions said that the data resided in the pre- or post-award offices and these offices provided the data.


Match Between Actual and Desired Response

Most institutions reported that they used the same definition of foreign as given on the survey. Some institutions were not able to break out all types of foreign sources. For instance, 2 institutions were very explicit that they could report on foreign government agencies, but not foreign corporations or foreign industry.

Question 3: Of the total R&D expenditures reported in Question 1, row g, how much was expended for R&D projects in your medical school?

Data Availability

Thirteen of the pilot institutions reported having medical schools. All 13 of these ranked within the top 21 pilot institutions based on R&D expenditures. They all reported their medical school expenditures for this question; responses ranged from $21,000 to over $550 million. None of the schools provided comments for this question when they submitted their survey.


Ease/Difficulty of Obtaining and Reporting Data

Respondents reported that it was not difficult to report these expenditures. To identify what to include for medical school expenditures, they used some type of medical school account code in conjunction with a code for R&D expenditures.


Data Repository

The data are available in databases that contain R&D expenditures. Respondents reported being able to separately identify medical school expenditures; institutions track these at the department, school, or college level.


Match Between Actual and Desired Response

Respondents did not report any issues with providing the data during the interviews.

Question 4: Of the total R&D expenditures reported in Question 1, row g, how much was expended for Phase I, Phase II, and Phase III clinical trials?

Data Availability

The question asked for expenditures for human clinical trials and veterinary clinical trials. Of the 40 institutions, data were reported as follows.


  • 12 institutions reported having human clinical trials (two of these do not have medical schools)

  • 2 institutions reported having veterinary clinical trials

  • 2 institutions reported that data were not available for either human or veterinary clinical trials

Ease/Difficulty of Obtaining and Reporting Data

Respondents from 8 institutions said that they have methods for tracking clinical trials expenditures. Four respondents reported that they cannot separately identify clinical trials expenditures. Two institutions that cannot separate them reported “Data not available.” One of these institutions categorizes clinical trials expenditures as “other sponsored activities.” However, the respondent said that they could work towards using an attribute in their system and reporting the expenditures in the future. The other institution said clinical trials are not classified as part of their research base, but as other sponsored projects. The respondent said that he would not want to commingle clinical trials with research activities and report clinical trials within their research expenditures.


One institution had just started a medical school within the year and did not report any dollars for FY2009. However, they are in the process of recruiting faculty and negotiating to host a clinical trial, so they expect to be able to report a clinical trials amount in the future.


Match Between Actual and Desired Response

Respondents were asked whether their institutions have Phase IV clinical trials, as a check of whether they were able to separately report Phases I – III as requested for Question 4. Of the 12 respondents who reported a figure for clinical trials, the breakdown was as follows.


  • 2 respondents were not sure if Phase IV clinical trials are conducted at their institutions

  • 5 respondents have (or have had) Phase IV clinical trials

  • 2 respondents expect to have clinical trials, and possibly Phase IV trials, next year

  • 3 respondents said their institutions did not conduct Phase IV trials

Clinical trials phases are not separately identified in the records of 8 of the institutions. Therefore, respondents either were not sure if Phase IV clinical trials were included in their reported numbers or were not able to exclude any Phase IV trials that may have been conducted. The respondent at an institution which reported a high volume of clinical trials expenditures explained that they have “combination Phase III/IV” trials and it would be difficult to separate them because they would not be able to allocate percentages between Phases III and IV. They provided an example of a combination III/IV trial as one which examined the effectiveness of a combination therapy delivered with an approved device and a not-yet approved drug.


Other Issues

Respondents raised several questions during the debriefings. One respondent seemed confused; he commented that clinical trials did not include all human subjects research and asked whether that was what NSF wanted to collect with this question. Another respondent noted that it is possible to have clinical trials outside of a medical school (e.g., in nursing, physical therapy, etc).

Question 4.1: Did you include R&D expenditures for (a) human clinical trials, or (b) veterinary clinical trials in your FY 2008 (previous year’s) survey response?

Data Availability

Of the 40 pilot institutions, the breakouts for inclusion of human clinical trials and veterinary clinical trials in institutions’ FY2008 responses were as follows:


  • Human clinical trials: 6 included, 7 did not include, and 27 had no FY2008 trials

  • Veterinary clinical trials: 2 included, 6 did not include, and 32 had no FY2008 trials

Ease/Difficulty of Obtaining and Reporting Data

For Question 4.1, the debriefing protocol had a question for institutions which included veterinary clinical trials in their FY 2008 response. The 2 institutions which reported veterinary clinical trials for FY2009 were the same 2 which included them in their FY2008 responses. One respondent reported that these are separately coded in their system and are easily broken out from human clinical trials. The second institution counted the few trials they conduct which are focused on veterinary medicine. The respondent stated that they did not include all trials which use animals.


Other Issues

One respondent indicated that he was not aware of any veterinary trials at the institution. He talked about using animals for human clinical trials, but said they had no dedicated veterinary clinical trials. One respondent indicated a need for a definition of veterinary clinical trials. He wanted to know what NSF is asking for in the question: a trial that involves animal subjects, or a veterinary clinical trial?





Question 5: Of the total R&D expenditures that were externally funded (all sources other than the institutional funds reported in Question 1, row e4), how much was received under each of the following types of agreements?

Data Availability

Of the 40 institutions, only 1 was not able to provide any data for Question 5.


Ease/Difficulty of Obtaining and Reporting Data

Twenty-seven institutions reported that they had codes in their systems that designated whether something was a contract or a grant. A few of those that had codes still said that it was a manual exercise to produce the data. Of the institutions that did not have contracts vs. grants coded, most said their numbers were estimates or that they were “all grants.”


Data Repository

For most institutions, the contracts vs. grants data reside in the pre-award office or in the financial system records. Some institutions described the attribute (contract vs. grant) being assigned by sponsored programs when the award is received.


Match Between Actual and Desired Response

Only 1 respondent said that the institution had no idea of the difference between a contract and a grant. Almost all of the reporting institutions identified a contract as having defined deliverables and a grant as being more under the control of the researcher. The deliverable for a grant was usually a report or journal article once the project had concluded.



Question 6: What amounts of your FY 2009 R&D expenditures were for basic research, applied research, and development? Estimates are acceptable.

In past survey cycles, respondents have reported percentages of overall and federal R&D expenditures for basic research. The survey revision requested total R&D expenditures broken out by federal and nonfederal sources for basic research, applied research, and development. Respondents were asked several questions about the changes to this item, including reporting dollars instead of percentages and the addition of definitions and examples to the line items. For those instances where the percentage of basic research changed more than 10% from the previous year, respondents were also asked to explain what factors accounted for the change.


Data Availability

Data availability for Question 6 data items is shown in the table below for the 40 institutions. Approximately half of the institutions reported expenditures for the newly-added development category. In comparison, most of the institutions (35 and 34 for federal and nonfederal, respectively) reported expenditures for the basic research category.


Question 6 Data Items                         Number of Institutions
                                  Reported Non-zero Data   Reported Zero   Data Not Available
Basic Research
   Federal                                  35                   1                  4
   Nonfederal                               34                   1                  5
Applied Research
   Federal                                  32                   0                  8
   Nonfederal                               32                   1                  7
Development
   Federal                                  17                  10                 13
   Nonfederal                               20                   8                 12


The following patterns of reporting occurred:


  • 3 institutions reported “data not available” for all parts of the question (basic research, applied research, and development)

  • 4 institutions reported all expenditures as basic research

  • 1 institution reported no basic research; all expenditures were reported in the categories of applied research and development

Three respondents stated during the debriefing interviews that they were planning to add an attribute to their systems which would allow them to report for the categories of basic, applied, and development in the next year.


Ease/Difficulty of Obtaining and Reporting Data

During the debriefing interviews, almost all respondents said they do not track type of research using the classifications of basic research, applied research, and development. Six respondents described their current practices for tracking basic research and how they modified those practices to adjust to the new question.


In the comment box for this question, 7 respondents provided a text comment that they reported an estimate. Some of these respondents further explained that they do not track this information and others said they asked another office (e.g., Office of Sponsored Programs) to provide an estimate.


The debriefing discussions also revealed that most respondents paid more attention to reporting basic research for the revised survey, compared to previous survey cycles. This was due to the expansion to three categories and formatting which presented both definitions within the question and examples for each category.


Data Repository

As noted above, most respondents said that data do not exist in either their pre-award database or in their expenditures database. Of those who do track the information, several mentioned that their sponsored programs office tracks or codes basic vs. applied research. These offices assisted in responding to the question.


Match Between Actual and Desired Response

Respondents were asked to describe how they decided what to report for each of the categories. The 6 respondents who reported based on records described the following processes.


  • At 1 institution, deans and PIs review projects and determine an account number for basic vs. applied.

  • At 2 institutions that already track basic vs. applied, they reviewed projects to determine which to code as development.

  • Three institutions specifically mentioned the role of their sponsored programs or sponsored research office. These offices track basic vs. applied (e.g., at time of project award). In a separate step, these offices determined which projects qualified in the development category. One of these institutions uses a field called “purpose” in their tracking; basic and applied are just two of the possible codes for this field.

As noted above, many respondents either explicitly noted on the survey or explained in their debriefing that they used an estimation method. These respondents articulated the following strategies or rules of thumb to report data for Question 6.


  • 3 institutions which had clinical trials reported them as development; they allotted the remainder of their total to the basic and/or applied categories

  • 2 respondents described asking another office to provide a breakout, e.g., splitting total R&D into 40% basic, 40% applied, and 20% development

  • 2 respondents said they continued to use the percentage allocation their institutions applied in the past

  • Additional individual respondents described doing the following in order to estimate dollars:

  • Asked individual departments to review the definitions and examples provided on the survey

  • Applied the percentages reported in the past to estimate dollars for FY2009

  • Counted applied research center projects as applied research

  • Reported all of research as applied

  • Used departments to make allocations, e.g., cell biology was counted as basic research

  • Counted “most or all” as basic

  • Counted most of federal research as basic

  • Counted a project as applied research if a piece of equipment was purchased and required for use on the project


Several respondents noted that they would likely have difficulties providing accurate numbers for the three categories of R&D expenditures requested in Question 6. They cited reasons such as: (1) projects may have multiple overlapping components within them, which would make it hard even for PIs to judge/allocate the project dollars to these categories, and (2) projects may change over time, and the nature of the work would shift among the categories of basic, applied, and development.


Other Issues

Submission Issue

One respondent used the comment box of the web application to state that the website would not allow submission of the survey unless data were provided for this question. The respondent used a work-around of entering all R&D expenditures under basic research in order to submit. The institution’s responses were changed to “data not available” based on the data retrieval step.


Reporting Dollars

When asked their reaction to reporting dollars instead of a percentage for these categorizations of research and development, many respondents said that reporting either is difficult because their systems do not contain the information to report these breakouts of research type. Of the 22 respondents who specifically spoke about a preference for reporting dollars vs. a percentage, the breakouts were as follows.


  • 10 respondents prefer to report dollars;

  • 5 respondents prefer to report percentages; and

  • 7 respondents said it does not matter.

The following patterns emerged from examining respondents’ reactions to reporting dollars vs. a percentage of basic research.


  • Smaller institutions seemed to prefer reporting dollars, e.g., they indicated that they would generate dollar figures anyway in order to calculate and report a percentage.

  • Larger institutions seemed to have no preference or prefer to report a percentage.

  • Some respondents who reported percentages seemed to base those percentages on estimates vs. calculations from tracking R&D expenditures.

Interpretation of “Development” Category

Respondents were asked what types of activities might be reported as development for an institution. In response, they suggested that work related to the following might qualify as development:


  • Clinical trials

  • Pharmaceuticals or medical devices

  • A highway paving project

  • Construction of things like bridges

  • Developing biodiesel technology

  • Developing turbine equipment

One institution that conducts research on goats, such as techniques to improve goat milk production, did not consider the work to be development or even applied research. They classified all work at the institution as basic research. One respondent questioned whether work related to research training grants and research done to write a book would qualify as development.


Several respondents mentioned that work done in the following settings might qualify as development:


  • Survey research centers

  • Engineering mechanics departments

  • Departments that focus on technical work such as building things

Almost all respondents who were asked specifically whether clinical trials might be considered development agreed that they would be. However, 1 respondent from a school of medicine stated that the dean decided that the development category did not apply to them; instead, they classified clinical trials as applied research. The respondent clarified that most of their expenditures were “geared toward early stage trials” and involved monitoring blood pressure.


Reaction to Examples Provided for Categories

Twenty-three of the respondents were asked for their reaction to the examples provided for basic research, applied research, and development. Four respondents made comments that had neither positive nor negative connotations, e.g., they passed them on to another office and did not ask for feedback. One respondent suggested that a non-S&E example might be useful, as well.

Thirteen made comments that were positive, such as the following.


  • “The examples were helpful for classifying departments; clear examples.”

  • “The examples seemed to be helpful to those we shared them with.”

  • “Definitely helpful. We do things like software research and we used to put that in applied but now we put that in development.”

  • “Examples were very helpful in determining development.”

  • “From my layman eyes, I couldn’t tell the difference between them half the time. But we forwarded them to the professors, and it was very helpful for them to see the nuances.”

Six respondents voiced negative reactions. Some of these respondents had no background in research and did not feel comfortable applying the definitions to make classifications themselves. Others said that the definitions were confusing and/or expressed concerns about how the definitions would be interpreted, such as:


  • “They don’t really help. They’re fine, we understand them, but they really do not help. Something could start out as basic and then switch. When you have a huge variety of people looking at this and making an interpretation, it’s going to be highly subjective. And I strongly recommend that if you want to gather that information, get it from the federal agency.”

  • “The definitions themselves I have concerns about. We try to make sure things are basic research as much as possible for export trade purposes and other regulatory purposes. I hesitate to put out an applied/development number on research because then the IRS begins to question whether it fits our mission for tax purposes. That’s part of the reason why we don’t distinguish between the different categories. It makes me very nervous.”

  • “Even the faculty are unsure themselves if they’re doing basic or applied research.”

Explanation of Change in Proportion of Basic Research

Thirteen of the 40 institutions were asked to explain why they had a change in basic research of more than 10 percentage points from the previous year’s survey.


  • Six of the respondents said that due to changes such as the addition of the development category and the inclusion of definitions, they paid more attention to the question and spent more time to determine what to report. Several said that instead of perpetuating a previous estimate, they spent time to allocate projects to the three categories. One mentioned using the definitions to improve accuracy.

  • One institution that had previously reported 80% basic research reported that the data were not available and that the institution does not track this. The respondent voiced a concern about collecting these data (“We’re not sure about the value of this question”) and discussed how the type of research could change, depending on the findings over time. He suggested that a better approach would be to have the federal agencies code the type of work with the award notice, in a similar fashion to how DOD codes awards.

  • Four of the respondents did not know or were not sure why their percentage of basic research changed.

  • Two respondents explained that the change was due to the nature of the work conducted during the fiscal year, e.g., more research overall or the non-renewal of a type of contract changed the research mix.




Question 7: How much of your R&D expenditures reported in Question 1 did your institution receive as a sub-recipient?

In past survey cycles, institutions have responded to this question on expenditures received as a sub-recipient. Therefore, respondents were only asked during the debriefings if they had any general comments about the question. For those institutions which had a large change in dollars received as a sub-recipient, probing focused on reason(s) for the change.


NSF uses data collected with this sub-recipient question each year in ranking tables. The 40 pilot institutions were asked to report expenditures received as a sub-recipient for S&E fields and non-S&E fields combined. All non-pilot institutions reported these expenditures for only S&E fields. Therefore, for institutions that had both a significant amount of non-S&E expenditures and a non-zero amount of expenditures received as a sub-recipient, respondents were asked whether they could supply expenditures broken out for only S&E fields.


Data Availability

The availability of data for Question 7 for the 40 institutions is shown in the table below. More institutions reported receiving federal than non-federal dollars as sub-recipients, and more institutions received funds from higher education sources than other sources. Two institutions submitted the web application with comments in the comment box stating that non-federal amounts were not available. One of the institutions entered zeros in the cells for non-federal; for the second institution, data were captured as “data not available,” as shown in the table. The smallest institutions in the pilot sample (in terms of total R&D expenditures) tended to be the ones which reported receiving zero dollars as sub-recipients.


Question 7 Data Items                         Number of Institutions
                                  Reported Non-zero Data   Reported Zero   Data Not Available
From higher education institutions
   Federal                                  39                   1                  0
   Non-federal                              25                  14                  1*
From other sources
   Federal                                  33                   7                  0
   Non-federal                              18                  22                  1*

*Two institutions provided a comment that non-federal expenditures were not available, but one entered zeros in the cells for these data items.


Ease/Difficulty of Obtaining and Reporting Data

Respondents were not specifically asked how easy or difficult it was to obtain data for this question. Approximately a quarter of the respondents commented that it was relatively easy to supply the data for this question, either because they already have it in their records and/or because it was zero for the reporting year.


Data Repository

Respondents were not specifically asked to explain where the data resided and whether input was required from other offices. Several respondents mentioned having codes and easy access to the data within their own expenditures systems.


Match Between Actual and Desired Response

Except for the institutions which reported that non-federal expenditures data were not available, data for this question were consistent with what the question asked for.


Other Issues

Explanation of Large Change in Dollars Received as a Sub-recipient

Reasons for changes in dollars received as a sub-recipient were explored with approximately 6 respondents. Several of these were not sure what caused their change, but others indicated the likely cause was normal fluctuation over time as contracts ended or started. One respondent explained that the revised question was clearer about what expenses to include as a sub-recipient; with their perception that the survey had broadened to include the categories of applied research and development, they “broadened the net of what we included” in Question 7.


Non-S&E vs. S&E Expenditures in Question 7

The volume of S&E vs. non-S&E R&D expenditures reported in Question 7 was addressed in 11 of the debriefing interviews. Seven respondents said they did not know or were not sure how much of the total they reported was only S&E expenditures, i.e., they would have to review the data to determine the amount. Of the other 4 respondents, 3 guessed that most was S&E, and the fourth guessed that it would be about 75% of the total reported for the question. When asked how hard it would be to supply expenditures for S&E alone for the Question 7 ranking tables, 10 of the 11 respondents said that it would be possible or easy to do.


Question 8: How much of your R&D expenditures reported in Question 1 were passed through by your institution to sub-recipients?

Question 8 – R&D expenditures passed through to higher education institutions and other organizations – was also familiar to respondents due to its inclusion in previous survey cycles. Respondents were asked to explain how they determined the field of research to report for Questions 9 and 12 for the awards that were passed through to other institutions. In addition, any large changes in pass-through dollars from 2008 to 2009 were explored.


Similar to the explanation provided above for Question 7, NSF publishes tables ranking institutions based on pass-through data collected each year. The 40 pilot institutions reported pass-through amounts for S&E and non-S&E combined. For institutions that had both a “significant” amount of non-S&E expenditures and a non-zero amount of pass-through expenditures, respondents were asked whether they could supply expenditures broken out for only S&E fields.


Data Availability

Data availability for Question 8 is shown in the table below. Relative to receiving funds as a sub-recipient in Question 7, fewer institutions reported passing through federal funds. Similar to Question 7, the institutions ranked lower in total R&D expenditures tended to report zero passed through, for both federal and nonfederal sources of funding.



Question 8 Data Items                         Number of Institutions
                                  Reported Non-zero Data   Reported Zero   Data Not Available
To higher education institutions
   Federal                                  29                  10                  1
   Non-federal                              24                  15                  1
To other organizations
   Federal                                  25                  15                  0
   Non-federal                              26                  14                  0


Only 1 institution submitted a comment in the comment box for this question. That institution explained their “data not available” entries for the “to higher education institutions” row: they track only the federal and nonfederal sources, not the type of recipient (higher education institutions vs. other organizations) requested by the question. The comment also explained the two numbers supplied in the “to other organizations” row: they used those cells to balance and get their numbers to align.


Ease/Difficulty of Obtaining and Reporting Data

As with Question 7, respondents considered it easy to report these data because the information was already available in records from prior survey cycles.


Data Repository

Respondents were not asked to explain where the data resided.


Match Between Actual and Desired Response

Only one institution supplied data that was inconsistent with what the question asked for. As explained above, that institution was not able to report for individual cells of the question. Instead, the respondent used the “to other organizations” line in an unintended way – as a spot to supply numbers to balance the survey data.


Other Issues

Field of Research for Pass-through to Other Institutions

Respondents at 18 institutions were asked how they determine the field of research in which to report their passed-through funding in Questions 9 and 12. (Questions 9 and 12 ask for breakouts by field, for federal and non-federal expenditures, respectively.) Seventeen of the institutions described tracking the pass-through awards in a database or report of some type; the other respondent tracked them manually.


Fourteen of the respondents mentioned the level or unit they use to track the pass-through funding. Half of these respondents described coding and tracking the sub-award based on the department that received the prime grant award. One specifically stated that: “If they [the sub] were doing something primarily different, no we don’t have a way to track that.” Three respondents said they code on the basis of the agency/sponsor. The others mentioned coding on the basis of some other entity, such as a division or a category of life sciences.


Explanation of Large Change in Pass-through Dollars

Respondents from 10 institutions were asked what accounted for a large change in pass-through dollars. Half of these respondents said they were not sure of the exact reason for their increase or decrease. Several of the others cited pass-through increases associated with a particular type of work at the institution (e.g., specific departments/programs that increased their pass-through activity for the year). Two cited increases in work with a specific type of outside entity, such as a large cancer research center or medical facilities of various types.


Non-S&E vs. S&E Expenditures in Question 8

Twelve respondents were asked whether they would be able to supply expenditures for S&E alone for the Question 8 ranking tables. Eleven of the 12 respondents said that it would be possible or easy to do. One respondent said that S&E could probably be identified but it is not tracked.




Question 9: What were your FY 2009 R&D expenditures in [field of study] funded by the federal agency sources below?

Data Availability

All institutions were able to report expenditures by source of funding and fields. These data have been reported for past NSF survey submissions.


Ease/Difficulty of Obtaining and Reporting Data

The debriefing protocol did not devote time to Question 9 since its content was not significantly affected by the redesign efforts. However, interviewers did ask respondents how the agency codes tracked at their institutions matched up with the agency categories listed as column headings in Question 9. Most institutions said this was an easy or straightforward process, such as running a report based on an agency code or rolling up sub-agency codes to match the column headings.
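
As an illustration of the roll-up process respondents described, the following is a minimal sketch in Python. The sub-agency codes, the mapping, and the dollar amounts are hypothetical and stand in for whatever codes an institution actually uses; DOD, HHS, and Energy are shown because they are among the Question 9 column headings mentioned elsewhere in this report.

    from collections import defaultdict

    # Hypothetical mapping from an institution's sub-agency codes to the parent
    # agencies used as Question 9 column headings (codes are invented examples).
    PARENT_AGENCY = {
        "ONR": "DOD",       # Office of Naval Research rolls up to DOD
        "AFOSR": "DOD",
        "CDC": "HHS",       # Centers for Disease Control rolls up to HHS
        "NIH": "HHS",
        "DOE-SC": "Energy",
    }

    def roll_up(expenditures):
        """Sum (sub_agency, amount) records by parent agency; unmapped codes go to Other."""
        totals = defaultdict(float)
        for sub_agency, amount in expenditures:
            totals[PARENT_AGENCY.get(sub_agency, "Other")] += amount
        return dict(totals)

    # Invented example records
    print(roll_up([("ONR", 120000.0), ("CDC", 75000.0), ("USIP", 15000.0)]))
    # {'DOD': 120000.0, 'HHS': 75000.0, 'Other': 15000.0}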


Only 3 of the respondents mentioned needing to complete some type of manual step to produce the data for the Question 9 agency and “Other” columns. These steps were described as follows.


  • One respondent “had to do some manual work to group sub-agencies into agencies,” e.g., into the columns for DoD and Energy.

  • One respondent worked through 180 grants to determine original sources of funds and categorize into prime sponsor agencies.

  • One respondent described a process of calculating each column of Question 9 individually based on all the information used for the survey.


Data Repository

These data seem to be available to the respondents within their expenditures databases; none mentioned needing to coordinate with other offices to report these data.


Match Between Actual and Desired Response

As explained in detail below for Question 10, some institutions reported funding in the “Other” category that should have been reported in the federal agency columns explicitly listed in Question 9, e.g., DoD and HHS.

Question 10: Of the amount reported for “other” federal sources reported in Question 9 (row K, column g), which agencies funded this R&D and how much of the reported amount was from each agency?

Data Availability

Thirty-four of the 40 institutions reported data for the “Other” federal sources column in Question 9 (Expenditures by Field and Source) and listed names of specific agencies and the R&D expenditures for them in Question 10. Six institutions had no other federal funding so did not enter data for the question. None of the institutions reported that the data were not available during the interviews.


Ease/Difficulty of Obtaining and Reporting Data

Almost all of the 34 respondents who entered “Other” federal sources in Question 10 said it was easy to report their data.


Two of the respondents reported having difficulties entering their data into the web application, as follows.


  • One of the respondents from an institution with no “other” funding reported that she was not able to leave the question blank when she wanted to move to the next question.

  • Another respondent had an auto-totaling issue when entering data for Question 9, so mistakenly reported the same number in Question 10. The survey was submitted with this error; it was not flagged during the data checks. (The data error was subsequently corrected so as not to over count the funding by agency.)

Data Repository

Funding source data seemed to be uniformly available in expenditures databases. Institutions track and use funding source data for their own purposes as well as to report on the annual NSF survey.


Match Between Actual and Desired Response

Although respondents said that these data were easy to obtain, the debriefing interviews revealed both some reporting errors and some inconsistencies in the ways that institutions establish and maintain their funding source records.


Reviewing the survey submissions before the debriefing interviews revealed the following types of reporting errors.


  • Sub-agencies listed in Question 10 that should have been included with an agency explicitly listed as a column in Question 9. Examples are: (1) Office of Naval Research (ONR), which should have been reported as part of DOD, and (2) Centers for Disease Control and Prevention (CDC), which should have been listed under HHS. Responses for approximately four of the institutions indicated this type of error had occurred. (Note that the number four is based on the discussion that took place during the debriefing interviews, not the actual survey response data.)

  • Sub-awards listed separately in Question 10 that should have been rolled into the appropriate federal agency column in Question 9. These were classified and reported in the “Other” column in Question 9 and therefore listed separately in Question 10. Several of these were listed by a university name or other proximal source (e.g., University of California) and some were listed as "sub-award." In one case, an institution listed “Department of Education” and “Department of Education sub-award” on different lines in Q10. Responses for approximately five institutions indicated this type of reporting error.

  • Unique “multi-agency” residual groups that should have been reported on the “Other” line provided in Question 10.

  • Reporting grants funded by the same agency in Question 10 as well as in Question 9. One institution listed “Department of Agriculture” under one code and “Department of Ag” separately; the respondent explained that this type of error occurred because multiple staff members assigned funding source codes with slight differences in how the agency name was typed.

Respondents were asked to explain these occurrences during the interviews. In some cases, they realized their errors. However, several respondents said they would need to check with someone else to provide an explanation. Respondents said they reported “Other” in Questions 9 and 10 based on how agencies are tracked and coded at their institutions, and that they could change this for future reporting if given specific instructions.


Some of the institutions used the “Other” line in Question 10. When asked how many additional agencies were lumped together on this line, respondents reported:


  • “Probably less than 10”

  • “30 agencies”

  • “The biggest ones I broke out, but the rest I just left in k [Other]”

  • “Only 2 – US Institute of Peace, HUD”

  • “CIA, Nuclear Regulatory Commission, Coast Guard (respondent was not sure why it was not reported as part of DoD), and National Geospatial Intelligence Agency”

Other Issues

Two other errors were identified for this question:


  • One respondent said that he thought Question 10 was a “summary” of Question 9, so reported the same expenditure total for both questions. This may have been due to an issue with the auto-totaling feature that was resolved soon after the pilot survey was distributed. The respondent was from one of the two small institutions which completed the survey early.

  • One institution’s response showed some of the lines in Question 10 as blank, but there was also a number for the “Other” row. When asked what accounted for this odd response, the respondent could not explain why and did not know the breakdown of agencies that had been lumped into the Other category.

Respondents were asked whether they would prefer to select agencies from a provided list in the web application, or type them, as they did on the pilot version of the survey. Respondents were evenly divided in their opinions. Some cited having a list to choose from as an advantage, because it would inform them of what level of agency NSF wished them to report. Others who had only a few agencies to list in Question 10 said it was easy enough to just type them in, vs. finding their needed agencies within what they envisioned to be a long list.




Question 11: How much of the federal R&D expenditures amount reported in Question 9, row K, column h, took place in interdisciplinary research centers at your institution?

Question 13: How much of the nonfederal R&D expenditures amount reported in Question 12, row K, column f, took place in interdisciplinary research centers at your institution?

Data Availability

Of the 40 institutions, 5 said that they were unable to produce the data for Questions 11 and 13. Twelve reported true zeros, meaning that they did not have any interdisciplinary research centers. Two of the 23 institutions that did provide a response for Questions 11 and 13 said the number was an estimate.


Ease/Difficulty of Obtaining and Reporting Data

Interdisciplinary research centers that reside in separate physical spaces (an entire building or part of a building) usually have separate accounting systems and are relatively easy to report on. On the other hand, reporting difficulty arises when interdisciplinary research centers are “virtual” in that they do not reside in a particular physical space but are spread over multiple departments or other organizational units. Each organizational unit reports for itself, and the “interdisciplinary” aspect is lost. Another issue is the proliferation of “research centers” at higher education institutions: unless a research center has “interdisciplinary” in its name, some respondents would have no way of knowing whether interdisciplinary research was conducted there.


Data Repository

The expenditure data for Questions 11 and 13 reside in the financial systems databases.


Match Between Actual and Desired Response

Some institutions reported interdisciplinary research as research that involves more than one department. Other institutions reported only the expenditures that occurred at interdisciplinary research centers. This suggests that the concept of “interdisciplinary” is not stable across institutions and respondents. In other words, respondents do not apply a uniform definition of “interdisciplinary” and thus what they report as “interdisciplinary” reflects fundamentally different concepts.




Question 12: What was your FY 2009 R&D expenditures in the [field of study] funded by the nonfederal sources below?

In previous years, institutions have reported R&D expenditures broken out for nonfederal sources, at the broad level collected in Question 1. With the survey revision, Question 12 required more detailed reporting -- R&D expenditures from nonfederal sources broken out by R&D fields. Therefore, in the debriefing interviews, we addressed whether the institutions were able to report the data at this level of detail, and whether they had the data in one database vs. pulled from multiple sources to report the data. In addition, for institutions that did not provide data for the institutional funds column, we probed about the reasons for not being able to report the data.


Data Availability

The table below presents the availability of totals for each funding source across the 40 institutions.


Question 12 Line Items:               Reported           Reported     Data Not
Column Totals                         Non-zero Data      Zero         Available
(a) State and local government             39*               1            0
(b) Business                               35**              5            0
(c) Nonprofit organizations                37                3**          0
(d) Institutional funds                    33                7            0
(e) Other nonfederal sources               20               20            0

* One institution reported a negative number for state and local source.

** One institution submitted a comment with the survey stating that their system does not separate business and nonprofit; they reported all combined expenditures under business and “0” for nonprofit organizations.


Of the 40 institutions, 7 reported zero for the column of institutional funds. During the debriefing interviews, respondents from 5 of these institutions described the circumstances for reporting zero as: non-existence of faculty research grants, no institutional funds to work on research, no R&D supported by internal funding, etc.


Two institutions clarified that data were not available to report institutional funding. One clarified that direct funding of research was from departments, not from her office, so she did not have the data to report. The other realized during the discussion that she did not interpret PI research salaries paid by the institution as institutionally financed research, but she should have included them. NSF instructed that these responses remain as is in the survey database, rather than be changed to “Data not available.” This decision was made in order to maintain consistency with the reporting practices of the existing survey. (There are currently many institutions which report zero institutional funding because they are unable to track these funds.) Beginning with the FY 2010 survey, an effort will be made to determine zeroes vs. missing data for all institutions’ survey responses.


Ease/Difficulty of Obtaining and Reporting Data

Of the 34 interviews in which this question was discussed, 17 respondents said they had data readily available to report at this level of detail for all sources of funds. The situations at the other 17 institutions were as follows.


  • One institution submitted their survey with the statement “Unable to provide data” in the comment box for Question 12. In the debriefing, they clarified that they manually reviewed grants to determine the nonfederal source. They reported all funds in the row of “Other sciences” because they could not provide a breakout by the listed R&D fields.

  • During the debriefing interviews, two respondents said their institutions don’t track at this level of detail for funding sources. One respondent described going through each record to identify nonprofits and businesses in order to report numbers for the question. The other described gathering data (to determine state, local, and nonprofit) to add to his spreadsheet and then “On a few of them I went back and tried to piece it together.”

  • The others described having R&D field breakouts for one or more of the nonfederal sources, but not all.

  • Four respondents had difficulty reporting for institutional source of funding, remarking that it was the “most difficult and time consuming,” and required parsing out unrecovered indirect costs across fields.

  • Four respondents had difficulty reporting for the business and nonprofit categories, e.g., breaking them out of a larger “private” category, looking through separate records since all nonfederal funding is received through a cooperative R&D agreement.

  • Two respondents had trouble reporting for the state and local category. One of these described reporting state government in the past as “Other,” but will take steps to be able to separate it in the future.

Data Repository

The respondents who reported having the detailed data available for all sources were asked whether the data were housed in one database. Sixteen of the 17 replied that the data were within one system. The institution that reported using two databases described having to add in clinical trials through mostly a manual process.


Match Between Actual and Desired Response

As described above, most of the respondents were able to report breakouts of the nonfederal funding sources by field. Half of the respondents had the data readily available for easy reporting according to the sources as classified by NSF.


Other Issues

One institution reported a negative number in the column for state and local government. The respondent explained that “sometimes closeout of grants crosses fiscal years.” NSF advised the respondent to do this type of revision in the previous year’s report.


Question 14: Of the total amount of R&D expenditures reported in Question 1 what were the amounts for the following types of costs?

The pilot debriefing questions concentrated on two parts of the question: (1) line item b: availability of software purchases data – for both non-capitalized and capitalized software – and (2) line e: Other direct costs. Additional questions about costs included in the “salaries, wages, and fringe benefits” line item were asked of 19 respondents who participated in the later interviews.

Data Availability

Data availability across the 40 institutions is shown for the Question 14 data items in the table below. All institutions were able to report salary and other direct cost data. Based on the actual submitted survey responses, unrecovered indirect cost data were not available for 8 institutions. (Note that during the debriefing interviews, 5 respondents stated that they did not track it and did not report it.)


Question 14 Data Items                                       Number of Institutions
                                                      Reported           Reported     Data Not
                                                      Non-zero Data      Zero         Available
Direct Costs from All Sources
    a. Salaries, wages, and fringe benefits                40                0            0
    b1. Non-capitalized software purchases                 23               13            4
    b2. Capitalized software purchases                     12               25            3
    c. Capitalized equipment                               35                4            1
    d. Pass-through to other universities or
       organizations                                       31                6            3
    e. Other direct costs                                  40                0            0
Indirect Costs
    f. Recovered indirect costs                            37                2            1
    g. Unrecovered indirect costs                          27                5            8
    h. Total                                               40                0            0


Ease/Difficulty of Obtaining and Reporting Data

Salaries, Wages, and Fringe Benefits

Nineteen respondents were asked how they obtained their data for this line item. They all reported having a specific account code for salary, or separate codes for salaries/wages and fringe, which they combined to report for this line item.


Software Purchases (lines b1 and b2)

Approximately three-quarters of the 40 institutions reported having codes that provided the capability to track and report non-capitalized software, capitalized software, or both. However, as shown in the table above, many of these reported zero for one or both.


Respondents gave the following (paraphrased) explanations for reporting zeroes.


For non-capitalized software:


  • Not comfortable reporting this line item this year

  • Reporting a number would have required too much effort

  • Breaking out non-capitalized software from other direct costs would have required a lot of respondent’s effort

  • Would have required reviewing individual department purchases line by line

  • Software is not tracked

  • Software that is not capitalized counts as supplies (2 institutions)

  • Software may be part of an equipment (computer) purchase

  • Software is tracked, but institution had none for reporting year (6 institutions), e.g., budget constraints prevented purchases of software during year

For capitalized software:


  • Capitalized software is not charged to R&D accounts (3 institutions)

  • Institution’s capitalization threshold is high, so it’s rare to capitalize software (4 institutions)

  • Capitalized software comes installed on equipment and is not broken out separately in the accounting system

  • Institution does not capitalize software (3 institutions); e.g. R&D software is difficult to capitalize; cannot estimate length of useful life

  • Software is not tracked (3 institutions)

  • Software is tracked, but institution had none for reporting year (11 institutions)

Respondents who reported that data were not available provided the following explanations:


  • Software is not tracked in the respondent’s database

  • Institution does not have R&D software, so line item is not applicable

  • Institution tracks software, but none was for R&D software during fiscal year

The explanations that respondents provided for zeros and for data not available overlap, indicating that respondents were confused about which circumstances qualify as “data not available” and which should have led them to report zero for the parts of line item b.


Other Direct Costs (line e)

Respondents generally described other direct costs as an easy number to report. Many stated that they put “everything else” that did not fit the wording of any other direct cost line into this category. Some explained that they worked backwards: they started with an overall R&D total (“all of the costs of the grants”), then subtracted each of the separate types of costs tracked in their systems; the remainder was the number they entered here.
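
A minimal sketch of that backwards calculation is shown below; the dollar amounts and the set of tracked cost categories are invented for illustration and will differ by institution.

    # Invented figures illustrating the "work backwards" approach to line e.
    total_rd = 10_000_000                      # overall R&D total (Question 14, line h)
    tracked_costs = {
        "salaries_wages_fringe": 6_200_000,    # line a
        "software_purchases":      150_000,    # lines b1 + b2
        "capitalized_equipment":   400_000,    # line c
        "pass_through":            900_000,    # line d
        "recovered_indirect":    1_300_000,    # line f
        "unrecovered_indirect":    250_000,    # line g
    }
    other_direct_costs = total_rd - sum(tracked_costs.values())  # remainder -> line e
    print(other_direct_costs)  # 800000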


Data Repository

Respondents mentioned that types of costs data came from the general ledger or the institution’s accounting system.


Match Between Actual and Desired Response

Salaries, Wages, and Fringe Benefits

Nineteen respondents were asked specific questions to determine whether the following were included within the total reported for salaries, wages, and fringe benefits:


  • Students paid with research funding

  • Tuition waivers

Institutions were generally consistent with one another in their treatment of each of these cost categories, although the treatment differed between the two. Seventeen of the 19 respondents asked these questions included student salaries/wages in their reported total for salaries, wages, and fringe benefits. One respondent excluded students and another did not clarify whether these expenditures were included or excluded.


Of the 19 respondents, 13 did not include tuition waivers in reporting salaries, wages, and fringe benefits. Several explicitly stated that they classified waivers as “other direct costs.” Four respondents were not sure whether tuition waivers were included with salaries or not and one did not explain how waivers were treated. One respondent said that tuition waivers definitely were included because the institution considers them fringe benefits.


Software Purchases (lines b1 and b2)


As noted above in the discussion of software purchases, institutions varied in their ability to track and provide meaningful reports of software expenditures, especially for capitalized software.


Other Direct Costs

When asked for specific instances of expenditures for this line item, most respondents cited the categories listed with the question, i.e., travel expenses, supplies, consulting, and computer usage fees. A few respondents mentioned specific instances of the general categories listed in the question, such as supercomputing expenses.


Respondents also mentioned the following:


  • Tuition waivers

  • Scholarships

  • Repairs

  • Participant costs

  • Non-capitalized equipment (2 respondents)

  • Salaries and wages for resident instruction unit

  • Lab supplies (2 respondents)

  • Equipment rental

  • Postage, insurance


Institutions which do not code and track software purchases separately seemed to capture these expenditures in the Other direct costs category.








Question 15: At the end of FY2009, what were your institution’s dollar capitalization thresholds (in thousands) for software and equipment?



This question was not covered during the debriefing interviews.



Question 16: For the fields of R&D below, what portion of your FY 2009 R&D expenditures went for the purchase of capitalized R&D equipment?

Of the 40 institutions, 19 provided comments on changes in capitalized R&D equipment expenditures compared to the previous year. The 19 comments fell into three main categories: (1) natural variation; (2) unknown explanation; and (3) variation related to fluctuation in equipment grants, since some years will have more equipment grants than others.


Paraphrased comments from the institutions that focused on natural variation include the following:


  • Just the nature of research -- normal variation.

  • The difference [between the two years] is based on normal variation.

  • These expenditures [capitalized equipment] have the potential to vary considerably.

  • This is an example of peaks and valleys. Next year it may be way down.

  • This [capitalized equipment] comes in spurts – occasionally we get a grant for equipment or we get a grant that has equipment included in the budget.

  • [Significant increase in 2009] just normal variation.

The institutions that focused on the characteristics of the grant made the following types of (paraphrased) comments:


  • Probably due to the fact that we’ve gotten some instrumentation grants.

  • We received a large equipment grant in the medical school. We have a new medical school building and had a grant to purchase new equipment for the building.

  • This depends on the grants – when the expenses hit. One year we could have a major instrumentation grant and the next year not have any. A lot of our equipment comes from those kinds of grants.

  • There was a grant for equipment, but the money has not been used in recent years.

There were also a few institutions that explained the variation in capitalized R&D equipment expenditures with idiosyncratic reasons. The following comments reflect the “idiosyncratic” group:


  • One institution said that an error had occurred in their general ledger. Negative numbers had been entered. The correction was made in 2009 and this at least partly explains the jump in capitalized R&D equipment expenditures.

  • Another institution said that the university had experienced a bad financial year and the decrease in equipment purchases was a function of the overall financial constraints.






Question 17: How many principal investigators and other personnel (headcount) were paid from the R&D salaries and wages you reported in Question 14, row a?

Data Availability

Thirty-three of the institutions were able to provide data for Question 17. Seven institutions did not report number of principal investigators (PIs). Two of these 7 institutions were in the top 10 institutions when ranked by total R&D dollars reported for Question 1 on the survey, and 5 of them were in the top 20. Six of the same institutions that did not have PI data also did not report a number for “All other personnel.”


During the debriefings, respondents from 3 of the institutions that did not report numbers for this survey question explained that they ran out of time and submitted the survey before completing this question. One respondent reported that the data were available but did not want to ask for an additional extension in order to report them. Another said that the required process – running a separate report for each grant – was not possible due to a staff shortage in the payroll office (where the data reside). The third respondent reported not having the time to connect a grant code to the personnel data by the survey deadline.


Across the 33 institutions that reported numbers for the question, there was a wide range of research personnel, as follows:


  • For PIs, the number ranged from 3 (at two of the smaller institutions) to 5,612.

  • For “All other personnel,” the numbers ranged from 0 to 13,443.

  • The total number of personnel paid from R&D salaries and wages ranged from 7 to 17,555.

Ease/Difficulty of Obtaining and Reporting Data

Most respondents described working with two separate databases in order to respond to the question. The process generally required running reports from both a Human Resources (HR) database that lists, by title, all personnel paid from a research grant account, and their research expenditures database or general ledger.


Many respondents mentioned requiring assistance from another office, such as Human Resources or Information Technology, in order to produce the reports. Many described completing a tedious process to identify numbers for personnel, once they had reports from their databases. Some mentioned reviewing lists of grants to identify the PIs. Others reviewed lists of staff to identify which staff were PIs vs. “All other personnel.” At some of the smaller institutions, the respondents reviewed individual grant files because there were so few and/or they could easily identify personnel based on familiarity with the nature of the contracts and grants.


Data Repository

As noted above, most respondents indicated that personnel data were maintained in a human resources database. These databases are separate from the expenditures database; respondents completed some type of electronic crosswalk or manual process to identify staff paid from the salaries and wages amount reported in Question 14, as instructed by the wording of Question 17.
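
The following is a minimal sketch of such a crosswalk; the record layout, role labels, and account codes are hypothetical and stand in for whatever an institution's HR and expenditures systems actually contain.

    # Hypothetical HR payroll records: (person_id, account_code, role)
    hr_records = [
        ("p01", "G-1001", "PI"),
        ("p01", "G-1002", "PI"),         # same PI charging to a second grant
        ("p02", "G-1001", "postdoc"),
        ("p03", "G-1003", "technician"),
    ]
    # Grant accounts present in the R&D expenditures database (Question 14, line a)
    rd_accounts = {"G-1001", "G-1002", "G-1003"}

    # De-duplicated headcounts: a person on several grants is counted once,
    # and a PI is not also counted under "All other personnel".
    pis = {pid for pid, acct, role in hr_records if acct in rd_accounts and role == "PI"}
    others = {pid for pid, acct, role in hr_records
              if acct in rd_accounts and role != "PI"} - pis

    print(len(pis), len(others))  # 1 2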


Match Between Actual and Desired Response

One institution reported 65 PIs. In the debriefing, it became apparent that the institution had reported Full-Time Equivalents (FTEs) instead of headcount. The respondent relied on different departments to identify their personnel, then added these counts together. When asked to estimate the approximate headcount for the 65 FTEs, the respondent estimated that it would be about 300, including contractors.


Issues that surfaced during the debriefing interviews indicated a number of inconsistencies: in interpretation of what to include when reporting the two numbers, in the ways that institutions track and make data available, and in the ways that respondents worked through their raw data to derive counts or estimates. The following are specific issues that respondents mentioned.


  • Some institutions do not have a code for PIs, so they used an alternative. For example, two institutions reported all “faculty” as a close approximation to PIs. These are likely to be over counts of PIs.

  • Several institutions mentioned that they included administrative staff in their count for “All other personnel.” Several others explicitly stated that they did not include administrative staff, e.g., because they did not spend much time on grants.

  • One institution could not distinguish between paid and unpaid students.

  • While completing the survey, several institutions had asked NSF whether to include students. Based on the affirmative response, they did include students paid from grants.

  • Several institutions were unsure how to count co-PIs within their headcount – as 1 or 2. They tended to report based on how their systems handled these contracts; e.g., if their system accepts only one PI per contract, they reported one PI.

  • One institution provided a partial count; they could only report personnel from their research institute. The respondent thought that the institution might be able to report more accurately in the future.

Other Issues

Approximately three-quarters of the respondents were asked the following three questions about how they counted research personnel in specific contexts.


  • Were personnel who charged to more than one grant included in their count?

Responses to this question indicate that there were inconsistencies in how institutions keep records and how respondents were able to retrieve and report the personnel data. During the debriefing interviews, some of the respondents specifically mentioned that they de-duplicated their lists, for example, eliminating any PIs listed twice due to working on two or more grants. However, four of the respondents described processes that indicated they had likely double-counted personnel. Another respondent said he was not sure whether his count had been de-duplicated. One respondent pointed out that the same person could have been counted as a PI and also in the “All other personnel” category if working on additional grants with other faculty PIs.

  • Were PIs who don’t charge salary to a project included in their count?

Responses to this question again indicated inconsistencies in how respondents reported the personnel counts. The question asked for personnel paid from the R&D salaries. Most respondents seemed to follow that direction based on their answers to this question. Sixteen respondents indicated that if PIs did not charge time to a project, they were missed in their count. Two respondents were not sure if their counts missed any non-charging PIs. Two respondents described a process where they counted all PIs listed on awards (vs. who charged to a project). Three participants responded to the question by saying that if non-charging PIs’ time had been classified as cost-sharing, they could track and include these individuals in their personnel counts.

  • Does the institution have any research PIs who don’t charge to a project during a fiscal year?

Eighteen of the 25 respondents who were asked this question discussed cost-sharing of PI time. The respondents seemed to use the term cost-sharing inconsistently. They said that cost-sharing either did occur (15 respondents) or was possible (3 respondents) at their institution. There was some uncertainty about whether PIs might not formally charge to a grant during a year (e.g., due to being paid for teaching, not research). The extent of cost sharing and the ability to track it varied considerably across institutions. Several respondents explicitly said that they do not have a way to count personnel who cost-share their time, while others mentioned having specific cost-sharing accounts to track these hours. Several respondents indicated that departmental research accounts may cover some of the PI time not charged to grants.



Question 18: Of the headcount reported in Question 17, column 3, how many are postdocs, that is, Ph.D. researchers working in temporary positions primarily for training in research?

Data Availability

Of the 40 institutions, 34 reported data for the postdoc question. Of the 6 institutions which did not report a postdoc number, two cited time constraints as the reason for not providing an actual count or an estimate. Eleven institutions reported 0 postdocs; these were ranked in the lowest 15 of the pilot institutions based on R&D expenditures.


Ease/Difficulty of Obtaining and Reporting Data

One respondent did not understand the meaning of “postdocs.” In general, respondents from the smaller institutions easily reported that they either did not currently have any postdocs or that their institutions do not have postdoc programs (e.g., as primarily an undergraduate institution, postdocs are not a focus). One institution reported zero on the survey, but the respondent stated during the interview that the institution now had two “quasi-postdocs” and had had one postdoc at the time of the survey, based on “what’s between my ears.”


The institution which provided a partial count for PIs in their research institute reported only one postdoc, also in their research institute. However, they discussed having over 100 postdocs employed for the “residential instruction side” for FY2009 but that these data were not available this year. The respondent said that considerable additional programming would be needed to report this number in the future.


Data Repository

Respondents mentioned using numbers supplied primarily from human resources offices. One respondent reported that their Sponsored Programs office had a mechanism for tracking postdocs in their system.


Match Between Actual and Desired Response

Respondents were asked for their reaction to the definition of postdoc provided within Question 18. Several respondents said that other offices assisted with answering this question, so they did not have an opinion about the definition or its match with the use of the term at their institution. Approximately a third of them were generally positive, saying their institution’s definition was a reasonable or close match to the survey definition. Three respondents said that they did not think about the definition but instead just reported the numbers associated with their institutions’ code(s) for postdoc.


Interviews revealed problems that institutions had with the postdoc definition. Four respondents took issue with the word “temporary,” saying that their postdocs were not considered to be temporary. One respondent interpreted the question more broadly, saying that he thought the question was “…asking for temporary positions beyond postdocs as well.”


Four respondents also pointed out that their institutional definitions of one of the postdoc title examples -- “research associates” -- did not coincide with the provided definition. A research associate might not have a Ph.D. In these cases, respondents were not sure whether to include research associates in the count.


Other Issues for Personnel Questions

Respondents were asked if they had any suggestions for improving the instructions for the personnel questions. Their suggestions included:


  • For postdocs, specify not to include graduate research assistants.

  • Provide examples of “All other personnel” – whether this means a specific group or any non-PI that charged to an R&D account.

  • Specify whether or not to include students.

  • Provide an instruction about whether to count PIs once regardless of how many grants they charge to.

  • Provide an instruction about whether to count co-PIs as 1 or 2.

  • Specify for PIs: total headcount vs. per grant.

  • Specify whether to include PIs if they are not paid from grants.

When asked whether it would be easier or more difficult to report FTEs in place of headcount, over 30 respondents said that FTEs would be more difficult to report. Several mentioned that they are currently reporting FTEs for ARRA purposes.

Question 19: How many R&D proposals (S&E and non-S&E) were submitted by your institution to government agencies, foundations, or other funding sources outside of your institution in FY 2009? Include proposals for grants and contracts and any other documents or actions that were used to apply for R&D funding.

Data Availability

In general, the larger R&D institutions had systems in place that allowed them to answer this question in an automated way: they wrote a query or ran a report. The smaller R&D schools had to use manual processes. The larger schools reported the fewest problems with this question and the smaller institutions reported the most. Overall, the question seemed to work; even the institutions that used a manual process understood what was being asked, and there were virtually no comprehension problems.
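
As an illustration of the automated approach, the following is a hypothetical sketch of such a query against a pre-award database; the table, columns, proposal records, and dates are invented, and a July-to-June fiscal year is assumed.

    import sqlite3

    # Build a tiny stand-in for a pre-award proposals database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE proposals (id TEXT, is_rd INTEGER, submitted TEXT)")
    conn.executemany("INSERT INTO proposals VALUES (?, ?, ?)", [
        ("P-001", 1, "2008-09-15"),
        ("P-002", 1, "2009-03-02"),
        ("P-003", 0, "2009-04-10"),   # a non-R&D (service) proposal, excluded
    ])

    # Count R&D proposals submitted during FY 2009 (assumed here as 7/1/2008 to 6/30/2009).
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM proposals "
        "WHERE is_rd = 1 AND submitted BETWEEN '2008-07-01' AND '2009-06-30'"
    ).fetchone()
    print(count)  # 2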


Thirty-nine of the 40 institutions provided a usable answer for this question. The one respondent who could not provide a number entered a zero and added a comment that this information resided outside of their systems and that they did not have access to it. In the debriefing, it was established that this was not a true zero but actually “data not available.” Three other institutions also provided comments; their comments focused on what the numbers included and how the numbers were obtained.


The 1 institution that could not provide the number of proposals was a small liberal arts college with mostly non-S&E R&D expenditures.


Of the 39 institutions that could provide the number of R&D proposals, 5 could not report how the R&D proposals were tracked or how the number had been produced. The number had been given to them by a different department and they had not questioned how it had been obtained.


Ease/Difficulty of Obtaining and Reporting Data

A number of institutions reported somewhat idiosyncratic circumstances regarding R&D proposals.


  • One institution reported that they had to ask their departments to provide the number of proposals and they aggregated the individual numbers to arrive at a total. This institution also had an issue of some research being conducted without a formal proposal process. Adding further complexity, some of their R&D is funded by purchase agreements that are tracked under a different accounting system than the standard R&D projects.

  • Another institution said that their institution had a pre-award database, but R&D awards were not coded separately. This institution reported everything in the pre-award database, even though they were aware that some of the proposals were for “service” tasks (for instance, lab analysis). This institution understood that they were over-reporting their proposals, but had no way at present to correct the reporting error.

  • A respondent at a small private institution said that he reported from memory (30 proposals for 2009).

  • Another respondent interpreted Question 19 as asking for S&E proposals only. This respondent’s institution did not have any S&E proposals, so he reported zero. The respondent corrected this by later reporting 9 R&D proposals.

Data Repository

Twenty-six of the institutions reported that R&D proposals were something that they tracked and they had systems to produce the data.


Four institutions reported that obtaining the number of R&D proposals was a manual task. The proposals were tracked in an Excel spreadsheet and counted manually.


Match Between Actual and Desired Response

Seventeen of the 40 institutions commented on the definition of R&D proposals provided in Question 19. The institutions had a basic understanding of what an R&D proposal is, and this generally matched the HERD Survey definition.


  • One respondent commented that anything that goes out from the institution asking for money is a proposal.

  • Another respondent said that anything that comes through their sponsored programs office as a request for R&D funding is counted as a proposal.

  • Only one respondent commented on the clarity of the definition and this respondent said that “the definition was very clear.”

  • Another respondent wondered how clinical trials should be reflected in the number of R&D proposals now that they are explicitly included in the survey. Clinical trials are not always initiated with a proposal submitted by the institution.

Question 20: How many R&D projects in both science and engineering (S&E) and non-S&E fields were AWARDED to your institution in FY 2009 from the sources below and what were their dollar amounts?

Data Availability

Overall, institutions provided an answer for Question 20. Of the 40 institutions, only 2 did not provide survey responses for Question 20. Of the 38 institutions that provided data, 22 commented on how they identified R&D awards. Respondents seemed to be able to identify awards that were for R&D projects.


  • Of the 22, 18 institutions indicated that they conducted a database search to provide the number of R&D awards. For these institutions, the proposal and awards databases are linked.

  • One institution said that “awards were identified by the departments they were submitted through” and another institution said that information on awards was provided by another office.

  • Another small institution suggested that reporting the number of awards was a manual task.

  • Only 1 institution commented that they had R&D projects and expenditures that had not originated with a proposal. This institution talked about the lack of a “paper trail.”

  • The concern about how to include clinical trials voiced in Question 19 was reiterated regarding how to count clinical trial projects in the awards section.

Ease/Difficulty of Obtaining and Reporting Data

Most of the institutions were able to query a database to retrieve the information. A few institutions reported using a manual process to produce the numbers. This is discussed in greater detail below.


Data Repository

The data often resided in a database outside of the financial system. The awards data often resided with the sponsored project office or the pre- and post-awards office. This is discussed in greater detail below.


Match Between Actual and Desired Response

There were a number of issues discussed related to how institutions defined awards and award amounts. Much of this had to do with how institutions treated multi-year awards, contingent funding, and other conditions attached to the award.


Multi-Year Awards

The institutions were asked how they counted multi-year awards. Four of the institutions made no comment about their treatment of multi-year awards. Of the 36 that commented, 7 said that they “did not know,” “had to ask someone else,” or simply made an extraneous statement. The twenty-nine remaining institutions fell into two groups: (1) 18 institutions reported only the funds released to them during the given fiscal year and (2) 11 institutions reported everything they expected to receive regardless of funding actually released.


Group 1 Comments

  • One respondent said that multi-year awards were reported only to the extent that they were “obligated.” Awards that were not obligated did not get entered into the financial system and thus were not reportable. To report on “non-obligated” would mean altering the financial system.

  • Another respondent mirrored this comment by saying that all they could report was a single year’s worth of funding on multi-year awards. This respondent also said that reporting any other way would require changes to their financial system.

  • Another respondent said that they used the “annualized budget” and thus reported the amount received or budgeted for the current fiscal year. Reporting the total amount for multi-year awards would be misleading because it would overstate the actual annual amount (i.e., amounts intended for a number of years would be reported for a single year). NIH reports on an annualized basis, so annualizing their budgets makes them comparable to NIH.

  • Another institution said that they had reported only funds received – not future amounts, which are not always specified.

  • Another institution said that they reported only the current amount received, not future years. Budget cuts can be made to the grant, so the total amount is never guaranteed.

Group 2 Comments

  • One respondent said that they reported the way the “second bullet” instructed them to, but they were not completely comfortable with doing it that way.

  • Another respondent said that they reported the full amount of the award, regardless of the number of years it was meant to cover.

  • Another respondent said that they report the full amount for multi-year awards. For example, one million dollars over two years is reported as one million dollars in year 1. This reflects the way their databases are set up.

  • Another respondent said that if they get a $150 million award over three years, they would count the entire amount, not $50 million per year.

  • Another respondent said that he/she used the total on the award notice, not just the year 1 funding. However, it was noted that some agencies promise money for multiple years up front and other agencies only guarantee year 1 of funding.

Did Institutions include Renewals?

Of the 40 institutions, 21 commented on whether they included awards with renewal years.

Three of the 21 respondents who commented said that they did not know or needed to ask someone else. Four institutions said that they did include awards with renewal years. Eleven institutions reiterated the “annualized” principle of reporting explained above and either stated or implied that they had not included renewal years. Three institutions reiterated that new awards were counted, but continuations were not counted.


The 4 institutions that reported including the renewal years made the following paraphrased comments:


  • Those [with renewal years] are more the exception, but are included.

  • If the award letter says $150 million over 3 years, we would count the entire amount in the year it was awarded. [This statement was reiterated for this probe].

  • For multi-year awards, they used the total on the award notice, not just the year 1 funding.

For the 11 institutions which used the annualized principle of reporting or did not include renewal years, the respondents made the following types of paraphrased comments:


  • We did not include recommended funding for future periods.

  • We reported on the annual award amount received during the fiscal year.

  • If something is incrementally funded over 3 years we count the increments when they’re actually received versus at the start of the project.

  • Only the dollar amounts for that year, not amounts from previous years.

  • Only if the renewal year was active and had activity in the reporting period.

  • Each year’s amount would be reported separately for each year.

One respondent used the term “released” and said that they counted the amount that was released. Another respondent said that it was arbitrary. This respondent used the budget period to decide what to include. Thus, this respondent reported an annual amount.



How Did Institutions Handle Incremental Funding?

The set of probes used in the section on incremental funding changed on March 10. The revised probes more precisely aligned with the way respondents talked about incremental funding. The revisions reflected changes to the way the probes were delivered, not the content of the probes. Nevertheless, the change in probes makes it difficult to get exact counts of how many institutions had particular types of problems. The interviews clearly indicated significant issues with how institutions understood and handled incremental funding. Here is a sample of some of the issues that emerged.


Respondents discussed different ways of receiving R&D funding—that is, different granting agencies use different procedures when making awards. Some granting agencies make an award for a number of years (often 3 or more years), but the funds are released or conferred on an annual basis. Each year of the award has an “award” and a budget. Other agencies grant awards for shorter stretches of time such as 2 years, but these granting agencies release or confer all of the funds at the start of the award. Another situation was described by a respondent at a small institution who said that it was more common that an award was given out in two parts – one part at the start and the remainder when the report was delivered. This difference in the way granting agencies packaged the awards and released the funding seemed to complicate the meaning of “incremental funding” for the different institutions.


When probed whether awards with “incremental funding” were included, 8 institutions said “yes” and 10 institutions said “no.” Six institutions said that they did not know, would get back to us, or simply could not provide a definitive answer. Of the 10 that said “no,” 2 meant that they did not have anything with incremental funding – thus there was nothing to include.


The following paraphrased comments are representative of the 8 institutions which included incremental funding:


  • If it was awarded in 2009, even if there were ‘out years,’ it’s reported in this number.

  • For multi-year awards, the total on the award notice was reported, not just the year 1 funding.

  • Regardless of the actual cash situation, if I received an actual award letter and it said you’ve been granted this money; I included it.

  • If award document says there is a total of $100,000 with $50,000 up front and $50,000 after the report, we counted the full $100,000.

  • Yes, I included awards with incremental funding.

The following paraphrased comments are representative of the 10 institutions which did not include incremental funding:


  • I only counted the amount that was awarded for the particular budget period included in the parameters of my search.

  • No, not included here. If it’s incremental funding, which does not start until the next FY which you can’t draw down or use in that FY, we would include it in the next year.

  • We would get one year at a time because we don’t have authorization for expenditures, we just count what is released and approved for funding.

  • No, I don’t think we had any during that timeframe.

  • Each year’s amount would be reported separately for each year. … DoD would give us the money one year at a time; most federal agencies, we’d get it in a lump sum.

  • Incremental funding not included.

Question 21: How many of the R&D project awards reported in Question 20 involved interdisciplinary R&D and what was their dollar amount?

Data Availability

Of the 40 institutions, 19 provided a numeric answer to Question 21. Of the 21 institutions that did not provide a numeric answer, 10 reported a zero for interdisciplinary awards and 11 left the question blank.


Twelve institutions wrote in the comment box for Question 21. Eight of these 12 wrote a comment that explained that these data were not currently available. The other three of the 12 explained how they had arrived at the number that they reported. These three made the following types of comments:

  • “The number and dollar amount represents only awards made to formal interdisciplinary research centers. Awards made to a single discipline are often conducted by multiple disciplines, but we have no way to capture at the award level.”

  • “Estimate based on Questions 11 and 13. Number of proposals determined by ratio: interdisciplinary funds : total funds = interdisciplinary awards : total awards.” (See the sketch after this list.)

  • “Proposal data was derived from a report in our pre- and post-award grant and contract management system (COEUS). Those proposals related to R&D grants and contracts were first identified, then those proposals with PIs and co-PIs from more than one DRI Division were considered interdisciplinary.”
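
The ratio described in the second comment above amounts to scaling the total number of awards by the interdisciplinary share of expenditures. A minimal worked sketch, with invented figures, follows.

    # Invented figures illustrating the ratio-based estimate:
    # interdisciplinary funds : total funds = interdisciplinary awards : total awards
    interdisciplinary_funds = 4_000_000   # from Questions 11 and 13
    total_funds = 50_000_000              # total R&D expenditures
    total_awards = 600                    # from Question 20

    estimated_interdisciplinary_awards = round(
        total_awards * interdisciplinary_funds / total_funds
    )
    print(estimated_interdisciplinary_awards)  # 48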


Twenty-one institutions did not provide an answer to Question 21. On the questionnaire, these 21 institutions either provided a zero response or noted in the comment box that the data were not available.


In the debriefing interviews, these 21 institutions explained that (1) their response was truly a zero, meaning that they did not have any interdisciplinary research, (2) the data were unavailable, or (3) they had struggled with or did not understand the definition of interdisciplinary.


Institutions which reported no interdisciplinary research tended to fall below the median dollar value of total R&D expenditures. Paraphrased comments from these respondents include the following:


  • It’s something that we have not yet done …. For years, it was either one department or another – not a combination. It may be changing as we hire new faculty who are more interdisciplinary-trained.

  • We don’t have any crossing or as you call it interdisciplinary research projects going on.

  • We looked at the awards to determine [interdisciplinary] … We didn’t find any awards for 2009.

  • We didn’t find anything that fit … don’t have interdisciplinary research centers … assume not much, if any, interdisciplinary research.

  • The data are not available, but to the best of my knowledge, we don’t have any right now.

The largest group comprised institutions that said the data were not available: nine institutions reported that they did not have the requested information. These institutions made the following types of comments:


  • Data are not available. We don’t track interdisciplinary R&D. We don’t track project/awards that cross departments or disciplines.

  • Data not available. We haven’t come up with a way to track that kind of thing where you could give credit for the award or project. It’s 100% in a department.

  • It’s not in our data. We have a form the PI fills out at the proposal stage. Could we ask it? Yes. But we are trying to cut back, not increase, the reporting burden on PIs.

  • Data not available. There is no good definition that captures every possible case. The concept is hard to operationalize.

  • It is not a characteristic we track. There is no way for us to determine if anything we’re running is interdisciplinary.

  • Interdisciplinary takes place, but it’s not tracked. Pre-award could track it.

  • Unable to answer. The institution would have to review every grant and that was beyond what they could do for the survey.

  • We have nothing in place to track it.


Ease/Difficulty of Obtaining and Reporting Data

Participants from the 4 institutions that reported interdisciplinary awards based on research centers made the following (paraphrased) comments:


  • We counted only awards made to interdisciplinary research centers. We don’t have the ability to track interdisciplinary research that happens at the department level.

  • We counted only awards that were clearly for multiple investigators. Most of these were based in research centers or institutes, which tend to have interdisciplinary research.

  • We didn’t look at interdisciplinary research if it took place within the same department. … We looked at institutionally recognized research centers as the basis for interdisciplinary research.

  • We included awards that were made to our interdisciplinary research center … If we had more than one center, we might not have been able to answer.

The 8 respondents who said reporting entailed manual review made the following types of paraphrased comments:


  • One respondent said that they looked at the grants or contracts that had accounts for more than one division and counted them as interdisciplinary.

  • Another respondent said that the number of awards was small enough that they could look at a list of current grants and pick out the ones that met the definition. The respondent reported that she went through the grants one by one.

  • Another respondent said that his institution allocates percentages of the awards across multiple departments. He looked at the award received and saw where faculty crossed different schools.

Data Repository

Of the 19 institutions that provided a survey response for Question 21, only 3 said that they already had systems in place that coded and tracked interdisciplinary research. Respondents for these 3 institutions made the following comments when asked whether they tracked interdisciplinary research:


  • One institution said that it was coded at the pre-award level.

  • Another institution said that the proposal routing form captures this information.

  • Another institution said that they capture it across organizational units, but not at any other level.

Match Between Actual and Desired Response

When probed about how they answered Question 21, 4 of the 19 institutions that provided an answer said that they reported based on interdisciplinary research centers or research centers where most of the research was interdisciplinary in nature. Eight of the institutions said that it was a manual task to determine the number of interdisciplinary awards; they reviewed each award to determine whether it was interdisciplinary. One institution said that the number they reported was an estimate, and the respondent from another institution said that he conferred with a vice president who made the determination. Another institution said that they simply knew the number because they do not have a lot of awards. The other 4 institutions either made no comment or said that they needed to follow up to answer the probe.


Thirteen institutions were unable to provide a survey response and said that they were unsure of what was meant by the question, struggled with the definition, did not understand the definition, or did not know how to find out about interdisciplinary work at their institutions. Respondents from these institutions made the following types of paraphrased comments:


  • We struggled with what you meant by interdisciplinary and what to include. … We don’t have a definition of interdisciplinary because we have never captured it.

  • Another respondent said that they were unsure whether they had anything to report. This respondent could not say whether their zeros were true zeros or data unavailable.

  • Another respondent was unsure whether Question 21 applied to his institution. He read the definition of interdisciplinary and did not understand what it meant.

  • Another respondent was unsure whether they had any interdisciplinary and had to consult with the Director of Faculty Research.

  • Another respondent said that she would have to follow up and could not say anything about interdisciplinary research.





Question 22: Of the total R&D awards reported in Question 20, how many were collaborative awards with other academic institutions and what was their dollar amount?

Data Availability

Of the 40 institutions, 11 provided an answer to Question 22. Of the 29 institutions that did not provide an answer, 17 reported a zero for collaborative awards and 12 reported that the data were not available.


Ease/Difficulty of Obtaining and Reporting Data

This was not an easy question to answer, even for the institutions that provided a survey response. Two of the institutions that provided a numeric response said that they had difficulty understanding the question. These institutions made the following comments:


  • “I had a great deal of difficulty with the exclusion. The bullets seemed to be contradictory. We have a field in our database that tells us when an award either contains a subrecipient or if we pass through. We refer to those as collaborative projects.”

  • “This one was difficult to answer. The only place where we have collaborative awards was through NSF, so we ran a query on the number of collaborative proposals and awards. If there were other collaborative awards with institutions, those aren’t included here.”


Data Repository

When probed about whether they track collaborative awards, 24 institutions said that they do not; most did not have an attribute in their databases to indicate collaborative awards. The institutions that provided an answer to Question 22 either estimated based on their knowledge of the institution’s grants or were able to produce an answer by querying their databases.


Match Between Actual and Desired Response

Seven respondents expressed some confusion about the definition of “collaborative.” These respondents thought of sub/prime relationships (pass-through funding) as collaboration; some were very confused about what “collaboration” could be if not a sub/prime relationship. Twelve institutions reiterated the NSF definition of collaborative as two primes, two budgets, etc. They nevertheless commented that very few projects were awarded in this way. A few of the institutions identified the NSF grants that have “collaborative” in the title. NIH was also mentioned as the other agency that granted collaborative awards (two independent primes).


Other Issues

Thirteen respondents provided comments in the comment box for Question 22. Two of the 13 who commented were from institutions that provided a survey response to Question 22; these 2 respondents explained how they identified the collaborative awards that they reported. The remaining institutions simply wrote comments that the data were not available.



Question 23: General comments about hours to complete and change the survey

With only a few exceptions, completing and submitting the revised survey took pilot test respondents longer than it had in previous years. Only 5 institutions reported that the survey took the same amount of time to complete as last year; these institutions were across the spectrum in terms of total R&D. Among the institutions that reported burden increases, the total burden ranged from an hour or two up to 300 hours, and the average burden increase was roughly 25 to 40 hours. These figures were estimates for many institutions; respondents noted that they did not previously track their hours and could not recall precise burden hours for previous years.


The estimated response burden for the revised survey was 80 hours. Respondents were asked to report their effort on the survey separately as time spent “for new systems/programs” and time spent “for response preparation.”


Across the 40 pilot institutions, actual response times were lower than the estimate, based on Question 23 data, as follows:


  • Survey completion time: mean of 72 hours; median of 60 hours

  • Time for new systems/programs: mean of 23 hours; median of 8 hours

  • Time for response preparation: mean of 49 hours; median of 40 hours

Two institutions reported over 200 total hours (260 and 300) for survey response. These were outliers compared to the other pilot test institutions. Excluding burden figures for those institutions, the overall and component burden statistics were as follows.


  • Survey completion time: mean of 61 hours; median of 56 hours

  • Time for new systems/programs: mean of 18 hours; median of 7 hours

  • Time for response preparation: mean of 42 hours; median of 37 hours
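
As a rough illustration of how the overall and outlier-excluded figures above are related, the sketch below computes a mean and median for a list of reported burden hours and recomputes them after dropping responses above 200 hours. The hour values in the list are illustrative placeholders, not the actual pilot test responses.

    # Minimal sketch with placeholder hour values (not the pilot test data).
    from statistics import mean, median

    reported_hours = [12, 35, 40, 56, 60, 75, 90, 110, 260, 300]

    def summarize(hours):
        """Return the rounded mean and median of a list of burden hours."""
        return round(mean(hours)), round(median(hours))

    print("All responses: mean=%d, median=%d" % summarize(reported_hours))

    # Recompute after dropping responses above 200 hours, mirroring the
    # exclusion of the two outlier institutions described above.
    trimmed = [h for h in reported_hours if h <= 200]
    print("Excluding >200 hours: mean=%d, median=%d" % summarize(trimmed))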

Respondents cited three main reasons for the burden increase. The first, cited by about a dozen respondents, was the need to involve other offices in the response process. New questions about proposals and awards prompted many respondents to ask a pre-award office for information. Additionally, several pilot test institutions consulted with academic departments for specific information. The second reason, cited by about 15 respondents, was the need to develop new queries. Queries were developed either by the respondent or by consulting with IT/accounting software personnel and were used to pull data in new ways from financial systems. The third reason, described in some fashion by 39 respondents, was the general expansion of the survey to cover more topics.


While institutions with the largest amounts of total R&D generally reported the largest total burdens in comparison with last year’s survey, these institutions did not show a consistent pattern of increased burden. There were both large and small institutions with large and small increases in burden. Twelve institutions, both large and small in terms of total R&D, indicated that they expected that completing the survey next year would take less time. As the respondent from one large institution noted, “Our staff spent some time documenting their processes and procedures for each question, so next year we will have a road map for compiling data.”


Time-Consuming Questions

Respondents were asked which questions required the most time to complete; they named a wide variety of old and new questions. Every question except one (Question 15) was named by at least one institution. Question 6, the breakdown of R&D expenditures by basic research, applied research, and development, was cited by the most respondents (15). Many noted that they did not track their expenditures data this way and had to manually code their existing grants and contracts into one of the three categories. One institution with a large amount of R&D reported: “The extra information on grant purpose and type are in a different database that [another office] maintains.” The respondent had to connect that database with the one used in their office.


Each of Questions 17 through 22 was cited by 4 to 8 institutions as requiring more time than others. Questions 17 and 18 often required respondents to contact the human resources department about PI and postdoc headcounts. Questions 19-22 required information from a proposal/pre-award office or database.


Question 14 on types of R&D costs was cited by 11 respondents as being particularly time consuming. As the respondent from a smaller institution pointed out: “Previously we did not break it [total R&D expenditures] down for the cost elements.”


Questions 9 and 12 were also cited by 11 respondents as requiring more time than others. The breakdown of R&D by field within each source was difficult for one smaller institution, which noted, “We keep this by type of sponsor but not by field within type of sponsor.” Another respondent said of these two questions, “First you’re cutting it up by federal and nonfederal and then by all the different fields within those categories. Then you’re just making sure it balances; it’s a tedious thing that took a lot of time.”


Ten institutions cited various aspects of Question 1 as time-consuming. A respondent from one of these institutions specified reporting nonprofits, 2 respondents specified institutionally financed organized research (Q1e1), and 3 specified cost sharing (Q1e2) as confusing and time-consuming. Four respondents said that Question 1 overall was time-consuming.


Of the other items, Questions 2, 3, 4, 5, 7, 8, 10, 11, 13, and 16 were named by 3 or fewer institutions as time-consuming. Generally their reasons were related to institutional idiosyncrasies.


Offices Involved in Responding to Survey

Part B of Question 23 provided spaces for respondents to list the institutional offices involved in completing the response and the number of hours the offices spent for new systems/programs and for response preparation. During the debriefing interviews, respondents were asked to describe the assistance required from these other offices, which questions these offices assisted with, and how easy or difficult it was to work with the other offices to complete the survey.


Number and type of offices involved in response. The table below shows the number of offices that were reported as assisting with the survey.


Number of Offices Involved in Response    Number of Institutions
1 only                                    12
2                                         14
3                                         10
4                                          4


Across the 40 institutions, respondents’ own offices represented a mix of financial and research responsibilities, with names such as Grants and Contracts, Research Administration, Office of Sponsored Programs, Financial Reporting, etc. Respondents listed the following types of assisting offices.


  • Research or grants, such as Sponsored Programs office

  • Business/Financial, such as Accounting, Budget, or Controller office

  • Information Technology office

  • Research foundation or institute

  • University administration, such as Dean or Provost office

  • Specific schools or colleges within the institution, such as Engineering or Arts and Sciences

One of the institutions counted in the row for two offices listed “Academic departments (20)” rather than listing the departments individually. Two other institutions also indicated that departments provided assistance, e.g., “Other departments” and “departments at [institution acronym].”


Other Offices Assisting with Response

The table above shows that 28 of the 40 institutions involved two or more offices in the response. These respondents needed assistance from specific types of offices to obtain and report data for one or more of the new and revised survey questions. The specific questions they needed assistance with varied based on their own office affiliation within the institution and, therefore, the types of data to which they had ready access. Several respondents mentioned that staff representing an administrative office or sponsored programs coordinated and verified responses or the entire survey before submission.


Respondents mentioned needing assistance from the following types of offices for the new questions listed.


  • Proposals and awards (Questions 19 - 22):
      • Sponsored Programs/Projects Offices, offices with pre-award responsibilities
      • Research (e.g., Division of Research and Development Administration)
      • IT personnel within a sponsored programs office
      • Grants and Contracts Office
      • Financial Systems
      • Information technology

  • Personnel (Questions 17 and 18):
      • Grants and Contracts
      • Sponsored Programs/Accounting
      • Information technology

  • Interdisciplinary research (Questions 11 and 13):
      • Research office

  • Grants and Contracts (Question 5):
      • Division Administration Office
      • Grants Office
      • Research office

  • Types of costs (Question 14):
      • Financial Systems

In some cases, respondents needed help with one or more existing questions that were modified for the HERD survey. These respondents mentioned the assistance of the following offices in calculating and reporting data such as expenditures of certain types and sources of funding.


  • Equipment and capitalization thresholds: finance group within sponsored projects, controller

  • Internal sources of funds: pre-award office

  • Basic Research, Applied Research, Development: academic departments, research office or research services; provost’s office

Working with Other Offices

All respondents who commented on the process of working with other offices to complete and submit the response had positive reactions. Some said that the process worked smoothly because they have a good rapport with the other offices, due to collaborating on this survey or other efforts in the past or on a continuing basis. Others cited working within the same physical facility or working under the same boss as the reason for cooperation.


Interpretation of “New Systems/Programs” Heading

Twenty-two of the institutions commented on their interpretation of the headings of Part B of Question 23. There was some variation in the way institutions interpreted “new systems/programs.”

All respondents interpreted the term to mean the creation of something new to produce the survey data requested. However, some respondents interpreted new systems/programs to mean writing new reports or setting up new spreadsheets. Others thought that new systems/programs meant altering or enhancing existing computer programs, something beyond the ability of the accounting staff that would require a programmer or an IT specialist.


Paraphrases of the respondents’ comments include the following.


  • It’s not just new programs; it was a modification of an existing report. It was an adaptation or writing a new script for using an existing tool. A brand new system would not be necessary [to produce data for the new questions].

  • In general, we just linked to a database that already existed. We didn’t really create anything.

  • What was reported was an estimate of the time it would take a programmer to set things up – to create the necessary queries for the report.

  • Our level of effort reflects report development. We are not going to have a new system for this.

  • We didn’t really set up any new programs … for us, it is a lot of Excel spreadsheet compiling.

  • We reported a zero because we did not implement any new systems or programs. The only thing we will be doing is updating our database.

  • We had to add some fields to the report, but the fields were already in the main database.

  • We didn’t do anything to a computer program; we used a computing tool for our systems/programs. These are ad hoc tools that we can use without the assistance of a programmer.

  • Anything that I did was linked to existing data sources. We didn’t really create anything.

  • We did not really develop any new systems or programs. We did not change any computer operations on this. We did it pretty much by hand.

  • We set up new spreadsheets, but didn’t do any programming. We might do that in the future.

  • [I] don’t think she [other staff person] spent any time on new technology. She used the existing data warehouse and Excel, so no extensive programming was needed.

Several respondents expressed a lack of clarity about what “new systems/programs” meant. These respondents made the following types of remarks:


  • [the question] was confusing. I thought it meant time spent pulling the data needed to complete the new questions.

  • [I have] no idea what was meant. They have grant information in Excel. No new computer programs.

  • I’m not sure what you meant by new systems. I interpreted it as what I had to do to answer the new questions – did I have to write new queries, etc.

Interpretation of “Response Preparation” Heading

Seventeen respondents commented on the term “response preparation.” These institutions demonstrated a uniform interpretation of response preparation. Respondents said that response preparation meant everything from compiling and analyzing the data to entering the data and submitting the form. The institutions were able to draw a distinction between time spent on new systems and programs vs. response preparation. Paraphrases of respondent comments follow:


  • Organizing the data, summarizing the data, completing the questionnaire. We included all activity related to completing the survey, from review of questions, drawing data, more review, etc.

  • Generating the numbers and making the analysis. The things we included there is the time tracking down the cost sharing, subcontract information, and just doing data integrity on the queries themselves. Going back and validating the fields to make sure.

  • There isn’t too much to interpret here. You have to collect your data from different offices and put it together. Here we have a college of engineering and they track many of their own applications …

  • Aggregating information from the queries. This includes data entry on the web.

  • I counted opening the survey, reading the instructions. Not so much filling it out, but the preparation to fill it out.

  • Preparing information to answer the survey. We included everything from the start of compiling data to when it was submitted. We excluded the creation of database queries for new questions.

  • For me to gather the information and finish the report. This is everything from when I first opened the survey to when it was submitted.

  • Anything involved with determining the answer for the survey: research, preparation, gathering information.

  • Using the accounting system to pull the financial data and prepare those data … also working with the departments to make sure they understood the definitions.

  • Reviewing the survey, determining staff involvement, identifying new questions, running queries and reports, and entering data on the web.

  • We reported the amount of time it took to get the information together and answer the questions.



Other Issues Related to the Questionnaire

The last part of the protocol asked respondents about other issues related to the questionnaire. A few respondents commented on issues that they felt were important and wanted to bring to NSF’s attention. Rounding errors occurred throughout the entire questionnaire, and this was considered a problem. The survey instrument required that entered numbers match exactly across related fields, which often required a great deal of tweaking by respondents to make related fields match. Respondents felt that this generated a good deal of unnecessary work before the instrument could be submitted.
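
A minimal sketch of this rounding problem is shown below, using the 600 + 600 example that appears in the paraphrased comments that follow; it is an illustration only, not the survey application’s actual edit-check logic.

    # Illustration of the rounding mismatch (dollars reported in thousands).
    # This is a sketch, not the survey application's actual edit-check code.
    components = [600, 600]        # actual dollars for two line items
    total = sum(components)        # 1,200 dollars

    reported_components = [round(c / 1000) for c in components]  # [1, 1]
    reported_total = round(total / 1000)                          # 1

    mismatch = sum(reported_components) - reported_total          # 2 - 1 = 1
    print("Sum of rounded components (thousands):", sum(reported_components))
    print("Rounded total (thousands):", reported_total)
    print("Mismatch (thousands):", mismatch)

    # A tolerance of plus or minus 1 thousand, as some respondents suggested,
    # would accept this response instead of forcing manual adjustment.
    TOLERANCE = 1
    print("Accepted with tolerance:", abs(mismatch) <= TOLERANCE)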


Respondents also requested that they be given significant lead time to prepare their data systems to produce the new data. Respondents were very interested in knowing which new questions would become permanent and requested this information in advance of receiving the revised finalized questionnaires.


Examples of these paraphrased comments are shown below:


  • Rounding errors occurred all through this while entering the data. Occasionally when you enter data you get a question where it doesn’t match because you don’t allow for rounding errors. It’s just one more step to go through. If there were some tolerance, you wouldn’t have to arbitrarily choose which one to round up or down. The issue is 600 and 600 is 1200, both round up. But the total rounds down. It [would be helpful] to have us type the exact amount and then have the program do the rounding so you know it is right.

  • I would want to know if this is going to become the new instrument and it’s not going to change further. Or if there are going to be new questions added, so I know [what needs to be developed] so I can have it later.


NSF Help Email Address

The majority of respondents said that a generic help email address was fine with them. Most respondents stressed that the important factor was responsiveness. They said that it made little difference where the email went or to whom it was addressed; what mattered was that they received a timely response that addressed their problems or queries. Respondents made the following types of comments regarding the generic email address:


  • Fine to have a generic email address.

  • Fine to have anonymous, as long as response is received.

  • Fine to have an anonymous address; we know you will get us a response.

  • Generic is better, since people come and go. It can be confusing if someone leaves the helpdesk.

  • Generic is fine. It is good to have a contact name for NSF.

  • I think generic worked very well.

  • I guess generic is fine as long as I get a response.

  • Generic is fine as long as someone reads the message. I don’t have a preference either way.


Reactions to Having the Instructions as Part of the Questions

Of the 40 institutions, 30 received the probe regarding how they liked having the instructions integrated with the question, as opposed to having the instructions at the beginning or end of the instrument. Twenty-seven of the 30 who received this probe said that they thought the integration of the instructions with the question was a clear improvement. Three of the 30 said that they had no preference, either because of the way they complete the form or because they were new and had nothing with which to compare the revised survey.


Respondents who liked the integration of the instructions said that this eliminated the difficulty and confusion that jumping back and forth (from question to instructions) entailed. Paraphrases of these comments are shown below.


  • Huge plus. It was more efficient to make sure that the data were accurate for the question. Didn’t have to flip back and forth.

  • People paid closer attention to the instructions because they were right at the question. I got more questions from others that were involved in completing the survey than in previous years.

  • I like that a lot because I don’t have to flip back and forth. It’s all in one spot.

  • Good to have as part of the questions – don’t have to flip back and forth.

  • Very helpful and easy. Didn’t have to refer back.

  • Better at each question.

  • For me it’s better to have the instructions with the question. That way I’m not jumping back and forth and I can read the instructions right there with the questions.

  • Better to have the instructions with each question. More user-friendly.

  • Helpful to have instructions at the question level. Didn’t have to go back to the instructions page.

  • Very helpful. The instructions were right there. Didn’t have to flip around to find the instructions.

  • I thought it was helpful. Having it under each section was helpful so you didn’t have to keep going back and forth through each section.

  • Made it easier; I liked it.

  • I liked having the directions there while I was going through it. You’re more likely to be compliant when it’s staring you in the face. I like having the examples of the disciplines there too because I didn’t have to go back on the web site.


Preference for Two or Three Columns: (1) Federal and Total or (2) Federal, Nonfederal, and Total

Twenty-five institutions commented on whether they preferred the three-column format that appears in the revised instrument or the two-column format that appeared in the previous instrument. Ten of the institutions said that they either did not have a preference or did not think it made any difference. Two institutions expressed a preference for the format in the previous instrument; one of these said that the three columns entailed a greater data entry effort, so this respondent preferred the two-column format for its perceived easier data entry. Thirteen institutions expressed a preference for the three-column format in the revised instrument. These institutions cited reasons such as the three columns making it easier to check the data; the immediacy of the checking made respondents feel more confident about the data. Paraphrases for some of the comments are shown below.


Preference for Two Columns

  • More input for three columns; so I prefer just two columns.

  • I would fill in total and then federal. It was one more calculation that I had to split them out … It was one more step I had to do.

No Preference Between Two or Three Columns

  • I already had the information for nonfederal, so it was easy to report the nonfederal.

  • No preference.

  • I don’t really think it makes much difference.

  • Our reports provided the data in federal and nonfederal anyway – so it did not make much difference for me.

Preference for Three Columns

  • I like having all three. It is good to have the breakout of federal and nonfederal for each question.

  • I like having all three columns. It’s easy to check; it makes me feel better about the data.

  • I prefer to have all three – good to have the full detail available.

  • I prefer the three columns. I prefer to have the computation completely automated; that way I can tie it back to the financial documents I’m using.

  • I think it helps to have the federal and nonfederal columns here. Rather than doing the math in your head, you can figure it out there.

  • I like the new approach [the three columns].

  • I like having all three columns. It is good to have the full breakout.

  • It works well.

  • It is much easier because then you don’t have to recheck your federal stuff.

  • Three columns is useful. It is good to have all the components that make up the total.


Reaction to Having the Totals at the Bottom of Each Question Instead of the Top

Of the 40 institutions, 25 commented on their preference for having the totals at the top of the questions or at the bottom. Five institutions expressed a very weak preference or no preference for having the total line at the bottom of the question; they said that they had no preference or no strong preference, or that it simply did not make that much difference. Twenty institutions expressed a strong or significant preference for having the total line at the bottom; they said that it was more intuitive, made more sense, was more like what they were used to, and was more logical. Selected comments are paraphrased below.


A Weak Preference or No Preference for the Total Line at the Bottom of the Question

  • Not a strong preference, but usually, my totals are at the end.

  • No preference.

  • To me it doesn’t make that much difference. I’m probably accustomed to having the total line at the bottom.

Strong or Significant Preference for the Total Line at the Bottom of the Question

  • It makes sense to have it at the bottom.

  • I’m an accountant, so having it at the bottom makes most sense.

  • I prefer totals at the bottom -- more intuitive.

  • I like it at the bottom because that’s where I expect the totals to be.

  • It makes more sense to have the totals on the bottom.

  • It seems reasonable [to have them on the bottom] because that is what I would do in a spreadsheet.

  • I like the totals at the bottom because it makes more sense to the reader if the total is at the bottom.

  • At the bottom is better.

  • I think it makes sense; it follows what you know about math. Visually, it makes more sense.

  • I like the totals at the bottom better … to see the total after the numbers have been entered.

  • I like the totals on the bottom better. With it on the top, it’s like working backwards.

  • I think it is more intuitive to have the totals at the bottom.

  • I’m an accountant; I like totals being at the bottom. It appears more logical.


Reactions to Removal of the Line Numbers

Thirty-one institutions commented on the removal of the line numbers that were present in the previous instrument. Two institutions said that this year was the first time they had completed the survey, so they could not comment on the line numbers in the previous survey. Only 1 institution said that they liked the line numbers; this institution nonetheless said that it was not a problem that the line numbers were gone. Three institutions said that they had used the line numbers in the past, but that they did not miss them this year and that it was fine without the line numbers. Twenty-five institutions said they either had not noticed the line numbers were missing or did not need the line numbers. All 31 respondents who commented had no issue with the removal of the line numbers.


The following comments were made by the respondents who had used the line numbers in the past:


  • It’s cleaner without the line numbers. I used the line numbers in the past, but that was a manual process. I didn’t need the line numbers this year.

  • I liked the line numbers because it took you to the specific location on the page, but it isn’t a big deal not to have them.

  • I used to identify some things by the line numbers, but as I got into this, I didn’t miss the line numbers.

The following comments were made by the respondents who had not used the line numbers in the past and did not miss the line numbers:


  • Fine to omit line numbers.

  • I don’t think it made a big difference to us.

  • I did not use the line number for anything … I’d prefer not to have it because it is just another number.

  • I didn’t notice it was missing … fine to get rid of line numbers.

  • I thought the line numbers were confusing, so I like that they are gone.

  • We did not use the line number as a reference; we prefer not to have it.

  • We didn’t miss them.

  • I think it makes it a lot clearer without the line numbers.

  • I don’t remember the line numbers; so no difference.

  • I didn’t notice it.

  • I didn’t use them before, so I didn’t miss them.

  • I didn’t use them before.

  • I don’t even remember them.

  • I didn’t use them in previous years and did not miss them.

  • I didn’t notice the difference – can’t remember the line numbers.


Reactions to Other Formatting Changes

Of the 40 institutions, 12 commented on other formatting changes. The probe on other formatting changes was very open, so institutions could comment on anything that felt relevant to them. This gave rise to a very diverse set of comments.


Nine of the 12 institutions that commented made a positive comment about the new survey. These comments focused on something that the institution considered an improvement. Some comments were very general and others were more specific. In general, these comments described how the overall formatting had been improved; how the survey was easy to navigate; how the organization was improved and how the survey flowed better. Paraphrases for a selection of these comments appear below:


  • The overall formatting is fine.

  • I like how some of the columns tie; I can check those things as I’m going through.

  • Overall, I liked the new formatting better.

  • Overall, the survey is easier to navigate – to get from page to page, to get back to the previous page. The old survey was tedious.

  • I had no difficulty navigating at all.

  • Having a lot of examples helps.

  • Much easier to follow in general.

  • To me, it seems like a cleaner survey in terms of being easier to read, easier to get through. I think it is well-organized and it fits well together. The categories are easy, the lines are easy to figure out.


What Respondents Liked Least About the Survey

Thirty-four of the respondents provided a response when probed about what they liked least about the survey. Of the 34, 5 said that they could not think of anything that they liked least. One said there was nothing he/she liked least and then cited the overall burden of the survey as an issue.


The 29 respondents who mentioned something that they liked least usually focused on the burden. Listed below are paraphrased selections from that set of respondents.

  • Questions that were difficult to gather information for: personnel, and breaking out unrecovered indirect cost.

  • The categories for R&D fields changed. This required remapping of the fields.

  • There were questions that we could not provide an answer for.

  • There were questions that we did not see the significance of: the number of PIs, postdocs, etc.

  • There were questions that we could not answer. It is a very valuable survey, but takes a lot of time and effort.

  • I did not like Question 10. It was difficult to get at. Also the one about basic, applied, and development was also difficult.

  • We didn’t like the additional questions and the additional burden. We don’t like to guess at information that isn’t available.

  • Question 17 – research center questions

  • Need to make changes to our database to make the information easier to get at.

  • It was really time-consuming.

  • We can adjust next year, but for this year, it was the most difficult.

  • Some of the new information was not available.

  • Splitting out of nonprofit and business was difficult. The headcounts will take some time to obtain.

  • The questions about the awards and proposals because those are outside of our office.

  • The level of detail involved.

  • The mega tables on Question 9 and 12. They’re chunked rather than being on a spreadsheet.


What Respondents Liked Most About the Survey

Thirty-three of the 40 respondents commented on what they liked most about the survey. Their comments fell mainly into three categories: (1) 8 respondents named survey content, (2) 17 respondents named features of the web instrument, and (3) 7 respondents made idiosyncratic comments.


The 8 respondents who named survey content as what they liked most focused on the benefit of including non-S&E fields, the benefit of including clinical trials, the clearer definitions, and the instructions provided at the question level. A sample of paraphrased comments is provided below:


  • The inclusion of the non-S&E is positive.

  • I like that there are more questions on clinical trials because I think that is something we need to identify more and this gives me something to push for at my institution.

  • I appreciated that the survey was more thorough – helped get a better sense of our data.

  • A lot of the definitions were clearer, so it was easy to go through and get the information.

  • The instructions at the question level [were an improvement].

Paraphrases of a sample of the comments from the 17 respondents who named the web instrument as the thing that they liked best are provided below.


  • The web was intuitive to navigate.

  • The survey was well-ordered, intuitive, most questions had clear instructions.

  • The formatting and layout were good. It was easier to navigate and easier to tell where you were.

  • The web was easier than the old version.

  • Filling the survey in on-line was pretty simple.

  • Navigation was much easier.

  • The web site itself was user-friendly – easy to use.

  • Very user-friendly and in places it did calculations for you.

  • I liked the question list.

  • I liked the order of the questions – seemed to make sense.

  • I loved the web! It gave better explanations of what you needed to do and what you were looking for.

  • The web instrument was easy to use – submitting on-line.

Several of the 7 respondents who made idiosyncratic comments said that completing the survey helped them identify shortcomings in their own systems that they will try to address. A few said that completing the survey was good for their institution because they looked at issues that they had never considered before. A sample of paraphrased comments is provided below.


  • There was not anything that I liked “most.”

  • That it did not take 80 hours like estimated.

  • The information generated for the survey can be used internally.

  • Tracking by discipline is useful and the institution will work towards this in the future.

  • I don’t have anything to compare it with.


Important Changes to Make for Next Year

Thirty-four respondents provided comments on the important changes that should be made to next year’s survey. Of the 34 respondents, 10 commented that improved definitions and clarifications would be a change that they would recommend. The terms that respondents said needed improved definitions or clarification were:


  • Interdisciplinary research

  • Clinical trials

  • Medical schools

  • Proposal and awards

  • Multi-year awards

  • Collaboration and subawards

  • Basic, applied, and development

  • Institutional funds


Respondents also requested that tolerances be added to eliminate rounding error messages. A few respondents said that the survey was too long and detailed and they would welcome a shortened survey. Other respondents said that they would need more warning and longer lead times to produce data for the new items. One respondent requested that the new items be optional to allow institutions time to adjust their databases. Another respondent requested an “information not available” response option. Paraphrases of the respondents’ comments follow.


Improved Definitions and Additional Clarification

  • More clarification on how to report medical school expenditures. Our institution is a medical school only, but we got the impression that we should break out the medical training department separately as the “medical school” on Question 3.

  • More clarification on what you want for multi-year awards. And, the definition of collaborative to include subawards and also interdisciplinary.

  • Put additional information on Questions 11 and 21 to clarify what you’re asking for.

  • More explanation on how to report institutional funds.

  • Clarify how to classify student research – useful for smaller institutions.

More Lead Time to Prepare for the Survey and More Time to Complete the Survey

  • There are some difficult questions – time needed to complete.

  • What is the timing of this going to be next year? … I don’t know that the timing is good for the new survey.

  • The earlier an institution knows about the information they’re going to have to capture, the better. … Give something to them now that they can use to map their system and take pressure off to complete the survey accurately.

  • When do you send out the survey? … Can you give us more time to complete?

  • The sooner I can get the questions out to the campuses, the better off we are going to be. I know everyone is extremely busy …

Survey is Too Long or Too Burdensome

  • [cut some questions] because some of the new items are time consuming and difficult.

  • If there is any way to shorten it … If it were a little shorter, it would be less daunting.

  • The survey could be shorter – most questions did not apply.

  • The questions where the data are not available could be dropped.

Miscellaneous Comments

  • If there were new questions in the past, you made them optional.

  • If Question 6 remains, it would be helpful to have more education with the [research] community … more discussions with other conferences and organizations would be helpful.

  • Make the Excel option and the Survey Resources page more prominent.

  • Tolerance for rounding errors.

  • Offer the response option of “data not available.”


Other Closing Comments

The comments made in closing were mainly reiterations of comments made in previous sections; all were either reported earlier in this report or were of a trivial nature that would not affect question evaluation. The closing comments reported here are shown simply to give the flavor of how the interviews closed.


In general, some respondents said that they appreciated being part of the pilot test. They understood the importance of the feedback they provided and understood that their feedback would help shape future surveys. A few respondents commented that the revised survey generated internal discussions that proved beneficial for internal operations. A few respondents reiterated the issues that had caused them the most grief. A sample of these paraphrased comments appears below.


  • We enjoyed being a part of the test – appreciated the opportunity to give feedback.

  • Participating in the pilot test was not as painful as I thought it was going to be. The questions were pretty much what we expected based on the conversations of a year and a half ago.

  • We had a lot of discussions about what we should report as interdisciplinary.

  • The online survey was perfect! I had no error, no snags. I was pleasantly surprised that it went so well.

  • I’d much rather be in the test so I can use the second half of this year to put things in place so we can be ready for the survey next year.

  • Overall, it was a tough process and there was a lot of work involved. But if it was helpful, then OK.

  • It was more time than we put in last year, but we did not change the way we do things.

  • We’d like to have the final survey as soon as possible so we can change our database.

  • [the revised survey] generated a lot of conversation about how to track the information in the future. Some changes are underway. The survey increased awareness of the need to update tracking procedure.




Online Application

General Reactions to Using the Web Application

During the debriefing, 32 respondents commented on the general ease or difficulty of using the web application to submit their data. Twenty-four respondents made some type of comment about the ease of use, in words such as: “well put together and easy to use,” “smooth experience,” “easier to navigate through compared to the other survey,” “overall system worked well,” etc. One respondent with a different viewpoint said: “To be honest it looked a little blander than the past few years,” and “Visually it’s much cleaner, but beyond that it’s not very different as far as I’m concerned for what had to be input on our end.” Eight respondents who used the web application to review their uploaded Excel version of the survey also said that they thought it was an easy process.


Respondents mentioned that the following specific components or features of the application were helpful.


  • Error messages

  • “What’s New” section at the beginning

  • Automatic totaling

  • Save and exit to return later

  • Comparison of totals among linked questions

  • Instructions included with questions

Problems that respondents reported having with the application were:

  • Errors in auto-totaling in Questions 9 and 16 (3 respondents);

  • Being prompted to enter data on some screens for which respondent had no data (e.g., Question 9); and

  • Printing the survey (right side was cut off in portrait orientation; respondent changed to landscape in order to print full width of response).

Reactions to Question List

In 31 of the debriefing interviews, respondents were asked to react to the layout and functionality of the question list. Thirty of these 31 respondents made positive comments about the way the status indicators informed them of individual question completion status. For example, one respondent said, “I like it because it’s a table of contents”; another commented on good use of color. The negative comment came from the same respondent mentioned in the previous section, who said: “This is what I was alluding to before about how it looks bland compared to prior years.”


In addition, many respondents mentioned liking the layout and organization of the page and the ease of navigating from it to any specific question. Even a respondent who said that a problem with auto-totaling required “jumping around while trying to determine what the problem was” said that the Question List made it easy to move around. One respondent who uploaded the Excel version commented on the ease of using the Question List to “validate” the data. The respondent liked making changes directly into the online version, rather than updating the Excel version and uploading it again.


Reactions to Question Layout

Twenty-two of the respondents commented on the way the questions appeared onscreen. All liked the question presentation. Six respondents specifically mentioned the ease of working with the same layout of web and PDF versions. Two respondents made suggestions for revision, one regarding labeling and the other regarding question order:


  • “It would be helpful on Question 9 if at the top of each chart you said ‘FEDERAL’ in capital letters.” The respondent initially thought Question 9 asked for all sources and did not understand why the total did not add up to the grand total for Question 1.

  • “We initially started with Question 1 … but then I went through the whole survey to realize I kind of had to work backward. I don’t know if you can put those summary questions at the end, but it seemed like you were working in reverse.” The respondent would have preferred to respond to Questions 9 and 12 (detail level) before answering Question 1.


Reactions to Navigation

Approximately 30 respondents commented on the navigation methods provided. Almost all respondents were satisfied with the ease of navigation. Eleven explicitly described moving in sequential order through the questions; they relied mainly on using the “Save and Go to Next Question” button. Others described using the Question List screen to choose a question to complete and to check on completion status of individual questions. Several respondents mentioned making use of and liking the capability to save and exit the survey and return later.


Four respondents discussed the following navigation issues or suggestions.


  • Two stated that they found it difficult to move between multiple-page questions or non-sequential questions. One did not like the structure of the larger matrix questions (9, 12, and 16) because it was difficult to “flip between screens” and difficult to resolve rounding problems. The other respondent stated: “It would have been nice to pick which question you could go to,” e.g., from Question 9 directly to Question 15 via a list of questions [always visible, as opposed to the list on the Question List page]. The respondent acknowledged: “It’s probably designed the way it needs to be designed, though.”

  • Another respondent initially said that it would be useful to have a button to return to the previous question, but when probed about that, said: “It’s not that big of a deal to go back to the main screen and click where you want to go to.”

  • One respondent expressed caution about the navigation options. The respondent reported using the “Save” button before using the “Save and Go to Next Question” button, “because I was not sure it would actually save when it went to the next question.”


Reactions to Data Checks and Warnings

The process of working through data checks and warnings was discussed with approximately three-quarters of the respondents. Twenty-five respondents described the role of the web application in working through data checks and warnings. Most of these respondents talked in positive terms about how the application helped them to catch data entry errors and/or rounding issues. Several specifically mentioned that they liked having a choice: correcting as they moved from question to question or waiting until a later point (e.g., before reviewing/submitting).


Two respondents explained that they had trouble figuring out how to indicate when a page was complete – when there were no data to report. This situation arose on some of the pages of a multi-page question, such as Question 9. One of these respondents said that explicit instructions were needed for how to handle this situation.


One respondent reported receiving data checks for having some blank questions. She initially expected to have to fill in zeros for questions without data, and it took her a few tries to figure out how to get to the “data not available” indicators at the review stage. After encountering the message at the review stage, she indicated that data were not available. The respondent suggested providing an indicator with each question to allow respondents to note when data are not available.


Twenty-six respondents were asked specifically about rounding of numbers to thousands and any resultant experiences with data checks. Seven of the 26 respondents said that they did not recall having any data checks for rounding reasons or said they had no problems with rounding error, e.g., due to working these out on the paper version before entering data to the web application. One respondent said she was fine with reporting dollars in thousands because reporting actual dollars might not eliminate rounding issues.


Eighteen of the 26 respondents recalled having had rounding errors. All of these respondents said that it was easy to fix the rounding errors, but several complained a bit about the time it took or expressed frustration about the steps required to correct rounding problems. Two participants suggested changing the underlying survey programming to allow some tolerance for rounding error. One said “plus or minus one thousand dollars isn’t significant;” the other said “one or two thousand dollars would be sufficient [to allow for tolerance].”


Feedback About Comment Boxes

In 27 of the interviews, respondents were asked about their use or non-use of the comment boxes when completing the web application. Sixteen of the respondents indicated that they used a comment box one or more times; one of these entered comments into the Excel form. Examples of circumstances given for supplying comments included:


  • “…for questions that had issues”

  • For questions where they felt additional information was needed (such as why data were not available)

  • For “our leadership, they wanted to see the survey and kind of understand where our numbers were coming from and how we were determining that” -- context in reviewing the survey

  • “When there is a problem or question that I have to explain. This year we just didn’t have the ability to break the data down the way the survey wanted it. Also if there are large increases or decreases from year to year they ask us to provide an explanation of why in the comment box.”

  • It was useful to the respondent (as well as to NSF) to have context for a response

  • Comment boxes were only used if she could not answer a question

The respondents who did not use the comment boxes said that they did not see the need to comment. One respondent cited individual style -- she explained her reason as: “I’m just not a comment-y person. I don’t typically use them.” One respondent said that he normally does not comment unless asked to, and went on to explain that on the old survey, he got prompted to explain things like an increase over a prior year for a specific question.

Some of the respondents were asked for their opinion about the suggestion to add comment boxes to the PDF version of the survey. They were evenly divided; half said it would be okay or would not hurt to do so, while the other half said comment boxes were not needed on the PDF.


Suggestions for Making the Response Process Simpler or More Efficient

When asked if there were ways to make the response preparation and submission process more efficient, 17 of 23 respondents replied no, or some variation of that, such as: “It didn’t take much time to do the web entry,” “had all of the web tools that I needed,” “I think it was pretty easy to navigate through,” and “Once I actually got into putting it into the computer it took me 5 – 10 minutes; I don’t know how it could be any faster.”


Respondents did make a few suggestions for easing the process:


  • Make links back to a non-matching total for the questions that are supposed to match – a quick way to resolve a data check and move back and forth between linked questions.

  • Combine two questions: “It would be nice if Question 11 and Question 13 were together.”

  • Provide a set of FAQs on the main page; “more definitions – like interdisciplinary, multidisciplinary; there wasn’t really a place where you could search for definitions.”

  • “Streamline” the questions with big tables, to make them easier to navigate through and remain oriented while entering data.


Use of and Reaction to Excel Data Entry Option

The Excel data entry option was discussed in 34 of the debriefing interviews. Eight of these respondents uploaded an Excel datafile, then worked through the data check process in order to submit their survey. (Note that a ninth institution submitted a response via the Excel upload alternative, but the debriefing interview did not cover the respondent’s reaction to using Excel.)


The 8 Excel respondents thought that using this alternative was an easy way to prepare their response. One said that he is just more comfortable using Excel. Another respondent tried to use the Excel version but had a problem with auto-totaling on Questions 9 and 12, so switched to completing the online version instead and was satisfied with that experience.


Of the majority of respondents who used the online version instead of Excel, 14 explicitly said during the interview that they noticed the option but chose to complete the online version. These respondents said they found it easy to use the online version, had already compiled their data into the hard copy PDF, or simply continued to use the approach that had worked for them in the past. Several said they would consider using Excel in the future. Four respondents mentioned during their interviews that they did not notice the Excel option.


Five of the 8 respondents who discussed their Excel submission were asked for their opinion about the grouping and placement of the comment boxes in separate tabs. Two of them used the boxes within the Excel file. One said it was better to have them in a separate tab so as not to interfere with data entry, and the other said: “I guess it did get a little confusing flipping back and forth. It did help that they were grouped together with questions related to each other. I don’t know if there’s any way to keep the comment box underneath the question like on the web? Having the comments printout next to the question could be useful.” One respondent did not use the comment boxes while filling the Excel file, but then did while completing the review process. Two others saw the placement of the comment boxes, but did not use them. One ran out of time and the other did not feel a need to comment.



Reactions to and Use of Survey Resources Page

During 30 of the debriefings, respondents were asked to click the link to view the Survey Resources page. Eleven respondents said they either did not see the page while working on the survey, or could not recall whether they saw it. While viewing it in the debriefing, some of these respondents noted that they might have made use of some of the resources such as the trend report. Others said that while it was nice to have these options, they did not need them to complete the survey.


Ten respondents said they used the page and commented in some fashion about finding the collection of resources helpful. Five of these respondents said they printed their 2009 data and/or the prior years’ data from the page. Six respondents mentioned using the page for other resources, such as the definitions and instructions page. One mentioned noting the link to CIP codes but did not need it because there were no new grants for which to obtain numbers. One respondent used the page a few times to “pull out the quick links rather than pull out the hard copy.” Another respondent commented that it was good to have a location for finding instructions, links to definitions, and printing data, and called it a “nice little shopping area.”



Reactions to and Use of Web Survey Features Page

Of 26 respondents asked whether they saw the Web Survey Features page, four said they looked at it online. Two of these four respondents said that although they looked at it, they did not really use it, either because it was already in the PDF or they wanted to see if they could do the survey on their own.


Twenty-two respondents said they had not looked at the page online. These respondents felt they did not need the information because they found the survey easy to complete and already had the instructions if they had printed a hard copy. Many of the respondents said that the instructions were not needed. They gave reasons such as: “the entire survey is to me pretty self-explanatory,” “able to use the web without issue,” “Overall the survey was very easy,” “Easy to understand different icons, ready to submit screens, etc.”


