National Science Foundation
FY 2013 Survey of Science and Engineering Research Facilities
Part 2: Computing and Networking Capacity
(for research and instructional activities)
Who should be contacted if clarification of Part 2 networking or computing answers is necessary?
Contact 1 Contact 2
Name: __________________________________ __________________________________
Title/position: __________________________________ __________________________________
Telephone: __________________________________ __________________________________
Email address: __________________________________ __________________________________
Please complete the questionnaire and send it to your institutional coordinator according to the arrangements you made with the coordinator. You may complete this questionnaire online at www.facilitiessurvey.org. You will need to click on “Part 2” and then enter the survey ID and password obtained from your institutional coordinator.
If you have a question, please contact Lorraine Lewis via e-mail at [email protected] or call 1-888-811-1838. The survey director at the National Science Foundation is Mr. Michael Gibbons.
If you do not have exact figures for any part of this questionnaire, please provide estimates.
Thank you for your participation.
OMB #3145-0101
Expiration date: 10/31/2014
The following changes have been made to the questionnaire since the FY 2011 survey cycle:
Question 1 on total bandwidth has been modified to allow for precise reporting of bandwidth.
Question 2 on bandwidth through consortia has been modified to clarify that Internet2 and National LambdaRail should not be considered consortia for the purposes of this question.
Question 3 on connections to Internet2 and National LambdaRail has been added.
Question 4 on dark fiber has been modified to clarify that indefeasible rights of use should be reported and to capture ownership of dark fiber for the next fiscal year.
Questions 5 through 11 on centrally administered high-performance computing (HPC) have been modified to include only systems that are 10 teraflops or faster.
Question 5 on architectures for centrally administered HPC has been modified to drop the restriction that only systems that are “generally available to the campus community” should be considered. In addition, the category for special purpose architectures has been dropped; such architectures can be reported as “other architecture.”
Question 7 on centrally administered HPC systems has been added.
Questions 9 through 11 on storage for centrally administered HPC have been modified to allow for precise reporting of storage capacity.
Question 12 on archival storage from external cloud services has been added.
Question 13 on research computing from external cloud services has been added.
Question 14 on survey completion time has been added.
Twelve questions from the last survey cycle have been deleted (question numbers shown below refer to those appearing in the FY 2011 survey):
Internet2 bandwidth (Question 2)
National LambdaRail bandwidth (Question 3)
Federal government research network connections (Question 4)
Desktop port connections (Question 6)
Speed on your network (Question 8)
Wireless connections (Question 9)
Comments on networking (Question 10)
Centrally administered clusters of 1 teraflop or faster (Question 13)
Centrally administered MPP of 1 teraflop or faster (Question 14)
Centrally administered SMP of 1 teraflop or faster (Question 15)
Centrally administered experimental/emerging computing systems of 1 teraflop or faster (Question 16)
Centrally administered special purpose computing systems of 1 teraflop or faster (Question 17)
Questions 1–4 include networking capacity for research, instruction, and residence halls.
1. At the end of your FY 2013, what was your institution’s total bandwidth, including the commodity internet (Internet1), Internet2, and the National LambdaRail? What is your estimate of this total for your institution at the end of your FY 2014?
Bandwidth is the amount of data that can be transmitted in a given amount of time, measured in bits per second, e.g., megabits per second (Mbps) or gigabits per second (Gbps).
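As an illustration, 1 Gbps equals 1,000 Mbps, so a total bandwidth of 2,500 Mbps could equivalently be reported as 2.5 Gbps, in whichever unit you mark in the table below.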
Commodity internet (Internet1) is the general public, multiuse network often called “the Internet.”
Internet2 is a high-performance IP (layer 3) network service with options for Layer 2 and Layer 1 networking. The network was designed for research and education to provide leading-edge production services as well as a platform for the development of new networking ideas and protocols.
National LambdaRail (NLR) is an advanced optical network infrastructure for research and education. NLR enables cutting-edge exploration in the sciences and network research.
Please do not include:
Redundant connections, which are not normally active but available if a failure occurs with the active connection; or
Burstable bandwidth.
                                                            Speed
                                                            (Mark one “X” for each row.)
Fiscal year                              Total bandwidth    Mbps      Gbps
a. At the end of FY 2013                 _________          [ ]       [ ]
b. Estimated at the end of FY 2014       _________          [ ]       [ ]
2. At the end of your FY 2013, did your institution obtain any of its bandwidth through a consortium? Do you expect to obtain bandwidth through a consortium at the end of your FY 2014?
A consortium is a collaboration of any combination of educational institutions (e.g., university system, regional collaboration), state and local agencies, vendors, or non-profit organizations with the purpose of coordinating and facilitating networking activities.
Bandwidth is the amount of data that can be transmitted in a given amount of time, measured in bits per second, e.g., megabits per second (Mbps) or gigabits per second (Gbps).
For the purposes of this question, Internet2 and NLR are not considered to be consortia.
(Mark one “X” for each row.)

Fiscal year                                                     Yes     No
a. Bandwidth through consortia at the end of FY 2013            [ ]     [ ]
b. Expect bandwidth through consortia at the end of FY 2014     [ ]     [ ]
Please provide the names of all consortia through which you expect to obtain bandwidth at the end of your FY 2014.
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
3. At the end of your FY 2013, did your institution have connections to Internet2 or National LambdaRail? Do you expect to have connections to either of these networks at the end of your FY 2014?
Internet2 is a high-performance IP (layer 3) network service with options for Layer 2 and Layer 1 networking. The network was designed for research and education to provide leading-edge production services as well as a platform for the development of new networking ideas and protocols.
National LambdaRail (NLR) is an advanced optical network infrastructure for research and education. NLR enables cutting-edge exploration in the sciences and network research.
(Mark one “X” for each row.)

At the end of FY 2013               Yes     No
a. Internet2                        [ ]     [ ]
b. National LambdaRail              [ ]     [ ]

Expected at the end of FY 2014      Yes     No
c. Internet2                        [ ]     [ ]
d. National LambdaRail              [ ]     [ ]
4. At the end of your FY 2013, did your institution own or have an indefeasible right of use (IRU) contract for any dark fiber to your institution’s internet service provider (ISP) or between your institution’s buildings? Do you expect to own or have an IRU contract for any dark fiber to your ISP or between your institution’s buildings at the end of your FY 2014?
Dark fiber is fiber-optic cable that has already been laid but is not being used. Include only fiber that was dark (i.e., unlit) when it was purchased by your institution.
Indefeasible right of use (IRU) is a contractual agreement between the operators of a communications cable, such as a fiber optic network, and a client. An IRU provides for the exclusive, unrestricted, and indefeasible right to use one, a pair, or more strands of a fiber cable for any legal purpose over a given period of time.
(Mark one “X” for each row.)

At the end of FY 2013                         Yes     No
a. To your institution’s ISP                  [ ]     [ ]
b. Between your institution’s buildings       [ ]     [ ]

Expected at the end of FY 2014                Yes     No
c. To your institution’s ISP                  [ ]     [ ]
d. Between your institution’s buildings       [ ]     [ ]
If you would like to provide any comments regarding your institution’s networking, please include them in Question 15 at the end of the survey.
5. At the end of your FY 2013, did your institution provide centrally administered high-performance computing (HPC) of 10 teraflops or faster at peak performance for each type of architecture listed below? If you had a high-performance computing system (10 teraflops or faster) with an accelerator component (e.g., GPU, Intel MIC), please report that system under the one most appropriate architecture below.
Centrally administered HPC is located within a distinct organizational unit with a staff and a budget; the unit has a stated mission that includes supporting the HPC needs of faculty and researchers.
If some of your high-performance computing systems are slower than 10 teraflops and some are faster, please report only the systems that are 10 teraflops or faster.
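For reference, one teraflop is one trillion (10^12) floating-point operations per second; a 10-teraflop system can therefore perform 10^13 such operations per second at peak.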
Had at end of FY 2013
(Mark one “X” for each row.)

a. Cluster [ ]
This architecture uses multiple commodity systems, each running its own operating system, with an Ethernet-based (e.g., 10Mb/100Mb/GigE) or high-performance interconnect network (e.g., InfiniBand or Myrinet) to perform as a single system.

b. Massively parallel processors (MPP) [ ]
This architecture uses multiple processors within a single system with a specialized high-performance interconnect network. Each processor uses its own memory and operating system (e.g., IBM Blue Gene, Cray XT5 and XE6).

c. Symmetric multiprocessors (SMP) [ ]
This architecture uses multiple processors sharing the same memory and operating system to simultaneously work on individual pieces of a program (e.g., SGI Altix UV, HP Superdome, IBM Power 775).

d. Parallel vector processors (PVP) [ ]
This architecture uses multiple vector processors sharing the same memory and operating system to simultaneously work on individual pieces of a program.

e. Experimental/Emerging architecture (Please describe.) [ ]
This architecture uses technologies not currently in common use for HPC systems.
___________________________________________

f. Other architecture (Please describe.) [ ]
___________________________________________
6. How many of the centrally administered high-performance computing systems you reported in Question 5 (a-f) have accelerators (e.g., GPU, Intel MIC)?
If your institution did not report any centrally administered HPC, check this box [ ] and go to Question 13.
Number of systems with accelerators (If none, enter “0.”) __________ systems
7. At the end of your FY 2013, what was the peak theoretical performance of (a) all your centrally administered HPC systems of 10 teraflops or faster, and (b) your fastest system? What was the architecture of your fastest system? Include only HPC systems that are centrally administered.
If some of your systems for high-performance computing are slower than 10 teraflops and some are faster, please report only the systems that are 10 teraflops or faster.
If you have only one HPC system that is 10 teraflops or faster, report the same number for rows a and b.
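As an illustrative calculation (the figures below are hypothetical, not drawn from this survey), peak theoretical performance can be estimated as the number of cores × clock rate × floating-point operations per core per cycle. For example, a cluster with 1,600 cores running at 2.5 GHz and executing 8 floating-point operations per core per cycle would have a peak of 1,600 × 2.5 × 10^9 × 8 = 3.2 × 10^13 operations per second, or 32 teraflops.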
                                                        Number of teraflops
a. All systems of 10 teraflops or faster                __________
b. Fastest system of 10 teraflops or faster             __________

c. Architecture of fastest system reported in (b) above. See Question 5 for architecture definitions.
(Mark one “X”.)
   [ ] Cluster
   [ ] Massively parallel processor (MPP)
   [ ] Symmetric multiprocessor (SMP)
   [ ] Parallel vector processor (PVP)
   [ ] Experimental/Emerging
   [ ] Other
8. During your FY 2013, which types of external users listed below used any of your institution’s centrally administered HPC of 10 teraflops or faster?
Used your HPC during FY 2013
(Mark one “X” for each row.)

Colleges and universities [ ]
Include public and private academic institutions and systems.

Governments [ ]
Include local, state, and regional jurisdictions.

Non-profit organizations [ ]
Include legal entities chartered to serve the public interest and that are exempt from most federal taxation.

Industry [ ]
Include for-profit companies, either publicly or privately held.

Other (Please describe.) [ ]
___________________________________________
___________________________________________
9. At the end of your FY 2013, what was the total usable online storage available for centrally administered HPC of 10 teraflops or faster?
Usable storage is the amount of space for data storage that is available for use after the space overhead required by file systems and applicable RAID (redundant array of independent disks) configurations is removed.
Online storage includes all storage providing immediate access for files and data from your HPC systems (of at least 10 teraflops). Storage can be either locally available to specific HPC systems or made available via the network. For example, storage may be available via SAN (storage area network) or NAS (network attached storage) environments.
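As a hypothetical illustration of usable storage, ten 4-terabyte drives configured as RAID 6 (two drives’ worth of capacity used for parity) provide about (10 − 2) × 4 = 32 terabytes of usable space before file-system overhead, rather than the 40 terabytes of raw capacity. When choosing a unit below, note that 1 petabyte = 1,000 terabytes.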
                                                                     (Mark one “X” for the unit.)
                                                        Storage      Terabytes     Petabytes
Usable online storage at the end of FY 2013
(If none, enter “0.”)                                   _________    [ ]           [ ]
10. At the end of your FY 2013, how much of the usable online storage reported in Question 9 was shared storage?
Usable storage is the amount of space for data storage that is available for use after the space overhead required by file systems and applicable RAID (redundant array of independent disks) configurations is removed.
Online storage includes all storage providing immediate access for files and data from your HPC systems (of at least 10 teraflops). Storage can be either locally available to specific HPC systems or made available via the network. For example, storage may be available via SAN (storage area network) or NAS (network attached storage) environments.
Shared storage includes the portion of online storage that is available simultaneously to multiple HPC systems (of at least 10 teraflops) via a network making use of SAN, NAS, file system mounting, or similar technologies.
                                                                     (Mark one “X” for the unit.)
                                                        Storage      Terabytes     Petabytes
Usable shared storage at the end of FY 2013
(If none, enter “0.”)                                   _________    [ ]           [ ]
11. At the end of your FY 2013, what was the total archival storage available specifically for centrally administered HPC of 10 teraflops or faster? Do not include backup storage.
Archival storage can be either online or offline. It is typically long-term storage for files and data and does not support immediate access from your HPC resources.
                                                                     (Mark one “X” for the unit.)
                                                        Storage      Terabytes     Petabytes
Archival storage at the end of FY 2013
(If none, enter “0.”)                                   _________    [ ]           [ ]
12. What percentage of the archival storage reported in Question 11 was provided by external cloud services? Do you expect to obtain or increase archival storage capacity provided by external cloud services in FY 2014?
Archival storage can be either online or offline. It is typically long-term storage for files and data and does not support immediate access from your HPC resources.
Cloud services are the use of external computing resources that are delivered as a service over a network (typically the Internet).
                                                                                  Percent
a. Archival storage provided by external cloud services (If none, enter “0.”)    _________%

(Mark one “X”.)
                                                                                  Yes     No
b. Expect to obtain or increase archival storage from external cloud services
   in FY 2014                                                                     [ ]     [ ]
13. During your FY 2013, did your institution obtain any research computing cycles from external cloud services? Do you expect to obtain any research computing cycles from external cloud services during your FY 2014? Were any of these research computing cycles obtained through a central IT department?
Cloud services are the use of external computing resources that are delivered as a service over a network (typically the Internet).
(Mark one “X” for each row.)

During FY 2013                                                  Yes     No
a. Computing cycles from external cloud services                [ ]     [ ]
b. If Yes to (a), obtained through central IT department        [ ]     [ ]

Expected during FY 2014                                         Yes     No
c. Computing cycles from external cloud services                [ ]     [ ]
d. If Yes to (c), obtained through central IT department        [ ]     [ ]
Please provide the names of all the external cloud service providers from which your institution obtained research computing cycles.
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
14. Considering all offices involved, approximately how long did it take your institution to complete Part 2 of this survey? Please round to the nearest hour. If it took less than 1 hour, report 1 hour.
Number of hours (rounded to nearest hour) __________
15. Please add any comments for Part 2 below.
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
Thank you. This is the end of Part 2. Please send this part of the survey to your institutional coordinator or enter your responses into the web survey according to the arrangements you made with your institutional coordinator.