National Science Foundation        National Institutes of Health
Part 2: Computing and Networking Capacity
(for research and instructional activities)
FY 2007 Survey of Science and Engineering Research Facilities
Who should be contacted if clarification of Part 2 answers is necessary?
Name: ____________________
Telephone: ____________________
Title/position: ____________________
E-mail address: ____________________
Please complete the questionnaire and submit it according to the arrangements you made with your institutional coordinator named in the label above. You may complete this questionnaire online at www.facilitiessurvey.org. You will need to click on “Part 2” and then enter the survey ID and password printed on the label above.
If you have a question, please contact [name] of [contractor] via e-mail at [email address] or call [toll-free number]. The survey director at the National Science Foundation is Dr. Leslie Christovich.
If you do not have exact figures for any part of this questionnaire, please provide estimates.
Thank you for your participation.
OMB #3145-0101
Question 1: Commodity internet (Internet1) and Abilene (Internet2) total bandwidth
1. At the end of your FY 2007, what was your institution’s total bandwidth to the commodity internet (Internet1) and Abilene (Internet2)? What is your estimate of the total for your institution at the end of your FY 2008?
Bandwidth is the amount of data that can be transmitted in a given amount of time, measured in bits per second.
Commodity internet (Internet1) is the general public, multiuse network often called the “Internet.”
Abilene (Internet2) is a high performance backbone network managed by the Internet2 consortium of academia, industry, and government. The purpose of Internet2 is to develop and deploy advanced network applications and technologies.
Please do not include redundant connections. A redundant connection is not normally active but is available if a failure occurs with the active connection.
Please include networking capacity for research, instruction, and residence halls.
Total bandwidth
(Mark one “X” for each column.)

Speed                                                        At end of FY 2007    Estimated at end of FY 2008
a. No bandwidth to EITHER commodity internet (Internet1)
   or Abilene (Internet2)                                          ____                     ____
b. Less than 1.6 megabits/second                                   ____                     ____
c. 1.6 to 9 megabits/second                                        ____                     ____
d. 10 megabits/second                                              ____                     ____
e. 11 to 45 megabits/second                                        ____                     ____
f. 46 to 99 megabits/second                                        ____                     ____
g. 100 megabits/second                                             ____                     ____
h. 101 to 155 megabits/second                                      ____                     ____
i. 156 to 622 megabits/second                                      ____                     ____
j. 623 to 999 megabits/second                                      ____                     ____
k. 1 to 2.4 gigabits/second                                        ____                     ____
l. 2.5 to 9 gigabits/second                                        ____                     ____
m. 10 gigabits/second                                              ____                     ____
n. More than 10 gigabits/second                                    ____                     ____
o. Other (Please specify.) ____________                            ____                     ____
Question 2: Abilene (Internet2) bandwidth
Questions 2-11 include networking capacity for research, instruction, and residence halls.
2. At the end of your FY 2007, what was your institution’s bandwidth to Abilene (Internet2)? What is your estimate of the bandwidth to Abilene at the end of your FY 2008?
Bandwidth is the amount of data that can be transmitted in a given amount of time, measured in bits per second.
Abilene (Internet2) is a high performance backbone network managed by the Internet2 consortium of academia, industry, and government. The purpose of Internet2 is to develop and deploy advanced network applications and technologies.
Please do not include redundant connections. A redundant connection is not normally active but is available if a failure occurs with the active connection.
Bandwidth for Abilene
(Mark one “X” for each column.)

Speed                                                        At end of FY 2007    Estimated at end of FY 2008
a. No bandwidth to Abilene (Internet2)                             ____                     ____
b. Less than 1.6 megabits/second                                   ____                     ____
c. 1.6 to 9 megabits/second                                        ____                     ____
d. 10 megabits/second                                              ____                     ____
e. 11 to 45 megabits/second                                        ____                     ____
f. 46 to 99 megabits/second                                        ____                     ____
g. 100 megabits/second                                             ____                     ____
h. 101 to 155 megabits/second                                      ____                     ____
i. 156 to 622 megabits/second                                      ____                     ____
j. 623 to 999 megabits/second                                      ____                     ____
k. 1 to 2.4 gigabits/second                                        ____                     ____
l. 2.5 to 9 gigabits/second                                        ____                     ____
m. 10 gigabits/second                                              ____                     ____
n. More than 10 gigabits/second                                    ____                     ____
o. Other (Please specify.) ____________                            ____                     ____
Question 3: Commodity internet (Internet1) bandwidth
3. At the end of your FY 2007, what was your institution’s bandwidth to the commodity internet (Internet1)? What is your estimate of the bandwidth to the commodity internet at the end of your FY 2008?
Bandwidth is the amount of data that can be transmitted in a given amount of time, measured in bits per second.
Commodity internet (Internet1) is the general public, multiuse network often called the “Internet.”
Please do not include redundant connections. A redundant connection is not normally active but is available if a failure occurs with the active connection.
Bandwidth for commodity internet
(Mark one “X” for each column.)

Speed                                                        At end of FY 2007    Estimated at end of FY 2008
a. No bandwidth to commodity internet (Internet1)                  ____                     ____
b. Less than 1.6 megabits/second                                   ____                     ____
c. 1.6 to 9 megabits/second                                        ____                     ____
d. 10 megabits/second                                              ____                     ____
e. 11 to 45 megabits/second                                        ____                     ____
f. 46 to 99 megabits/second                                        ____                     ____
g. 100 megabits/second                                             ____                     ____
h. 101 to 155 megabits/second                                      ____                     ____
i. 156 to 622 megabits/second                                      ____                     ____
j. 623 to 999 megabits/second                                      ____                     ____
k. 1 to 2.4 gigabits/second                                        ____                     ____
l. 2.5 to 9 gigabits/second                                        ____                     ____
m. 10 gigabits/second                                              ____                     ____
n. More than 10 gigabits/second                                    ____                     ____
o. Other (Please specify.) ____________                            ____                     ____
Question 4: Commodity internet (Internet1) connections
4. At the end of your FY 2007, how many lines did your institution have to the commodity internet (Internet1) at each of the connection speeds listed below? Please estimate this information for your FY 2008.
Commodity internet (Internet1) is the general public, multiuse network often called the “Internet.”
If your institution has bonded lines, please report the speed of the bonded lines together and count as one line. For example, if your institution has two T1 lines joined to act as a single line, report the speed as 3 megabits/second.
Please do not include redundant connections. A redundant connection is not normally active but is available if a failure occurs with the active connection.
Number of lines

Connection speed                                             At end of FY 2007    Estimated at end of FY 2008
a. No bandwidth to commodity internet (Internet1)
b. Less than 1.6 megabits/second                               ____________            ____________
c. 1.6 to 9 megabits/second                                    ____________            ____________
d. 10 megabits/second                                          ____________            ____________
e. 11 to 45 megabits/second                                    ____________            ____________
f. 46 to 99 megabits/second                                    ____________            ____________
g. 100 megabits/second                                         ____________            ____________
h. 101 to 155 megabits/second                                  ____________            ____________
i. 156 to 622 megabits/second                                  ____________            ____________
j. 623 to 999 megabits/second                                  ____________            ____________
k. 1 to 2.4 gigabits/second                                    ____________            ____________
l. 2.5 to 9 gigabits/second                                    ____________            ____________
m. 10 gigabits/second                                          ____________            ____________
n. More than 10 gigabits/second                                ____________            ____________
o. Other (Please specify.) ____________                        ____________            ____________
Question 5: Bandwidth from consortia
5. At the end of your FY 2007, did any of your institution’s bandwidth come from a consortium? Do you expect to obtain bandwidth from a consortium at the end of your FY 2008?
A consortium is a collaboration of any combination of educational institutions (e.g., university system, K-12), state and local agencies, network infrastructure operators (e.g., Internet2), vendors, health care organizations, or non-profit organizations with the purpose of coordinating and facilitating networking activities.
Bandwidth is the amount of data that can be transmitted in a given amount of time, measured in bits per second.
(Mark one “X” for each row.)

Fiscal year                                             Yes     No
a. Bandwidth from consortia at the end of FY 2007       ____    ____
b. Bandwidth from consortia at the end of FY 2008       ____    ____
Please provide the names of all consortia from which you expect to obtain bandwidth at the end of your FY 2008.
Question 6: High performance network connections
6. At the end of your FY 2007, did your institution have connections to the following high performance networks? Do you expect to have connections to any of these networks at the end of your FY 2008?
A high performance network is characterized by high bandwidth, low latency, and low rates of packet loss. Additionally, a high performance network is able to support delay-sensitive, bandwidth-intensive applications such as distributed computing, real-time access, and control of remote instrumentation.
Abilene (Internet2) is a high performance backbone network managed by the Internet2 consortium of academia, industry, and government. The purpose of Internet2 is to develop and deploy advanced network applications and technologies.
National LambdaRail is an initiative of research universities and technology companies to provide a national infrastructure for research and experimentation in networking technologies and applications.
ESnet is the Department of Energy’s Energy Sciences Network.
NREN is the NASA Research and Education Network.
(Mark one “X” for each row.)

At the end of FY 2007                                   Yes     No
a. Abilene                                              ____    ____
b. National LambdaRail                                  ____    ____
c. Federal government research network                  ____    ____
d. State or regional high performance network           ____    ____
e. Other (Please specify.)                              ____    ____

Estimated at the end of FY 2008                         Yes     No
f. Abilene                                              ____    ____
g. National LambdaRail                                  ____    ____
h. Federal government research network                  ____    ____
i. State or regional high performance network           ____    ____
j. Other (Please specify.)                              ____    ____
Question 7: Desktop port connections
7. At the end of your FY 2007, what percentage of your institution’s desktop ports had hardwire connections at each of the speeds listed below? What percentage do you estimate will be at these speeds at the end of your FY 2008? If your answer is between 0 and 1 percent, please round to 1 percent.
Please report on the capacity of the ports themselves and not the speed of the workstations connected to them. Also, do not include servers when determining your responses.
Percentage of desktop ports

Speed of connection                     At end of FY 2007    Estimated at end of FY 2008
a. 10 megabits/second or less               __________%            __________%
b. 100 megabits/second                      __________%            __________%
c. 1 gigabit/second or more                 __________%            __________%
d. Other (Please specify.)                  __________%            __________%
Total                                             100%                   100%
Question 8: Type of cable for desktop ports
8. At the end of your FY 2007, what percentage of your institution’s desktop ports were connected to your institution’s network by the following types of cable? What percentages do you estimate at the end of your FY 2008? If your answer is between 0 and 1 percent, please round to 1 percent.
Please do not include servers when determining your responses.
Percentage of desktop ports

Type of cable                           At end of FY 2007    Estimated at end of FY 2008
a. Unrated                                  __________%            __________%
b. Category 3                               __________%            __________%
c. Category 5                               __________%            __________%
d. Category 5e                              __________%            __________%
e. Category 6                               __________%            __________%
f. Other (Please specify.)                  __________%            __________%
Total                                             100%                   100%
Question 9: Dark fiber
9. At the end of your FY 2007, did your institution own any dark fiber to your institution’s internet service provider (ISP) or between your institution’s buildings? Do you plan to acquire any dark fiber to your ISP or between your institution’s buildings during your FY 2008?
Dark fiber is fiber-optic cable that has already been laid but is not being used. Include only fiber that was dark (i.e., unlit) when it was purchased by your institution.
(Mark one “X” for each row.)

Owned at the end of FY 2007                   Yes     No
a. To your institution’s ISP                  ____    ____
b. Between your institution’s buildings       ____    ____

To be acquired during FY 2008                 Yes     No
c. To your institution’s ISP                  ____    ____
d. Between your institution’s buildings       ____    ____
Question 10: Speed on your network
10. At the end of your FY 2007, what was the distribution speed (or backbone speed) at which a desktop computer on your network could connect to another computer on your institution’s network? What distribution speed do you estimate your institution will have at the end of your FY 2008?
(Mark one “X” for each column.)

Speed                                  At end of FY 2007    Estimated at end of FY 2008
a. Less than 1.6 megabits/second             ____                     ____
b. 1.6 to 9 megabits/second                  ____                     ____
c. 10 megabits/second                        ____                     ____
d. 11 to 45 megabits/second                  ____                     ____
e. 46 to 99 megabits/second                  ____                     ____
f. 100 megabits/second                       ____                     ____
g. 101 to 155 megabits/second                ____                     ____
h. 156 to 622 megabits/second                ____                     ____
i. 623 to 999 megabits/second                ____                     ____
j. 1 to 2.4 gigabits/second                  ____                     ____
k. 2.5 to 9 gigabits/second                  ____                     ____
l. 10 gigabits/second                        ____                     ____
m. More than 10 gigabits/second              ____                     ____
n. Other (Please specify.)                   ____                     ____
Question 11: Wireless connections
11. At the end of your FY 2007, what percentage, if any, of your institution’s building area was covered by wireless capabilities for network access? What percentage do you estimate will have wireless access at the end of your FY 2008?
Building area refers to the sum of floor-by-floor calculations of square footage.
Please do not include rogue wireless access points.
Wireless coverage
(Mark one “X” for each column.)

Percent of building area        At end of FY 2007    Estimated at end of FY 2008
a. None                               ____                     ____
b. 1 to 10 percent                    ____                     ____
c. 11 to 20 percent                   ____                     ____
d. 21 to 30 percent                   ____                     ____
e. 31 to 40 percent                   ____                     ____
f. 41 to 50 percent                   ____                     ____
g. 51 to 60 percent                   ____                     ____
h. 61 to 70 percent                   ____                     ____
i. 71 to 80 percent                   ____                     ____
j. 81 to 90 percent                   ____                     ____
k. 91 to 100 percent                  ____                     ____
Question 12: Architectures for centrally administered high performance computing (HPC) of 1 teraflop or faster
12. At the end of your FY 2007, did your institution provide centrally administered high performance computing (HPC) of 1 teraflop or faster at peak performance for each type of architecture listed below?
Centrally administered HPC is within a distinct organizational unit with a staff and a budget; the unit has a stated mission that includes supporting the HPC needs of faculty and researchers.
If some of your high performance computing systems are slower than 1 teraflop and some are faster, please report only the systems that are 1 teraflop or faster. For example, if you have 2 clusters of ½ teraflop and 1 cluster of 1 teraflop, report information for the 1 teraflop system. Or, if you have 3 clusters of ½ teraflop each, then you would report that you have no high performance computing with a cluster architecture.
Had at end of FY 2007
(Mark one “X” for each row.)

Centrally administered HPC architectures                             Yes    No    Uncertain
a. Cluster                                                           ____   ____    ____
   This architecture uses multiple commodity systems with an
   Ethernet based or high performance interconnect network to
   perform as a single system.
b. Massively parallel processors (MPP)                               ____   ____    ____
   This architecture uses multiple processors within a single
   system with a high performance interconnect network. Each
   processor uses its own memory and operating system.
c. Symmetric multiprocessors (SMP)                                   ____   ____    ____
   This architecture uses multiple processors sharing the same
   memory and operating system to simultaneously work on
   individual pieces of a program.
d. Parallel vector processors (PVP)                                  ____   ____    ____
   This architecture uses multiple vector processors sharing the
   same memory and operating system to simultaneously work on
   individual pieces of a program.
e. Experimental/Emerging architecture (Please describe.)             ____   ____    ____
   This architecture uses technologies not currently in common
   use for HPC systems (e.g., an accelerator-based architecture).
f. Special purpose architecture (Please describe.)                   ____   ____    ____
   This custom-designed architecture uses established technology
   to support a special purpose system that is dedicated to a
   single problem.
g. Other architecture (Please describe.)                             ____   ____    ____
Question 13: HPC centrally administered resources
13. In Question 12 (a-g), did you report having any centrally administered HPC of 1 teraflop or faster at the end of your FY 2007?
Yes (Check this box and continue with Question 14)
No (Check this box and go to Question 37)
Question 14: Centrally administered clusters of 1 teraflop or faster
14. In Question 12 (a), did you report having any centrally administered clusters for HPC at the end of your FY 2007?
Yes (Check this box and continue with Question 15)
No (Check this box and go to Question 21)
Question 15: Centrally administered single-core clusters
15. At the end of your FY 2007, how many single-core computing clusters of each size listed below did your institution provide at a speed of 1 teraflop or faster? Include only clusters that are centrally administered.
A computing cluster uses multiple commodity systems with an Ethernet based or high performance interconnect network to perform as a single system.
If your institution did not administer any such clusters, check this box and go to Question 16.
Size                                              Number of single-core clusters
a. 128 nodes or less                                          ______
b. 129 to 512 nodes                                           ______
c. 513 to 1,024 nodes                                         ______
d. 1,025 to 2,048 nodes                                       ______
e. 2,049 or more nodes (Please specify.)                      ______
Question 16: Centrally administered dual-core clusters
16. At the end of your FY 2007, how many dual-core computing clusters of each size listed below did your institution provide at a speed of 1 teraflop or faster? Include only clusters that are centrally administered.
A computing cluster uses multiple commodity systems with an Ethernet based or high performance interconnect network to perform as a single system.
If your institution did not administer any such clusters, check this box and go to Question 17.
Size                                              Number of dual-core clusters
a. 128 nodes or less                                          ______
b. 129 to 512 nodes                                           ______
c. 513 to 1,024 nodes                                         ______
d. 1,025 to 2,048 nodes                                       ______
e. 2,049 or more nodes (Please specify.)                      ______
Question 17: Centrally administered quad-core clusters
17. At the end of your FY 2007, how many quad-core computing clusters of each size listed below did your institution provide at a speed of 1 teraflop or faster? Include only clusters that are centrally administered.
A computing cluster uses multiple commodity systems with an Ethernet based or high performance interconnect network to perform as a single system.
If your institution did not administer any such clusters, check this box and go to Question 18.
Size                                              Number of quad-core clusters
a. 128 nodes or less                                          ______
b. 129 to 512 nodes                                           ______
c. 513 to 1,024 nodes                                         ______
d. 1,025 to 2,048 nodes                                       ______
e. 2,049 or more nodes (Please specify.)                      ______
Question 18: Centrally administered 8-core clusters
18. At the end of your FY 2007, how many 8-core computing clusters of each size listed below did your institution provide at a speed of 1 teraflop or faster? Include only clusters that are centrally administered.
A computing cluster uses multiple commodity systems with an Ethernet based or high performance interconnect network to perform as a single system.
If your institution did not administer any such clusters, check this box and go to Question 19.
Size                                              Number of 8-core clusters
a. 128 nodes or less                                          ______
b. 129 to 512 nodes                                           ______
c. 513 to 1,024 nodes                                         ______
d. 1,025 to 2,048 nodes                                       ______
e. 2,049 or more nodes (Please specify.)                      ______
Question 19: Clarifications on HPC clusters
19. Please provide any clarifications you may wish to make about your responses to Questions 15 through 18 concerning HPC clusters centrally administered by your institution.
Question 20: Peak performance of clusters of 1 teraflop or faster
20. At the end of your FY 2007, what was the peak theoretical performance of a) your fastest computing cluster of 1 teraflop or faster, and b) all your computing clusters of 1 teraflop or faster (including the fastest one)? Include only clusters that are centrally administered.
A computing cluster uses multiple commodity systems with an Ethernet based or high performance interconnect network to perform as a single system.
If you have only one cluster that is 1 teraflop or faster, report the same number for rows a and b.
                                                                 Number of teraflops
a. Fastest cluster of 1 teraflop or faster                             ______
b. All computing clusters of 1 teraflop or faster
   (including the fastest one)                                         ______
Question 21: Centrally administered MPP systems of 1 teraflop or faster
21. At the end of your FY 2007, how many MPP systems of 1 teraflop or faster did your institution administer? Include only systems that are centrally administered.
Massively parallel processing (MPP) systems use multiple processors within a single system with a high performance interconnect network. Each processor uses its own memory and operating system.
If some of your MPP systems for high performance computing are slower than 1 teraflop and some are faster, please report only the systems that are 1 teraflop or faster. For example, if you have one MPP system at ½ teraflop and another at 1½ teraflops, report only the one at 1½ teraflops.
If your institution did not administer any such systems, check this box and go to Question 23.
Number of MPP systems of 1 teraflop or faster ___________
Question 22: Peak performance of MPP systems of 1 teraflop or faster
22. At the end of your FY 2007, what was the peak theoretical performance of a) your fastest MPP system of 1 teraflop or faster, and b) all your MPP systems of 1 teraflop or faster (including the fastest one)? Include only systems that are centrally administered.
Massively parallel processing (MPP) systems use multiple processors within a single system with a high performance interconnect network. Each processor uses its own memory and operating system.
If you have only one system that is 1 teraflop or faster, report the same number for rows a and b.
                                                                 Number of teraflops
a. Fastest MPP system of 1 teraflop or faster                          ______
b. All MPP systems of 1 teraflop or faster
   (including the fastest one)                                         ______
Question 23: Centrally administered SMP systems of 1 teraflop or faster
23. At the end of your FY 2007, how many SMP systems of 1 teraflop or faster did your institution administer? Include only systems that are centrally administered.
Symmetric multiprocessing (SMP) systems use multiple processors sharing the same memory and operating system to simultaneously work on individual pieces of a program.
If some of your SMP systems for high performance computing are slower than 1 teraflop and some are faster, please report only the systems that are 1 teraflop or faster. For example, if you have one SMP system at ½ teraflop and another at 1½ teraflops, report only the one at 1½ teraflops.
If your institution did not administer any such systems, check this box and go to Question 25.
Number of SMP systems of 1 teraflop or faster ___________
Question 24: Peak performance of SMP systems of 1 teraflop or faster
24. At the end of your FY 2007, what was the peak theoretical performance of a) your fastest SMP system of 1 teraflop or faster, and b) all your SMP systems of 1 teraflop or faster (including the fastest one)? Include only systems that are centrally administered.
Symmetric multiprocessing (SMP) systems use multiple processors sharing the same memory and operating system to simultaneously work on individual pieces of a program.
If you have only one system that is 1 teraflop or faster, report the same number for rows a and b.
                                                                 Number of teraflops
a. Fastest SMP system of 1 teraflop or faster                          ______
b. All SMP systems of 1 teraflop or faster
   (including the fastest one)                                         ______
Question 25: Centrally administered PVP systems of 1 teraflop or faster
25. At the end of your FY 2007, how many PVP systems of 1 teraflop or faster did your institution administer? Include only systems that are centrally administered.
Parallel vector processing (PVP) systems use multiple vector processors sharing the same memory and operating system to simultaneously work on individual pieces of a program.
If some of your PVP systems for high performance computing are slower than 1 teraflop and some are faster, please report only the systems that are 1 teraflop or faster. For example, if you have one PVP system at ½ teraflop and another at 1½ teraflops, report only the one at 1½ teraflops.
If your institution did not administer any such systems, check this box and go to Question 27.
Number of PVP systems of 1 teraflop or faster ___________
Question 26: Total peak performance of PVP systems of 1 teraflop or faster
26. At the end of your FY 2007, what was the total peak theoretical performance of all your PVP systems of 1 teraflop or faster? Include only systems that are centrally administered.
Parallel vector processing (PVP) systems use multiple vector processors sharing the same memory and operating system to simultaneously work on individual pieces of a program.
                                                                 Number of teraflops
All PVP systems of 1 teraflop or faster                                ______
Question 27: HPC used for administrative functions
27. At the end of your FY 2007, were any of the following HPC architectures used for administrative functions (that is, for the business activities of your institution)?
Used for administrative functions
(Mark one “X” for each row.)

Architectures                           Yes    No    Uncertain    Does not apply*
a. Clusters                             ____   ____     ____           ____
b. Massively parallel processors        ____   ____     ____           ____
c. Symmetric multiprocessors            ____   ____     ____           ____
d. Parallel vector processors           ____   ____     ____           ____

* Does not apply because none of our centrally administered HPC uses this architecture.
Question 28: Centrally administered experimental/emerging computing systems of 1 teraflop or faster
28. At the end of your FY 2007, how many experimental/emerging computing systems of 1 teraflop or faster did your institution administer? Include only systems that are centrally administered.
Experimental/Emerging computing systems use technologies not currently in common use for HPC systems (e.g., an accelerator-based architecture).
If your institution did not administer any such systems, check this box and go to Question 29.
Number of experimental/emerging computing systems of 1 teraflop or faster ___________
Question 29: Centrally administered special purpose computing systems of 1 teraflop or faster
29. At the end of your FY 2007, how many special purpose computing systems of 1 teraflop or faster did your institution administer? Include only systems that are centrally administered.
Special purpose computing systems use a custom-designed architecture using established technology that supports a special purpose system that is dedicated to a single problem.
If your institution did not administer any such systems, check this box and go to Question 30.
Number of special purpose computing systems of 1 teraflop or faster ___________
Question 30: External users of centrally administered HPC
30. During your FY 2007, which types of external users listed below used any of your institution’s centrally administered HPC?
Used HPC during FY 2007
(Mark one “X” for each row.)

Type of external user                                        Yes    No    Uncertain
a. Colleges and universities                                 ____   ____    ____
   Include public and private academic institutions and systems.
b. Governments                                               ____   ____    ____
   Include local, state, and regional jurisdictions.
c. Non-profit organizations                                  ____   ____    ____
   Include legal entities chartered to serve the public interest and not operated for profit.
d. Industry                                                  ____   ____    ____
   Include for-profit companies, either publicly or privately held.
e. Other (Please describe.)                                  ____   ____    ____
Question 31: Usable online storage for centrally administered HPC of 1 teraflop or faster
31. At the end of your FY 2007, what was the total usable online storage available for centrally administered HPC?
Usable storage is the amount of space for data storage that is available for use after the space overhead required by file systems and applicable RAID (redundant array of independent disks) configurations is removed.
Online storage includes all storage providing immediate access for files and data from your HPC systems (of at least 1 teraflop). Storage can be either locally available to specific HPC systems or made available via the network. For example, storage may be available via SAN (storage area network) or NAS (network attached storage) environments.
(Mark one “X.”)
a. Less than 1 terabyte                            ____
b. 1 to 5 terabytes                                ____
c. 6 to 10 terabytes                               ____
d. 11 to 25 terabytes                              ____
e. 26 to 50 terabytes                              ____
f. 51 to 100 terabytes                             ____
g. 101 to 250 terabytes                            ____
h. 251 to 500 terabytes                            ____
i. 501 to 1,000 terabytes                          ____
j. 1,001 or more terabytes (Please specify.)       ____
k. Uncertain                                       ____
Question 32: Usable shared storage for centrally administered HPC of 1 teraflop or faster
32. At the end of your FY 2007, how much of the usable online storage reported in Question 31 was shared storage?
Usable storage is the amount of space for data storage that is available for use after the space overhead required by file systems and applicable RAID (redundant array of independent disks) configurations is removed.
Online storage includes all storage providing immediate access for files and data from your HPC systems (of at least 1 teraflop). Storage can be either locally available to specific HPC systems or made available via the network. For example, storage may be available via SAN (storage area network) or NAS (network attached storage) environments.
Shared storage includes the portion of online storage that is available simultaneously to multiple HPC systems (of at least 1 teraflop) via a network making use of SAN, NAS, file system mounting, or similar technologies.
(Mark one “X.”)
a. Less than 1 terabyte                            ____
b. 1 to 5 terabytes                                ____
c. 6 to 10 terabytes                               ____
d. 11 to 25 terabytes                              ____
e. 26 to 50 terabytes                              ____
f. 51 to 100 terabytes                             ____
g. 101 to 250 terabytes                            ____
h. 251 to 500 terabytes                            ____
i. 501 to 1,000 terabytes                          ____
j. 1,001 or more terabytes (Please specify.)       ____
k. Uncertain                                       ____
Question 33: Usable online storage for HPC available for administrative functions
33. At the end of your FY 2007, was any of the usable online storage reported in Question 31 used for administrative functions (that is, for the business activities of your institution)?
(Mark one “X.”)
a. Yes           ____
b. No            ____
c. Uncertain     ____
Question 34: Archival storage for centrally administered HPC of 1 teraflop or faster
34. At the end of your FY 2007, what was the total archival storage available specifically for centrally administered HPC? Do not include backup storage.
Archival storage is off-line, typically long-term storage for files and data that does not support immediate access from your HPC resources.
(Mark one “X.”)
a. None                                            ____
b. Less than 100 terabytes                         ____
c. 101 to 250 terabytes                            ____
d. 251 to 500 terabytes                            ____
e. 501 to 750 terabytes                            ____
f. 751 to 1,000 terabytes                          ____
g. 1,001 to 5,000 terabytes                        ____
h. 5,001 to 10,000 terabytes                       ____
i. 10,001 or more terabytes (Please specify.)      ____
j. Uncertain                                       ____
Question 35: Archival storage for HPC available for administrative functions
35. At the end of your FY 2007, was any of the archival storage reported in Question 34 used for administrative functions (that is, for the business activities of your institution)?
(Mark one “X.”)
a. Yes           ____
b. No            ____
c. Uncertain     ____
Question 36: Conditioned machine room space for centrally administered HPC of 1 teraflop or faster
36. At the end of your FY 2007, what was the total net assignable square feet (NASF) of conditioned machine room space for all centrally administered HPC at your institution?
Net assignable square feet (NASF) is the sum of all areas on all floors of a building assigned to, or available to be assigned to, an occupant for a specific use, such as research or instruction. NASF is measured from the inside faces of walls.
Conditioned machine rooms are specifically designed to house computing systems and are engineered to keep processors at a cool temperature so they can run efficiently and effectively.
Conditioned machine room space __________ NASF
Question 37: Comments
37. Please add any comments for Part 2 below.
Thank you. This is the end of Part 2. Please submit this part of the survey according to the arrangements you made with your institutional coordinator (named on the label on the front cover of the survey questionnaire).