National Science Foundation


Part 2: Computing and Networking Capacity
(for research and instructional activities)

FY 2011 Survey of Science and
Engineering Research Facilities






Who should be contacted if clarification of Part 2 answers is necessary?


Contact 1 Contact 2

Name: __________________________________ __________________________________

Title/position: __________________________________ __________________________________

Telephone: __________________________________ __________________________________

Email address: __________________________________ __________________________________


Please complete the questionnaire and submit it according to the arrangements you made with your institutional coordinator named in the label above. You may complete this questionnaire online at www.facilitiessurvey.org. You will need to click on “Part 2” and then enter the survey ID and password printed on the label above.


If you have a question, please contact [Name] via e-mail at [Contractor email box] or call 1-888-XXX-XXXX. The survey director at the National Science Foundation is Dr. Leslie Christovich.


If you do not have exact figures for any part of this questionnaire, please provide estimates.


Thank you for your participation.


OMB #3145-0101



Changes from previous survey cycle

  • Question 1 on total bandwidth has been modified to include bandwidth to the commodity internet (Internet1), Internet2, the National LambdaRail (NLR), and federal government research networks.

  • Question 4 on federal government research networks has been added.

  • Question 11 on centrally administered high-performance computing (HPC) architectures has been modified to include graphics processing unit (GPU) computing.

  • Three questions from the last survey cycle have been deleted (question numbers shown below refer to those appearing in the FY 2009 survey):

      • Commodity internet bandwidth (Question 4)

      • High performance network connections (Question 6)

      • Conditioned machine room space for centrally administered HPC (Question 23)




Question 1: Total bandwidth

1. At the end of your FY 2011, what was your institution’s total bandwidth including the commodity internet (Internet1), Internet2, the National LambdaRail (NLR), and federal government research networks? What is your estimate of this total for your institution at the end of your FY 2012?

Bandwidth is the amount of data that can be transmitted in a given amount of time, measured in bits per second.

Commodity internet (Internet1) is the general public, multiuse network often called the “Internet.”

Internet2 is a high-performance hybrid optical packet network. The network was designed to provide next-generation production services as well as a platform for the development of new networking ideas and protocols.

National LambdaRail (NLR) is an advanced optical network infrastructure for research and education. NLR enables cutting-edge exploration in the sciences and network research.

Federal government research networks are high-performance networks that provide access to federal research facilities and computing resources (e.g., the Department of Energy’s ESnet, NASA’s NREN).

Please do not include:

  • Redundant connections, which are not normally active but available if a failure occurs with the active connection; or

  • Burstable bandwidth.


Please include networking capacity for research, instruction, and residence halls.
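
For illustration, a minimal Python sketch (with hypothetical link speeds) of how an institution’s active connections could be summed into the single total reported here; redundant and burstable links are excluded, per the instructions above.

    # Hypothetical link speeds, in megabits per second; redundant (failover-only)
    # and burstable links are excluded from the total, per the instructions above.
    active_links_mbps = {
        "Commodity internet (Internet1)": 1_000,         # assumed 1 gigabit/second link
        "Internet2": 10_000,                             # assumed 10 gigabit/second link
        "Federal research network (e.g., ESnet)": 1_000, # assumed 1 gigabit/second link
    }
    redundant_links_mbps = {"Backup ISP connection": 1_000}  # not counted

    total_mbps = sum(active_links_mbps.values())
    print(f"Total bandwidth: {total_mbps} megabits/second "
          f"({total_mbps / 1_000:.0f} gigabits/second)")
    # -> 12000 megabits/second (12 gigabits/second), i.e., response category k
    #    (10.1 to 20 gigabits/second).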

Total bandwidth
(Mark one “X” for each column.)

                                                        At end of       Estimated at end
Speed                                                   FY 2011         of FY 2012

a. 10 megabits/second or less                             [ ]                [ ]
b. 11 to 45 megabits/second                               [ ]                [ ]
c. 46 to 99 megabits/second                               [ ]                [ ]
d. 100 megabits/second                                    [ ]                [ ]
e. 101 to 155 megabits/second                             [ ]                [ ]
f. 156 to 622 megabits/second                             [ ]                [ ]
g. 623 to 999 megabits/second                             [ ]                [ ]
h. 1 to 2.4 gigabits/second                               [ ]                [ ]
i. 2.5 to 9 gigabits/second                               [ ]                [ ]
j. 10 gigabits/second                                     [ ]                [ ]
k. 10.1 to 20 gigabits/second                             [ ]                [ ]
l. More than 20 gigabits/second                           [ ]                [ ]
m. Other (Please specify.)                                [ ]                [ ]

   ________________________________________________

   ________________________________________________


Question 2: Internet2 bandwidth

Questions 2–10 include networking capacity for research, instruction, and residence halls.


2. At the end of your FY 2011, what was your institution’s bandwidth to Internet2? What is your estimate of the bandwidth to Internet2 at the end of your FY 2012?

Bandwidth is the amount of data that can be transmitted in a given amount of time, measured in bits per second.

Internet2 is a high-performance hybrid optical packet network. The network was designed to provide next-generation production services as well as a platform for the development of new networking ideas and protocols.

Please do not include redundant connections. A redundant connection is not normally active but is available if a failure occurs with the active connection.

Bandwidth for Internet2
(Mark one “X” for each column.)

                                                        At end of       Estimated at end
Speed                                                   FY 2011         of FY 2012

a. No bandwidth to Internet2                              [ ]                [ ]
b. 10 megabits/second or less                             [ ]                [ ]
c. 11 to 45 megabits/second                               [ ]                [ ]
d. 46 to 99 megabits/second                               [ ]                [ ]
e. 100 megabits/second                                    [ ]                [ ]
f. 101 to 155 megabits/second                             [ ]                [ ]
g. 156 to 622 megabits/second                             [ ]                [ ]
h. 623 to 999 megabits/second                             [ ]                [ ]
i. 1 to 2.4 gigabits/second                               [ ]                [ ]
j. 2.5 to 9 gigabits/second                               [ ]                [ ]
k. 10 gigabits/second                                     [ ]                [ ]
l. 10.1 to 20 gigabits/second                             [ ]                [ ]
m. More than 20 gigabits/second                           [ ]                [ ]
n. Other (Please specify.)                                [ ]                [ ]

   ________________________________________________

   ________________________________________________



Question 3: National LambdaRail (NLR) bandwidth

3. At the end of your FY 2011, what was your institution’s bandwidth to National LambdaRail (NLR)? What is your estimate of the bandwidth to National LambdaRail at the end of your FY 2012?

Bandwidth is the amount of data that can be transmitted in a given amount of time, measured in bits per second.

National LambdaRail (NLR) is an advanced optical network infrastructure for research and education. NLR enables cutting-edge exploration in the sciences and network research.

Please do not include redundant connections. A redundant connection is not normally active but is available if a failure occurs with the active connection.

Bandwidth for National LambdaRail
(Mark one “X” for each column.)

                                                        At end of       Estimated at end
Speed                                                   FY 2011         of FY 2012

a. No bandwidth to National LambdaRail                    [ ]                [ ]
b. 10 megabits/second or less                             [ ]                [ ]
c. 11 to 45 megabits/second                               [ ]                [ ]
d. 46 to 99 megabits/second                               [ ]                [ ]
e. 100 megabits/second                                    [ ]                [ ]
f. 101 to 155 megabits/second                             [ ]                [ ]
g. 156 to 622 megabits/second                             [ ]                [ ]
h. 623 to 999 megabits/second                             [ ]                [ ]
i. 1 to 2.4 gigabits/second                               [ ]                [ ]
j. 2.5 to 9 gigabits/second                               [ ]                [ ]
k. 10 gigabits/second                                     [ ]                [ ]
l. 10.1 to 20 gigabits/second                             [ ]                [ ]
m. More than 20 gigabits/second                           [ ]                [ ]
n. Other (Please specify.)                                [ ]                [ ]

   ________________________________________________

   ________________________________________________





Question 4: Federal government research network connections

4. At the end of your FY 2011, did your institution have connections to any federal government research networks? Do you expect to have connections to any of these networks at the end of your FY 2012?

Federal government research networks are high-performance networks that provide access to federal research facilities and computing resources (e.g., the Department of Energy’s ESnet, NASA’s NREN).


(Mark one “X” for each row.)

Fiscal year                                               Yes       No

a. Connections at the end of FY 2011                      [ ]       [ ]
b. Connections at the end of FY 2012                      [ ]       [ ]

Question 5: Bandwidth through consortia

5. At the end of your FY 2011, did your institution obtain any of its bandwidth through a consortium? Do you expect to obtain bandwidth through a consortium at the end of your FY 2012?

A consortium is a collaboration of any combination of educational institutions (e.g., university system, regional collaboration), state and local agencies, network infrastructure operators (e.g., Internet2), vendors, health care organizations, or non-profit organizations with the purpose of coordinating and facilitating networking activities.

Bandwidth is the amount of data that can be transmitted in a given amount of time, measured in bits per second.


(Mark one “X” for each row.)

Fiscal year                                                        Yes       No

a. Bandwidth through consortia at the end of FY 2011               [ ]       [ ]
b. Bandwidth through consortia at the end of FY 2012               [ ]       [ ]



Please provide the names of all consortia through which you expect to obtain bandwidth at the end of your FY 2012.



________________________________________________________________________________

________________________________________________________________________________

________________________________________________________________________________

________________________________________________________________________________


Question 6: Desktop port connections

6. At the end of your FY 2011, what percentage of your institution’s desktop ports had hardwire connections at each of the speeds listed below? What percentage do you estimate will be at these speeds at the end of your FY 2012? If your answer is between 0 and 1 percent, please round to 1 percent.

Please report on the capacity of the ports themselves and not the speed of the workstations connected to them. Also, do not include servers when determining your responses.
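
For illustration, a minimal Python sketch (with a hypothetical port inventory) of the percentage calculation for this question, including the rule that answers between 0 and 1 percent are rounded to 1 percent.

    # Hypothetical counts of desktop ports by port capacity (not workstation
    # speed); servers are excluded.
    ports_by_speed = {
        "10 megabits/second or less": 90,
        "100 megabits/second": 6_010,
        "1 gigabit/second": 3_900,
        "10 gigabits/second or more": 0,
    }
    total_ports = sum(ports_by_speed.values())

    for speed, count in ports_by_speed.items():
        pct = 100 * count / total_ports
        if 0 < pct < 1:
            pct = 1.0  # round answers between 0 and 1 percent up to 1 percent
        print(f"{speed}: {round(pct)}%")
    # -> 1%, 60%, 39%, and 0%, which sum to the required 100%.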


                                                        Percentage of desktop ports

                                                        At end of       Estimated at end
Speed of connection                                     FY 2011         of FY 2012

a. 10 megabits/second or less                           _________ %     _________ %
b. 100 megabits/second                                  _________ %     _________ %
c. 1 gigabit/second                                     _________ %     _________ %
d. 10 gigabits/second or more                           _________ %     _________ %
e. Other (Please specify.)                              _________ %     _________ %

   ___________________________________________

   Total                                                     100%            100%



Question 7: Dark fiber

7. At the end of your FY 2011, did your institution own any dark fiber to your institution’s internet service provider (ISP) or between your institution’s buildings? Do you plan to acquire any dark fiber to your ISP or between your institution’s buildings during your FY 2012?

Dark fiber is fiber-optic cable that has already been laid but is not being used. Include only fiber that was dark (i.e., unlit) when it was purchased by your institution.


(Mark one “X” for each row.)

Owned at the end of FY 2011                               Yes       No

a. To your institution’s ISP                              [ ]       [ ]
b. Between your institution’s buildings                   [ ]       [ ]

To be acquired during FY 2012                             Yes       No

c. To your institution’s ISP                              [ ]       [ ]
d. Between your institution’s buildings                   [ ]       [ ]


Question 8: Speed on your network

8. At the end of your FY 2011, what was the distribution speed (or backbone speed) at which a desktop computer on your network could connect to another computer on your institution’s network? What distribution speed will your institution have at the end of your FY 2012?


(Mark one “X” for each column.)

                                                        At end of       Estimated at end
Speed                                                   FY 2011         of FY 2012

a. 10 megabits/second or less                             [ ]                [ ]
b. 11 to 45 megabits/second                               [ ]                [ ]
c. 46 to 99 megabits/second                               [ ]                [ ]
d. 100 megabits/second                                    [ ]                [ ]
e. 101 to 155 megabits/second                             [ ]                [ ]
f. 156 to 622 megabits/second                             [ ]                [ ]
g. 623 to 999 megabits/second                             [ ]                [ ]
h. 1 to 2.4 gigabits/second                               [ ]                [ ]
i. 2.5 to 9 gigabits/second                               [ ]                [ ]
j. 10 gigabits/second                                     [ ]                [ ]
k. 10.1 to 20 gigabits/second                             [ ]                [ ]
l. More than 20 gigabits/second                           [ ]                [ ]
m. Other (Please specify.)                                [ ]                [ ]

   ________________________________________________

   ________________________________________________


Question 9: Wireless connections

9. At the end of your FY 2011, what percentage, if any, of your institution’s building area was covered by wireless capabilities for network access? What percentage do you estimate will have wireless access at the end of your
FY 2012?

Building area refers to the sum of floor-by-floor calculations of square footage.

Please do not include rogue wireless access points.
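
For illustration, a minimal Python sketch (with hypothetical buildings) of how the covered share of building area could be computed from floor-by-floor square footage.

    # Hypothetical buildings: (total square feet summed over all floors,
    # square feet covered by institution-managed wireless access).
    buildings = [
        (120_000, 120_000),
        (80_000, 30_000),
        (200_000, 150_000),
    ]
    total_area = sum(total for total, _ in buildings)
    covered_area = sum(covered for _, covered in buildings)
    pct_covered = 100 * covered_area / total_area
    print(f"Wireless coverage: {pct_covered:.0f}% of building area")
    # -> 75% of building area, i.e., response category i (71 to 80 percent).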

                                                        Wireless coverage for network access
                                                        (Mark one “X” for each column.)

                                                        At end of       Estimated at end
Percent of building area                                FY 2011         of FY 2012

a. None                                                   [ ]                [ ]
b. 1 to 10 percent                                        [ ]                [ ]
c. 11 to 20 percent                                       [ ]                [ ]
d. 21 to 30 percent                                       [ ]                [ ]
e. 31 to 40 percent                                       [ ]                [ ]
f. 41 to 50 percent                                       [ ]                [ ]
g. 51 to 60 percent                                       [ ]                [ ]
h. 61 to 70 percent                                       [ ]                [ ]
i. 71 to 80 percent                                       [ ]                [ ]
j. 81 to 90 percent                                       [ ]                [ ]
k. 91 to 100 percent                                      [ ]                [ ]



Question 10: Comments on networking

10. Please add any comments that you wish to make on your institution’s networking below.

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

Question 11: Architectures for centrally administered high-performance computing (HPC) of 1 teraflop or faster

11. At the end of your FY 2011, did your institution provide centrally administered high-performance computing (HPC) of 1 teraflop or faster at peak performance for each type of architecture listed below?

Centrally administered HPC is located within a distinct organizational unit with a staff and a budget and is generally available to the campus community. The unit has a stated mission that includes supporting HPC needs of faculty and researchers.

If some of your high-performance computing systems are slower than 1 teraflop and some are faster, please report only the systems that are 1 teraflop or faster.

                                                                        Had at end of FY 2011
                                                                        (Mark one “X” for each row.)

Centrally administered HPC architectures                                    Yes       No

a. Cluster                                                                  [ ]       [ ]
   This architecture uses multiple commodity systems with an
   Ethernet-based or high-performance interconnect network to
   perform as a single system.

b. Massively parallel processors (MPP)                                      [ ]       [ ]
   This architecture uses multiple processors within a single
   system with a high-performance interconnect network. Each
   processor uses its own memory and operating system.

c. Symmetric multiprocessors (SMP)                                          [ ]       [ ]
   This architecture uses multiple processors sharing the same
   memory and operating system to simultaneously work on
   individual pieces of a program.

d. Parallel vector processors (PVP)                                         [ ]       [ ]
   This architecture uses multiple vector processors sharing the
   same memory and operating system to simultaneously work
   on individual pieces of a program.

e. Graphics processing unit (GPU) computing                                 [ ]       [ ]
   This architecture uses CPU processors to process the sequential
   part of a problem and GPU processors to accelerate the
   computationally intensive part.

f. Experimental/emerging architecture (Please describe.)                    [ ]       [ ]
   This architecture uses technologies not currently in common
   use for HPC systems (e.g., an accelerator-based architecture).

   ___________________________________________

g. Special purpose architecture (Please describe.)                          [ ]       [ ]
   This custom-designed architecture uses established technology
   that supports a special purpose system that is dedicated to a
   single type of problem.

   ___________________________________________

h. Other architecture (Please describe.)                                    [ ]       [ ]

   ___________________________________________

Question 12: HPC centrally administered resources

12. In Question 11 (a–h), did you report having any centrally administered high-performance computing of 1 teraflop or faster at the end of your FY 2011?

Yes (Check this box and go to Question 13)                              [ ]

No (Check this box and go to Question 22)                               [ ]



Question 13: Centrally administered clusters of 1 teraflop or faster

13. At the end of your FY 2011, what was the peak theoretical performance of (a) your fastest computing cluster of
1 teraflop or faster, and (b) all your computing clusters of 1 teraflop or faster (including the fastest one)? Include only clusters that are centrally administered.

A computing cluster uses multiple commodity systems with an Ethernet-based or high-performance interconnect network to perform as a single system.

If some of your cluster systems for high-performance computing are slower than 1 teraflop and some are faster, please report only the systems that are 1 teraflop or faster.

If you have only one cluster that is 1 teraflop or faster, report the same number for rows a and b.
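
For illustration, a minimal Python sketch of a common peak-theoretical-performance estimate (nodes × cores per node × clock rate × floating-point operations per core per cycle), using assumed hardware parameters.

    # Assumed (hypothetical) cluster parameters for a peak theoretical
    # performance estimate; actual values come from your system's specifications.
    nodes = 128            # compute nodes
    cores_per_node = 16    # cores per node
    clock_ghz = 2.6        # clock rate in gigahertz
    flops_per_cycle = 8    # double-precision floating-point operations per core per cycle

    peak_gigaflops = nodes * cores_per_node * clock_ghz * flops_per_cycle
    peak_teraflops = peak_gigaflops / 1_000
    print(f"Peak theoretical performance: {peak_teraflops:.1f} teraflops")
    # -> 42.6 teraflops. Row (a) reports the fastest cluster; row (b) reports the
    #    sum over all centrally administered clusters of 1 teraflop or faster.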

If your institution did not administer any such clusters,
check this box and go to Question 14                                    [ ]

                                                                Number of teraflops

a. Fastest cluster of 1 teraflop or faster                                  __________

b. All computing clusters of 1 teraflop or more
   (including the fastest cluster)                                          __________



Question 14: Centrally administered MPP of 1 teraflop or faster

14. At the end of your FY 2011, what was the peak theoretical performance of (a) your fastest MPP system of 1 teraflop or faster, and (b) all your MPP systems of 1 teraflop or faster (including the fastest one)? Include only MPP systems that are centrally administered.

Massively parallel processing (MPP) systems use multiple processors within a single system with a high-performance interconnect network. Each processor uses its own memory and operating system.

If some of your MPP systems for high-performance computing are slower than 1 teraflop and some are faster, please report only the systems that are 1 teraflop or faster.

If you have only one system that is 1 teraflop or faster, report the same number for rows a and b.

If your institution did not administer any such
MPP systems, check this box and go to Question 15                       [ ]

                                                                Number of teraflops

a. Fastest MPP system of 1 teraflop or faster                               __________

b. All MPP systems of 1 teraflop or more
   (including the fastest system)                                           __________



Question 15: Centrally administered SMP of 1 teraflop or faster

15. At the end of your FY 2011, what was the peak theoretical performance of (a) your fastest SMP system of 1 teraflop or faster, and (b) all your SMP systems of 1 teraflop or faster (including the fastest one)? Include only SMP systems that are centrally administered.

Symmetric multiprocessing (SMP) systems use multiple processors sharing the same memory and operating system to simultaneously work on individual pieces of a program.

If some of your SMP systems for high-performance computing are slower than 1 teraflop and some are faster, please report only the systems that are 1 teraflop or faster.

If you have only one system that is 1 teraflop or faster, report the same number for rows a and b.

If your institution did not administer any such
SMP systems, check this box and go to Question 16                       [ ]

                                                                Number of teraflops

a. Fastest SMP system of 1 teraflop or faster                               __________

b. All SMP systems of 1 teraflop or more
   (including the fastest system)                                           __________




Question 16: Centrally administered experimental/emerging computing systems of 1 teraflop or faster

16. At the end of your FY 2011, how many experimental/emerging computing systems of 1 teraflop or faster did your institution administer? Include only systems that are centrally administered.

Experimental/Emerging computing systems use technologies not currently in common use for HPC systems (e.g., an accelerator-based architecture).

If your institution did not administer any such systems,
check this box and go to Question 17                                    [ ]

Number of systems of 1 teraflop or faster                                   __________ systems



Question 17: Centrally administered special purpose computing systems of 1 teraflop or faster

17. At the end of your FY 2011, how many special purpose computing systems of 1 teraflop or faster did your institution administer? Include only systems that are centrally administered.

Special purpose computing systems use a custom-designed architecture using established technology that supports a special purpose system that is dedicated to a single problem.

If your institution did not administer any such systems,
check this box and go to Question 18                                    [ ]

Number of systems of 1 teraflop or faster                                   __________ systems



Question 18: External users of centrally administered HPC of 1 teraflop or faster

18. During your FY 2011, which types of external users listed below used any of your institution’s centrally administered HPC of 1 teraflop or faster?

                                                                    Used your HPC during FY 2011
                                                                    (Mark one “X” for each row.)

Type of external user                                                   Yes      No      Uncertain

a. Colleges and universities                                            [ ]      [ ]        [ ]
   Include public and private academic institutions and systems.

b. Governments                                                          [ ]      [ ]        [ ]
   Include local, state, and regional jurisdictions.

c. Non-profit organizations                                             [ ]      [ ]        [ ]
   Include legal entities chartered to serve the public interest and
   that are exempt from most federal taxation.

d. Industry                                                             [ ]      [ ]        [ ]
   Include for-profit companies, either publicly or privately held.

e. Other (Please describe.)                                             [ ]      [ ]        [ ]

   ___________________________________________

   ___________________________________________



Question 19: Usable online storage for centrally administered HPC of 1 teraflop or faster

19. At the end of your FY 2011, what was the total usable online storage available for centrally administered HPC of 1 teraflop or faster?

Usable storage is the amount of space for data storage that is available for use after the space overhead required by
file systems and applicable RAID (redundant array of independent disks) configurations is removed.

Online storage includes all storage providing immediate access for files and data from your HPC systems (of at least
1 teraflop). Storage can be either locally available to specific HPC systems or made available via the network. For example, storage may be available via SAN (storage area network) or NAS (network attached storage) environments.
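
For illustration, a minimal Python sketch (with an assumed disk array and overhead figures) of how usable storage differs from raw capacity once RAID parity and file-system overhead are removed.

    # Hypothetical array: raw capacity less RAID parity and an assumed 5%
    # file-system overhead gives the "usable" figure reported in this question.
    disks = 48                 # disks in the array
    disk_tb = 2.0              # raw terabytes per disk
    parity_disks = 2           # e.g., RAID 6 reserves two disks' worth of parity
    fs_overhead = 0.05         # assumed file-system overhead fraction

    raw_tb = disks * disk_tb
    usable_tb = (disks - parity_disks) * disk_tb * (1 - fs_overhead)
    print(f"Raw capacity: {raw_tb:.0f} terabytes; usable storage: {usable_tb:.1f} terabytes")
    # -> Raw capacity: 96 terabytes; usable storage: 87.4 terabytes, i.e.,
    #    response category g (51 to 100 terabytes).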

(Mark one “X”.)

a. None                                                   [ ]
b. Less than 1 terabyte                                   [ ]
c. 1 to 5 terabytes                                       [ ]
d. 6 to 10 terabytes                                      [ ]
e. 11 to 25 terabytes                                     [ ]
f. 26 to 50 terabytes                                     [ ]
g. 51 to 100 terabytes                                    [ ]
h. 101 to 250 terabytes                                   [ ]
i. 251 to 500 terabytes                                   [ ]
j. 501 to 1,000 terabytes                                 [ ]
k. 1,001 or more terabytes (Please specify.)              [ ]

   ________________________________________________



Question 20: Usable shared storage for centrally administered HPC of 1 teraflop or faster

20. At the end of your FY 2011, how much of the usable online storage reported in Question 19 was shared storage?

Usable storage is the amount of space for data storage that is available for use after the space overhead required by
file systems and applicable RAID (redundant array of independent disks) configurations is removed.

Online storage includes all storage providing immediate access for files and data from your HPC systems (of at least
1 teraflop). Storage can be either locally available to specific HPC systems or made available via the network. For example, storage may be available via SAN (storage area network) or NAS (network attached storage) environments.

Shared storage includes the portion of online storage that is available simultaneously to multiple HPC systems (of at least 1 teraflop) via a network making use of SAN, NAS, file system mounting, or similar technologies.

(Mark one “X”.)

a. None                                                   [ ]
b. Less than 1 terabyte                                   [ ]
c. 1 to 5 terabytes                                       [ ]
d. 6 to 10 terabytes                                      [ ]
e. 11 to 25 terabytes                                     [ ]
f. 26 to 50 terabytes                                     [ ]
g. 51 to 100 terabytes                                    [ ]
h. 101 to 250 terabytes                                   [ ]
i. 251 to 500 terabytes                                   [ ]
j. 501 to 1,000 terabytes                                 [ ]
k. 1,001 or more terabytes (Please specify.)              [ ]

   ________________________________________________



Question 21: Archival storage for centrally administered HPC of 1 teraflop or faster

21. At the end of your FY 2011, what was the total archival storage available specifically for centrally administered HPC of 1 teraflop or faster? Do not include backup storage.

Archival storage can be either on-line or off-line. It is typically long-term storage for files and data and does not support immediate access from your HPC resources.

(Mark one “X”.)

a. None                                                   [ ]
b. 100 terabytes or less                                  [ ]
c. 101 to 250 terabytes                                   [ ]
d. 251 to 500 terabytes                                   [ ]
e. 501 to 750 terabytes                                   [ ]
f. 751 to 1,000 terabytes                                 [ ]
g. 1,001 to 5,000 terabytes                               [ ]
h. 5,001 to 10,000 terabytes                              [ ]
i. 10,001 or more terabytes (Please specify.)             [ ]

   ________________________________________________




Question 22: Comments on HPC

22. Please add any comments that you wish to make on your institution’s HPC below.

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________

____________________________________________________________________________________





Thank you. This is the end of Part 2. Please submit this part of the
survey according to the arrangements you made with your institutional coordinator (named on the label on the front cover of the survey questionnaire).



