Gerstein Lab Computing in HPC

General Usage Notes

Getting Access

To request an account, fill out this form: http://research.computing.yale.edu/support/hpc/account-request

Training

https://research.computing.yale.edu/training/

Transfers

https://docs.ycrc.yale.edu/clusters-at-yale/data/transfer/
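
As a rough sketch (the host and paths below are placeholders, not details documented on this page; see the transfer documentation above for the correct transfer hosts), data can be copied to a cluster with rsync over SSH:

$ rsync -avP ./local_data/ <netid>@<transfer-host>:/path/on/cluster/   # placeholder host and destination path

The -a flag preserves permissions and timestamps, and -P keeps partial files and prints progress, so an interrupted transfer can be resumed by re-running the same command.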

Controlling Access by ACL

Create a directory and restrict it to yourself with chmod 700. Then grant read and execute access on 'directory' to user 'netid':

$ setfacl -m u:<netid>:rx <directory>

Use getfacl to examine ACLs:

$ getfacl <directory>
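
Putting the steps together, a minimal sketch of sharing a project directory with one collaborator (the directory name here is only an example):

$ mkdir project_share                      # 'project_share' is an example name
$ chmod 700 project_share                  # owner-only base permissions
$ setfacl -m u:<netid>:rx project_share    # grant one user read + traverse
$ getfacl project_share                    # verify the ACL

The rx entry lets the collaborator enter and list the directory; individual files are still governed by their own permissions or ACLs, and the collaborator also needs execute (traverse) permission on every parent directory in the path. To have newly created files inherit an entry for the collaborator, a default ACL can be added with setfacl -d -m u:<netid>:rx <directory>.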

Current Hardware

Compute

Grace

33 nodes, 672 cores purchased in 2015

Farnam

2 nodes, 64 cores purchased in 2012 (will be shut down in 2019)

11 nodes, 308 cores purchased in 2018

1 large memory node (1.5TB RAM), 40 cores purchased in 2018

3 GPU nodes (2x NVIDIA K80), 60 cores purchased in 2016

2 GPU nodes (2x NVIDIA P100), 56 cores purchased in 2018

1 GPU node (4x NVIDIA TITAN V), 8 cores purchased in 2018

Total: 20 nodes, 536 cores

  • 32 nodes were shut down in September 2018

OpenStack

1 director node, 8 cores

3 controller nodes, 24 cores

3 ceph nodes, 24 cores

5 compute nodes, 40 cores

Storage

Loomis (mounted on Grace)

3 TB default allocation

130 TB purchased in 2014

170 TB purchased in 2015

100 TB purchased in 2016

Total: 403 TB, 93% used (27 TB free)

  • 30 TB loan from HPC (ending Jan 2019)

Farnam (mounted on both Farnam and Grace)

4 TB default allocation

90 TB purchased in 2013 (to be retired in July 2019)

276 TB purchased in 2016

757 TB purchased in 2018

Total: 1127 TB, 83% used (193 TB free)

  • Due to storage capacity limits, the actual allocation is 1017 TB plus a 90 TB loan from HPC; the loan will be withdrawn in July 2019.

OpenStack

Ceph nodes, 163 TB

Compute nodes, 2 TB

Director node, 2.2 TB

Controller node, 2 TB

70 TB with a 10 Gb connection to Farnam

Actual CPU Usage

Grace

Grace Shared 258,948 h (equivalent to ~120 cores at 100% utilization)

Grace Dedicated 371,982 h (equivalent to ~175 cores at 100% utilization)

Grace Scavenge 362,890 h (equivalent to ~165 cores at 100% utilization)

Grace Total 993,820 h (~462/672 cores)

Farnam

Farnam Shared 108,789 h (equivalent to ~50 cores at 100% utilization)

Farnam Dedicated 262,642 h (equivalent to ~120 cores at 100% utilization)

Farnam Dedicated - GPU 13,250 h

Farnam Scavenge 29,527 h (equivalent to ~13 cores at 100% utilization)

Farnam Total 414,208 h (~183/536 cores)
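
The "equivalent cores" figures above are core-hours divided by the length of the reporting window. The window is not stated on this page; the figures are consistent with a window of roughly 90 days (about 2,160 hours), which is an inference rather than something documented here. For example, for Grace Shared:

$ awk 'BEGIN { ch = 258948; win = 90 * 24; printf "%.0f\n", ch / win }'   # 90-day window is an assumption
120

The same division approximately reproduces the other per-partition figures.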
