Gerstein Lab Computing in HPC
Current Hardware
Compute
Grace
33 nodes, 672 cores purchased in 2015
Farnam
2 nodes, 64 cores purchased in 2012 (will be shut down in 2019)
11 nodes, 308 cores purchased in 2018
1 large memory node (1.5TB RAM), 40 cores purchased in 2018
3 GPU nodes (2x NVIDIA K80), 60 cores purchased in 2016
2 GPU nodes (2x NVIDIA P100), 56 cores purchased in 2018
1 GPU node (4x NVIDIA TITAN V), 8 cores purchased in 2018
Total: 20 nodes, 536 cores
- 32 nodes were shut down in September 2018
OpenStack
1 director node, 8 cores
3 controller nodes, 24 cores
3 ceph nodes, 24 cores
5 compute nodes, 40 cores
Storage
Loomis (mounted on grace)
3 TB default allocation
130 TB purchased in 2014
170 TB purchased in 2015
100 TB purchased in 2016
Total: 403 TB, 93% used (27 TB free)
- 30 TB loan from HPC (ending Jan 2019)
Farnam (mounted on both farnam and grace)
4 TB default allocation
90 TB purchased in 2013 (will be retired in July 2019)
276 TB purchased in 2016
757 TB purchased in 2018
Total: 1127 TB, 83% used (193 TB free)
- Due to storage capacity limits, the actual capacity is 1017 TB plus a 90 TB loan from HPC; the loan will be withdrawn in July 2019.
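The totals above are just the sums of the listed allocations, with free space following from the quoted utilization. A minimal Python sketch recomputing both pools (allocation figures are copied from the lists above; the page's free-space numbers of 27 TB and 193 TB differ slightly from the recomputed values, presumably because the utilization percentages are rounded):

 # Recompute the Loomis and Farnam storage totals listed above.
 # Allocation figures (TB) are from the lists; utilization
 # percentages are as quoted on this page.
 loomis_tb = [3, 130, 170, 100]   # default + 2014 + 2015 + 2016 purchases
 farnam_tb = [4, 90, 276, 757]    # default + 2013 + 2016 + 2018 purchases
 
 def summarize(name, allocations_tb, pct_used):
     total = sum(allocations_tb)
     free = total * (100 - pct_used) / 100
     print(f"{name}: {total} TB total, ~{free:.0f} TB free at {pct_used}% used")
 
 summarize("Loomis", loomis_tb, 93)   # -> Loomis: 403 TB total, ~28 TB free
 summarize("Farnam", farnam_tb, 83)   # -> Farnam: 1127 TB total, ~192 TB free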
OpenStack
Ceph nodes, 163 TB
Compute nodes, 2 TB
Director node, 2.2 TB
Controller node, 2 TB
70 TB with a 10 Gb connection to farnam
Actual CPU Usage
Grace
Grace Shared 258,948 h (equivalent to ~120 cores at 100% utilization)
Grace Dedicated 371,982 h (equivalent to ~175 cores at 100% utilization)
Grace Scavenge 362,890 h (equivalent to ~165 cores at 100% utilization)
Grace Total 993,820 h (~462/672 cores)
Farnam
Farnam Shared 108,789 h (equivalent to ~50 cores at 100% utilization)
Farnam Dedicated 262,642 h (equivalent to ~120 cores at 100% utilization)
Farnam Dedicated - GPU 13,250 h
Farnam Scavenge 29,527 h (equivalent to ~13 cores at 100% utilization)
Farnam Total 414,208 h (~183/536 cores)
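The "equivalent cores" figures above follow from dividing CPU-hours by the hours in the reporting window. Dividing the quoted pairs (e.g. 258,948 h / ~120 cores ≈ 2,160 h) suggests a window of roughly 90 days; that window is an inference from the numbers, not something stated on this page. A minimal Python sketch of the conversion, which lands within a few cores of the figures quoted above:

 # Convert the CPU-hour totals above into equivalent cores at 100%
 # utilization. The ~90-day reporting window is an assumption
 # inferred from the quoted core equivalents, not stated on this page.
 PERIOD_HOURS = 90 * 24
 
 usage_hours = {
     "Grace Shared":     258_948,
     "Grace Dedicated":  371_982,
     "Grace Scavenge":   362_890,
     "Farnam Shared":    108_789,
     "Farnam Dedicated": 262_642,
     "Farnam Scavenge":   29_527,
 }
 
 for queue, hours in usage_hours.items():
     cores = hours / PERIOD_HOURS
     print(f"{queue}: {hours:,} h ~ {cores:.0f} cores at 100% utilization")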