====== Resources ======
The design of Caviness is similar to previous community clusters.
|**CPU**|(2) Intel Xeon E5-2695v4|
|**Cores**|18 per CPU, 36 per node|
|**Clock rate**|2.10 GHz|
|**CPU cache**|32 KiB L1 data and instruction caches per core; 256 KiB L2 cache per core; 45 MiB L3 cache|
|**Coprocessor**|10|(2) nVidia P100 GPGPU, 12 GiB, PCIe|
Additionally, two nodes with local NVMe storage are present to facilitate user testing of enhanced local storage media:
^ ^Qty^Specification^
|**Enhanced storage**|2|6.1 TB ''/nvme'' partition, RAID0 across (2) 3.2 TB Micron 9200 NVMe|
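On those two nodes the partition appears as an ordinary local mount, so its size and current usage can be confirmed directly; a minimal sketch, assuming the ''/nvme'' mount point listed in the table above:

<code>
$ df -h /nvme    # reports size, usage, and free space of the NVMe scratch partition
</code>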
===== Networking =====
There are two private Ethernet networks in the cluster. A dedicated 1 Gbps network carries management traffic (remote power control of nodes, console access, etc.). A dedicated 10 Gbps network carries all data traffic (NFS, job scheduling, SSH access) to the nodes.
A 100 Gbps Intel Omni-Path network also connects all nodes. The OPA network carries Lustre filesystem traffic as well as most MPI internode communications. The network uses a fat tree topology employing six spine switches. Each leaf switch (two per rack) features 12 leaf-to-spine uplink ports and 36 host ports (3:1 oversubscription).
===== Storage =====
The total capacity can be checked using the ''lfs df'' command:
<code>
$ lfs df
UUID                   1K-blocks        Used   Available Use% Mounted on
...
</code>
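The 1 KiB block counts can be hard to read at a glance; ''lfs df'' also accepts the standard ''-h'' flag for human-readable units:

<code>
$ lfs df -h    # same report, with capacities printed in GiB/TiB
</code>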
Note that this command displays both aggregate capacity and the capacity of each OST (Object Storage Target) and MDT (MetaData Target) component of the file system. Users can determine their current occupied Lustre scratch capacity:
<code>
$ lfs quota -u $(id -u) /lustre/scratch
Disk quotas for usr 1001 (uid 1001):
...
</code>
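''lfs quota'' also accepts a username in place of the numeric ID, and its ''-h'' flag prints human-readable units; a minimal sketch, assuming ''$USER'' holds your login name:

<code>
$ lfs quota -h -u $USER /lustre/scratch    # same report, by username, in readable units
</code>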
Likewise, capacity associated explicitly with a workgroup can be checked:
<code>
$ lfs quota -g $(id -g) /lustre/scratch
Disk quotas for grp 1001 (gid 1001):
...
</code>