Resources

The design of Caviness is similar to that of previous community clusters.

Compute

Designed as a multi-generational system, Caviness will over time host a variety of nodes, differing not only in memory size and the presence of coprocessors but also in processor microarchitecture.

Generation 1

The baseline node specification comprises:

Component      Specification
CPU            (2) Intel Xeon E5-2695 v4
Cores          18 per CPU, 36 per node
Clock rate     2.10 GHz
CPU cache      32 KiB L1 data and instruction caches per core; 256 KiB L2 cache per core; 45 MiB L3 cache
Local storage  910 GB /tmp partition on a 960 GB SSD
Network        (1) 1 Gbps ethernet port; (1) 100 Gbps Intel Omni-path port

Three RAM sizes are present:

Component  Qty  Specification
RAM         64  128 GiB (8 x 16 GiB) DDR4, 2400 MHz
RAM         55  256 GiB (8 x 32 GiB) DDR4, 2400 MHz
RAM          7  512 GiB (16 x 32 GiB) DDR4, 2133 MHz

Some nodes include GPU coprocessors:

Component    Qty  Specification
Coprocessor   10  (2) nVidia P100 GPGPU, 12 GiB, PCIe

Additionally, two nodes are outfitted with extra local storage to facilitate user testing of enhanced local storage media:

Component         Qty  Specification
Enhanced storage    2  6.1 TB /nvme partition, RAID0 across (2) 3.2 TB Micron 9200 NVMe
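
As a quick, hedged illustration (output omitted, since it varies by node), the size of a node's local scratch areas can be confirmed with the standard df command once logged into or running a job on that node; /nvme exists only on the two enhanced-storage nodes:

$ df -h /tmp
$ df -h /nvme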

Networking

There are two private ethernet networks in the cluster. A dedicated 1 Gbps network carries management traffic (remote power control of nodes, console access, etc.). A dedicated 10 Gbps network carries all data traffic (NFS, job scheduling, SSH access) to the nodes.

A 100 Gbps Intel Omni-path network also connects all nodes. The OPA network carries Lustre filesystem traffic as well as most MPI internode communications. The network uses a fat tree topology employing six spine switches. Each leaf switch (two per rack) features 12 leaf-to-spine uplink ports and 36 host ports (3:1 oversubscription).

Storage

Each rack of compute equipment added to Caviness is designed to add storage capacity to the cluster.

The addition of OSTs/OSSs increases the aggregate capacity and bandwidth of the /lustre/scratch filesystem. Individual NFS servers provide distinct capacity and bandwidth but do not aggregate with existing capacity or bandwidth — in short, they're just “more space.”

A discussion of each distinct kind of storage available to users, along with general usage scenarios for each, is found below.

See the software management strategies portion of this site for help setting up a personal or workgroup software build/install directory.

Home directories

Each user is granted a home directory with a 20 GiB limit (quota). Typically users will build software in their home directory. The relatively low quota often means that users cannot (and should not) submit computational jobs from their home directories. Home directories are mounted at the path /home/<uid_number>, where <uid_number> is a user's Unix UID number (an integer value; use the id command to determine it).

The Bash shell allows you to reference your home directory as ~/ in most commands. For example, ls -al ~/ displays a long listing of all the hidden and normally visible files and directories inside your home directory.
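
For example, assuming a hypothetical UID number of 1001, the following commands show that ~/ and /home/<uid_number> name the same directory:

$ id -u
1001
$ echo ~
/home/1001
$ ls -al ~/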

The home directory is the location of a few important files and directories:

File            Description
.bashrc         Commands executed by any new Bash shell spawned for the user
.bash_profile   Commands executed specifically by a new login shell for the user
.bash_history   Saved sequence of commands the user has interactively entered at the shell prompt
.bash_udit      Configuration file controlling UD-specific behaviors of the Bash shell
.valet/         Directory containing a user's personal VALET package definitions; does not exist by default and should be created by the user if wanted (automatically added to VALET_PATH by UD Bash login scripts; see the example below)
.zfs/snapshot/  Directory containing historical snapshots of the home directory

The use of ZFS snapshots as backup copies of a home directory is discussed elsewhere. In general, the editing of .bashrc and .bash_profile is discouraged, especially for the alteration of the PATH and LD_LIBRARY_PATH environment variables.
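
As a brief sketch of working with these entries (the snapshot name and file name below are placeholders, not actual values), the personal VALET directory can be created and the read-only snapshot directory browsed with ordinary shell commands:

$ mkdir -p ~/.valet
$ ls ~/.zfs/snapshot/
$ cp ~/.zfs/snapshot/<snapshot-name>/<file> ~/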

Workgroup directories

Each workgroup that purchases capacity in the cluster receives a workgroup directory with a quota in proportion to its level of investment in the cluster: the more compute capacity purchased, the more space granted. Workgroup directories are mounted at the path /work/<workgroup-id> on all nodes in the cluster.

Once you've started a shell in a workgroup using the workgroup -g <workgroup-id> command, the WORKDIR environment variable contains the path to the workgroup directory. This allows you to reference it in commands like ls -l ${WORKDIR}/users.

Adding the -c flag to the workgroup command automatically starts the workgroup shell in $WORKDIR.
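
For illustration, using it_css as an example workgroup identifier, a typical session might look like the following:

$ workgroup -g it_css
$ echo ${WORKDIR}
/work/it_css
$ ls -l ${WORKDIR}/users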

The typical layout of a workgroup directory includes:

Subdirectory                         Description
/work/<workgroup-id>/.zfs/snapshot/  Always present; contains historical snapshots of the workgroup directory
/work/<workgroup-id>/sw/             A directory to hold software used by multiple members of the workgroup
/work/<workgroup-id>/sw/valet/       Directory for VALET package definitions (automatically added to VALET_PATH by UD Bash login scripts)
/work/<workgroup-id>/users/          Directory to contain per-user storage areas rather than having them exist directly under /work/<workgroup-id>
/work/<workgroup-id>/projects/       Directory to contain per-project storage areas rather than having them exist directly under /work/<workgroup-id>

None of these directories are mandatory, but they do tend to make management of a workgroup's resources easier. In particular, the fact that /work/<workgroup-id>/sw/valet will be automatically added to the VALET search path means workgroup users do not need to alter VALET_PATH manually, in their .bashrc/.bash_profile, or in their job scripts.
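
As a minimal sketch (assuming a workgroup shell has already been started so that WORKDIR is set), the layout in the table above could be created with:

$ mkdir -p ${WORKDIR}/sw/valet ${WORKDIR}/users ${WORKDIR}/projects
$ mkdir -p ${WORKDIR}/users/$(id -un)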

The use of ZFS snapshots as backup copies of a workgroup directory is discussed elsewhere.

Lustre scratch

The /lustre/scratch file system is a high-speed parallel file system accessible from all nodes in the cluster. Users/groups are free to create their own top-level directories under /lustre/scratch and are responsible for managing (and, when it is no longer needed, removing) the data they place there.
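
As a hedged sketch of one possible convention (the directory name and permissions shown are illustrative, not a site requirement), a workgroup-owned top-level scratch directory might be created from a workgroup shell like this:

$ workgroup -g <workgroup-id>
$ mkdir /lustre/scratch/<workgroup-id>
$ chmod 2770 /lustre/scratch/<workgroup-id>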

The total capacity can be checked using the lfs df command:

$ lfs df
UUID                   1K-blocks        Used   Available Use% Mounted on
scratch-MDT0000_UUID  2989410560   977815040  2011593472  33% /lustre/scratch[MDT:0]
scratch-OST0000_UUID 51093819392 12543377408 38549933056  25% /lustre/scratch[OST:0]
scratch-OST0001_UUID 51093630976 11691081728 39402001408  23% /lustre/scratch[OST:1]
scratch-OST0002_UUID 51093489664 12198661120 38894347264  24% /lustre/scratch[OST:2]
scratch-OST0003_UUID 51093485568 12000202752 39092807680  23% /lustre/scratch[OST:3]

filesystem_summary:  204374425600 48433323008 155939089408  24% /lustre/scratch

Note that this command displays both aggregate capacity and the capacity of each OST (Object Storage Target) and MDT (MetaData Target) component of the file system. Users can determine how much Lustre scratch capacity they currently occupy:

$ lfs quota -u $(id -u) /lustre/scratch
Disk quotas for usr 1001 (uid 1001):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
/lustre/scratch  313298       0       0       -      78       0       0       -

Likewise, capacity associated explicitly with a workgroup can be checked:

$ lfs quota -g $(id -g) /lustre/scratch
Disk quotas for grp 1001 (gid 1001):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
/lustre/scratch  267784       0       0       -      16       0       0       -

UD IT staff reserve the right to perform emergency removal of data from /lustre/scratch if occupied capacity reaches unsafe levels. Periodic automated cleanup policies may become necessary if such levels persist.