schumann.coastal.udel.edu

Vendor:  TeamHPC #79116
UD Property Tag:  143989

Node                     Operating System               Architecture                               RAM
head node                CentOS release 4.3 (Final)     1 x AMD Opteron 275 (2 cores) @ 2200 MHz   2000 MB
storage subsystem node   EonStor Infortrend Appliance   1 x IBM PowerPC 750FX @ 800 MHz            512 MB
(15) compute nodes       CentOS release 4.3 (Final)     2 x AMD Opteron 275 (2 cores) @ 2200 MHz   4000 MB

Inventory:

Item                                  Category
Gaussian 2003 C.02                    End-User Application
Matlab R2006a                         End-User Application
G77 Fortran 3.4.5                     Software Development
GNU C/C++/ObjC 3.4.5                  Software Development
Intel C++ 9.0                         Software Development
Intel Fortran 9.0                     Software Development
Portland Group Compiler Suite 6.1.2   Software Development
AMD Core Math Library 3.1.0           Code Library
Intel Math Kernel Library 8.0.1       Code Library
GridEngine 6.0                        System Software
Cluster status web pages.


The schumann.coastal cluster has a massive 10 TB fibre channel RAID5 disk array attached to it. The array is partitioned into two filesystems that are then mounted on the head node of the cluster:

  1. /home: A 2.5 TB volume used for user home directories.
  2. /monolith: A 7.5 TB volume for a large data store required by the owner.
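For a sense of how these appear on the head node, here is a minimal sketch of the corresponding /etc/fstab entries (the device names and filesystem type are assumptions; the array may present its LUNs differently):

# Hypothetical /etc/fstab entries on the head node
/dev/sdb1  /home      ext3  defaults  0 2
/dev/sdc1  /monolith  ext3  defaults  0 2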

The /monolith filesystem is visible only on the head node; none of the compute nodes have access to it. The /home filesystem, however, is NFS-shared to the compute nodes across the Ammasso RDMA Ethernet interconnect. This fabric runs at gigabit speed, and its RDMA support offloads data movement from the host processors to the adapters.
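Concretely, the arrangement is ordinary NFS riding on the RDMA-capable fabric. A minimal sketch, assuming the head node exports /home to the compute nodes over its Ammasso interface (the hostname, network address, and mount options below are assumptions, not the cluster's actual configuration):

# On the head node, a hypothetical /etc/exports entry:
/home  10.0.0.0/24(rw,sync)

# On each compute node, a hypothetical /etc/fstab entry:
head-ammasso:/home  /home  nfs  rw,hard,intr  0 0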

This may seem odd at first glance: typically one would want to reserve the RDMA network for MPI-heavy traffic. At the moment, though, this cluster will mostly run Gaussian jobs constrained to single nodes, so the primary bottleneck will be reading and writing checkpoint files in the users' home directories. Sharing /home over the Ammasso interfaces makes sense in this situation.

The cluster has Gaussian '03 revision C.02 installed on it. Gaussian jobs should be submitted to the compute nodes by means of gqueue, which (behind the scenes) creates GridEngine queue scripts and submits them for you, handling all of the sticky details:

% ls
bin  H2O.com
% gqueue --nproc=4 H2O.com
Your job 5 ("H2O.gqs_XX2eKh6z") has been submitted.
% ls
bin  H2O.chk  H2O.com  H2O.log
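
For reference, here is a rough sketch of the kind of script gqueue generates and submits on your behalf. This is a hand-written approximation, not gqueue's actual output; the parallel environment name and the Gaussian install path are assumptions:

#!/bin/sh
#$ -N H2O
#$ -cwd
#$ -j y
#$ -pe threads 4               # slot count matching --nproc=4; the PE name is an assumption
# Source the Gaussian 03 environment; the install path is an assumption
. /usr/local/g03/bsd/g03.profile
g03 < H2O.com > H2O.log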

When selecting the number of processors for a job, keep in mind that as things are currently configured, Gaussian will only run within a single node, which means anywhere from 1 to 4 processors per job.
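Within a node, Gaussian's parallelism is shared-memory, requested with a Link 0 directive at the top of the input file. A minimal H2O.com illustrating this (the route section and geometry are just examples, and whether gqueue adds the processor directive for you is not documented here):

%Chk=H2O.chk
%NProcShared=4
#P HF/6-31G(d) Opt

Water geometry optimization

0 1
O
H 1 0.96
H 1 0.96 2 104.5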
