~~NOCACHE~~
====== schumann.coastal.udel.edu ======

<php>
// Pull cluster metadata out of the local cluster database library.
include_once('clusterdb.php');

CDBOpen();

if ( ($clusterID = CDBClusterIDForClusterHost('schumann.coastal')) !== FALSE ) {
  // Show the vendor and UD property tag when they are on record.
  if ( $vendorTag = CDBClusterVendorTagForClusterID($clusterID) ) {
    printf("<b>Vendor:</b>&nbsp;&nbsp;%s<br>\n", $vendorTag);
  }
  if ( $udPropTag = CDBClusterUDPropertyTagForClusterID($clusterID) ) {
    printf("<b>UD Property Tag:</b>&nbsp;&nbsp;%s<br>\n", $udPropTag);
  }
  echo "<br>";

  // List the cluster's nodes, then its hardware inventory with a legend.
  CDBListNodes($clusterID);
  echo "<br>\n<b>Inventory:</b><br>\n<table border=\"0\"><tr valign=\"bottom\"><td>";
  CDBListAssets($clusterID);
  echo "</td><td>";
  CDBAssetsLegend();
  echo "</td></tr></table>\n\n";

  // Link to the cluster's status pages if it exposes a web interface.
  if ( CDBClusterHasWebInterface($clusterID) ) {
    printf("<a href=\"http://schumann.coastal.udel.edu/\">Cluster status</a> web pages.<br>\n");
  }
}
</php>

----

===== Filesystems =====

The schumann.coastal cluster has a massive 10 TB fibre channel RAID5 disk array attached to it. The array is partitioned into two filesystems, both mounted on the head node of the cluster:

  - ''/home'': a 2.5 TB volume used for user home directories.
  - ''/monolith'': a 7.5 TB volume used for a large data store required by the cluster's owner.

The ''/monolith'' filesystem is visible only on the head node; none of the compute nodes have access to it. The ''/home'' filesystem, however, is NFS-shared to the compute nodes across the Ammasso RDMA Ethernet interconnect. This fabric runs at gigabit Ethernet speed, with data streaming offloaded from the host processor to the adapters themselves.

This may seem odd at first glance: typically one would want to reserve the RDMA network for MPI-heavy traffic. At the moment, though, this cluster will probably see heavy use of Gaussian constrained to single nodes, so the primary bottleneck will be the reading and writing of checkpoint files in the users' home directories. Sharing ''/home'' over the Ammasso interfaces makes sense in that situation.
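
A quick way to confirm from a compute node that ''/home'' really is arriving over NFS is to inspect the mount table. The mount options and usage figures below are illustrative, not captured output:

<code>
% mount | grep '/home'
schumann.coastal:/home on /home type nfs (rw,tcp,hard,intr)
% df -h /home
Filesystem              Size  Used Avail Use% Mounted on
schumann.coastal:/home  2.5T  500G  2.0T  20% /home
</code>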

===== Gaussian =====

The cluster has Gaussian '03 revision C.02 installed on it. Gaussian jobs should be submitted to the compute nodes by means of ''gqueue'', which (behind the scenes) creates GridEngine queue scripts and submits them for you, handling all of the sticky details:

<code>
% ls
bin  H2O.com
% gqueue --nproc=4 H2O.com
Your job 5 ("H2O.gqs_XX2eKh6z") has been submitted.
% ls
bin  H2O.chk  H2O.com  H2O.log
</code>
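
For the curious, the script ''gqueue'' generates looks roughly like the sketch below. This is an assumption based on a stock GridEngine and Gaussian setup, not the literal output of ''gqueue''; in particular, the parallel environment name (''threads'') and the scratch-directory handling are hypothetical:

<code>
#!/bin/sh
# Hypothetical approximation of a gqueue-generated GridEngine script.
#$ -N H2O.gqs_XX2eKh6z
#$ -cwd
#$ -j y
#$ -pe threads 4

# Give Gaussian a node-local scratch directory, run the input deck,
# then clean up the scratch space.
export GAUSS_SCRDIR=/tmp/$USER/$JOB_ID
mkdir -p $GAUSS_SCRDIR
g03 < H2O.com > H2O.log
rm -rf $GAUSS_SCRDIR
</code>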

When selecting the number of processors for a job, keep in mind that, as things are currently configured, Gaussian will only run within a single node, so each job can use anywhere from 1 to 4 processors.
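
The processor count you request should also agree with the Gaussian input itself. The Link 0 lines below (''%NProcShared'', ''%Chk'') are standard Gaussian syntax; whether ''gqueue'' inserts them for you is not covered here, and the route section and geometry are purely illustrative:

<code>
%NProcShared=4
%Chk=H2O.chk
# B3LYP/6-31G(d) Opt

Water geometry optimization

0 1
O   0.000000   0.000000   0.119262
H   0.000000   0.763239  -0.477047
H   0.000000  -0.763239  -0.477047
</code>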
  