  * System memory (RAM)
  * Coprocessors (nVidia GPUs)
  * Wall time((wall time = elapsed real time))

Though default values exist for each, you are encouraged to always make explicit the levels required by a job (see the sample batch script after the following list). In general, requesting more resources than your job can effectively (or efficiently) use:
  - can delay the start of your job (e.g. it takes longer for 10 nodes to become free simultaneously than for a single node)
  - may decrease your workgroup's relative job priority versus other workgroups (further delaying future jobs)
==== Queues and partitions ====

With other job schedulers, a //queue// is an ordered list of work to be performed. There are one or more queues, and jobs are submitted to specific queue(s). Each queue has a set of hardware resources associated with it on which the queue can execute jobs.

Slurm starts from the other end and uses a //partition// to represent a set of hardware resources on which jobs can execute. A single queue contains all jobs, and the partition selected for each job constrains which hardware resources can be used.
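In practice this means you select a partition when submitting a job rather than choosing among queues. A minimal sketch, assuming a partition named ''standard'' and a job script named ''job_script.sh'' (both hypothetical names):

<code bash>
# List the partitions defined on the cluster and the nodes behind them:
sinfo

# Submit a job to a specific partition ("standard" is a hypothetical name):
sbatch --partition=standard job_script.sh

# Show your pending and running jobs, including the partition each one uses:
squeue -u $USER
</code>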