# Slurm partitions

## Partitions


The current Grex system, which includes contributed CPU nodes, large-memory nodes and contributed GPU nodes, is heterogeneous (and getting more so). With SLURM as a scheduler, this requires partitioning: a “partition” is a set of compute nodes grouped by a characteristic, usually by the kind of hardware the nodes have, and sometimes by who “owns” the hardware as well.

There is no fully automatic selection of partitions, other than the default skylake for most users and compute for short jobs. For members of contributors’ groups, the default partition is their contributed nodes. Thus, in many cases users have to specify the partition manually when submitting their jobs!
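
For example, a minimal job script that selects a partition explicitly could look like the sketch below (the resource values and the script body are illustrative only, not a recommendation for any particular workload):

```bash
#!/bin/bash
#SBATCH --partition=skylake       # select the partition explicitly
#SBATCH --ntasks=4                # illustrative resource request
#SBATCH --mem-per-cpu=2000M
#SBATCH --time=0-01:00:00

# The actual computation goes here; this line just reports where the job landed.
echo "Running on partition $SLURM_JOB_PARTITION, node(s) $SLURM_JOB_NODELIST"
```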

Currently, the following partitions are available on Grex:

### General purpose CPU partitions


| Partition | Nodes | CPUs/Node | CPUs | Mem/Node | Notes |
|-----------|-------|-----------|------|----------|-------|
| skylake   | 42    | 52        | 2184 | 96 GB    | CascadeLake Refresh |
| largemem  | 12    | 40        | 480  | 384 GB   | CascadeLake |
| compute   | 316   | 12        | 3792 | 48 GB    | SSE4.2 |
| compute   | 4     | 20        | 80   | 32 GB    | AVX |
| test      | 1     | 18        | 36   | 512 GB   | - |
| -         | 374   | -         | 6536 | -        | - |

### General purpose GPU partitions


| Partition | Nodes | GPU type | CPUs/Node | Mem/Node | Notes |
|-----------|-------|----------|-----------|----------|-------|
| gpu       | 2     | 4 x V100/32 GB | 32   | 187 GB   | AVX512 |

### Contributed CPU partitions


| Partition | Nodes | CPU type | CPUs/Node | Mem/Node | Notes |
|-----------|-------|----------|-----------|----------|-------|
| mcordcpu[^1] | 5  | AMD EPYC 9634 84-Core | 168 | 1500 GB | - |

### Contributed GPU partitions


| Partition | Nodes | GPU type | CPUs/Node | Mem/Node | Notes |
|-----------|-------|----------|-----------|----------|-------|
| stamps[^2]   | 3   | 4 x V100/16 GB | 32 | 187 GB  | AVX512 |
| livi[^3]     | 1   | HGX-2 16xGPU V100/32 GB | 48 | 1500 GB | NVSwitch server |
| agro[^4]     | 2   | AMD Zen2 | 4   | 250 GB  | AMD |
| mcordgpu[^5] | 5   | AMD EPYC 9634 | 168 | 1500 GB | - |

### Preemptible partitions


The following preemptible partitions are set up for general use of the contributed nodes:

| Partition  | Contributed by |
|------------|----------------|
| stamps-b   | Prof. R. Stamps |
| livi-b     | Prof. L. Livi |
| agro-b     | Faculty of Agriculture |
| mcordgpu-b | Faculty of Agriculture |

The first five partitions (skylake, compute, largemem, test and gpu) are generally accessible. The contributed partitions (mcordcpu, stamps, livi, agro and mcordgpu) are open only to the contributors’ groups.

On the contributed partitions, the owners’ group has preferential access. However, users belonging to other groups can submit jobs to one of the preemptible partitions (ending with -b) to run on the contributed hardware as long as it is unused, on the condition that their jobs can be preempted (that is, killed) should the owners’ jobs need the hardware. Preemptible jobs are guaranteed a minimum runtime, currently 1 hour. The maximum walltime for each preemptible partition is set per partition (and can be seen in the output of the `sinfo` command). To get a global overview of all partitions on Grex, run the custom script `partition-list` from your terminal.
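
For instance, the limits and current state of a preemptible partition can be checked with standard `sinfo` options before submitting to it; a sketch follows (the job script name `my_gpu_job.sh` is a placeholder, and the resource values are illustrative):

```bash
# Time limit, availability, node count and node states of one preemptible partition
sinfo --partition=stamps-b --format="%P %l %a %D %t"

# Grex-specific overview of all partitions (custom script mentioned above)
partition-list

# Submit to the preemptible partition; the job may be preempted if owners need the nodes
sbatch --partition=stamps-b --gres=gpu:1 --time=0-03:00:00 my_gpu_job.sh
```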

On the special test partition, oversubscription is enabled in SLURM to facilitate better turnaround of interactive jobs.
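
An interactive session on the test partition can be requested with `salloc`; a minimal sketch, with illustrative resource values:

```bash
# Request a short interactive session on the oversubscribed test partition
salloc --partition=test --ntasks=1 --mem=4000M --time=0-00:30:00
```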

Jobs cannot run on several partitions at the same time, but it is possible to specify more than one partition, as in `--partition=compute,skylake`, so that the scheduler directs the job to the first partition that becomes available.
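
In a job script, the corresponding directive would look like this (only the relevant line is shown):

```bash
# The scheduler starts the job on whichever listed partition becomes available first
#SBATCH --partition=compute,skylake
```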

Jobs will be rejected by the SLURM scheduler if the partition’s hardware and the requested resources do not match (that is, asking for GPUs on the compute, largemem or skylake partitions is not possible). So, in some cases, explicitly adding the `--partition=` flag to the SLURM job submission is needed.

Jobs that run on the stamps-b or gpu partitions have to request GPUs, otherwise they will be rejected; this is to prevent clogging up the precious GPU nodes with CPU-only jobs!
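
A GPU job on these partitions must therefore include a GPU request; a minimal sketch (the GPU count, CPU and memory values are illustrative, and the exact GRES syntax used on Grex may differ):

```bash
#!/bin/bash
#SBATCH --partition=gpu           # GPU partition; a GPU request is mandatory here
#SBATCH --gres=gpu:1              # request one GPU (illustrative count and syntax)
#SBATCH --cpus-per-task=8
#SBATCH --mem=32000M
#SBATCH --time=0-06:00:00

nvidia-smi                        # confirm that the allocated GPU is visible to the job
```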


[^1]: mcordcpu: CPU nodes contributed by Prof. Marcos Cordeiro (Department of Agriculture).

[^2]: stamps: GPU nodes contributed by Prof. R. Stamps.

[^3]: livi: GPU node contributed by Prof. L. Livi.

[^4]: agro: GPU nodes contributed by the Faculty of Agriculture.

[^5]: mcordgpu: GPU nodes contributed by Prof. Marcos Cordeiro (Department of Agriculture).