# Slurm partitions

## Partitions
The current Grex system, which includes contributed nodes, large-memory nodes and contributed GPU nodes, is heterogeneous (and becoming more so). With SLURM as a scheduler, this requires partitioning: a “partition” is a set of compute nodes, grouped by some characteristic, usually by the kind of hardware the nodes have, and sometimes by who “owns” the hardware as well.

There is no fully automatic selection of partitions, other than the default skylake used for most users' short jobs. For members of contributors' groups, the default partition is their contributed nodes. Thus, in many cases users have to specify the partition manually when submitting their jobs!
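For example, a minimal batch script that names the partition explicitly might look like the following sketch (the resource values are illustrative, not recommendations):

```bash
#!/bin/bash
#SBATCH --partition=skylake    # select the partition explicitly
#SBATCH --ntasks=4             # illustrative resource requests
#SBATCH --mem-per-cpu=2000M
#SBATCH --time=0-3:00:00

echo "Running in partition: $SLURM_JOB_PARTITION"
```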

On the special partition test, oversubscription is enabled in SLURM, to facilitate better turnaround of interactive jobs.
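As a sketch, a short interactive session on test could be requested with salloc like this (the sizes and time are illustrative):

```bash
# Request a small interactive allocation on the oversubscribed test partition
salloc --partition=test --ntasks=1 --mem=4000M --time=0-1:00:00
```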

A job cannot run on several partitions at the same time, but it is possible to specify more than one partition, as in --partition=skylake,largemem, so that the scheduler directs the job to whichever of the listed partitions becomes available first.
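In a job script, such a multi-partition request is a single directive (the rest of the script is omitted here):

```bash
#SBATCH --partition=skylake,largemem   # the job starts on whichever partition frees up first
```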

Jobs will be rejected by the SLURM scheduler if the partition's hardware and the requested resources do not match (that is, asking for GPUs on the compute, largemem or skylake partitions is not possible). So, in some cases, explicitly adding the --partition= flag to a SLURM job submission is needed.
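For instance, a job needing more memory per node than skylake's 96 GB has to target largemem explicitly; a sketch, with an illustrative memory request and a hypothetical executable name:

```bash
#!/bin/bash
#SBATCH --partition=largemem   # skylake nodes (96 GB) cannot satisfy this request
#SBATCH --ntasks=1
#SBATCH --mem=300G             # illustrative large-memory request
#SBATCH --time=0-12:00:00

./large_memory_app             # hypothetical executable
```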

Jobs submitted to stamps-b, gpu or other GPU-containing partitions have to use GPUs (requested with a corresponding TRES flag like --gpus=); otherwise they will be rejected. This is to prevent bogging down the expensive GPU nodes with CPU-only jobs!
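As an illustrative sketch, a minimal script for the gpu partition that satisfies this requirement (the GPU count and other resources are placeholders):

```bash
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gpus=1               # a GPU request is mandatory on GPU partitions
#SBATCH --cpus-per-task=8
#SBATCH --mem=32000M
#SBATCH --time=0-6:00:00

nvidia-smi                     # illustrative payload: show the allocated GPU
```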

Currently, the following partitions are available on Grex:

### General purpose CPU partitions
| Partition | Nodes | CPUs/Node | CPUs | Mem/Node | Notes |
|-----------|-------|-----------|------|----------|-------|
| skylake   | 42    | 52        | 2184 | 96 GB    | Cascade Lake Refresh |
| largemem  | 12    | 40        | 480  | 384 GB   | Cascade Lake |
| genoa     | 27    | 192       | 5184 | 750 GB   | AMD EPYC 9654 |
| genlm     | 3     | 192       | 576  | 1500 GB  | AMD EPYC 9654 |
| test      | 1     | 18        | 36   | 512 GB   | - |

### General purpose GPU partitions
| Partition | Nodes | GPU type | CPUs/Node | Mem/Node | Notes |
|-----------|-------|----------|-----------|----------|-------|
| gpu       | 2     | 4 x V100/32GB | 32   | 187 GB   | AVX512 |

### Contributed CPU partitions
| Partition | Nodes | CPU type | CPUs/Node | Mem/Node | Notes |
|-----------|-------|----------|-----------|----------|-------|
| mcordcpu ¹ | 5    | AMD EPYC 9634 84-Core | 168 | 1500 GB | - |

### Contributed GPU partitions
| Partition | Nodes | GPU type | CPUs/Node | Mem/Node | Notes |
|-----------|-------|----------|-----------|----------|-------|
| stamps ²  | 3     | 4 x V100/16GB | 32   | 187 GB   | AVX512 |
| livi ³    | 1     | HGX-2 16xGPU V100/32GB | 48 | 1500 GB | NVSwitch server |
| agro ⁴    | 2     | AMD Zen2 | 4         | 250 GB   | AMD |
| mcordgpu ⁵ | 5    | AMD EPYC 9634 | 168  | 1500 GB  | - |

### Preemptible partitions

The following preemptible partitions are set up for general use of the contributed nodes:

| Partition  | Contributed by |
|------------|----------------|
| stamps-b   | Prof. R. Stamps |
| livi-b     | Prof. L. Livi |
| agro-b     | Faculty of Agriculture |
| mcordcpu-b | Prof. M. Cunha Cordeiro |
| mcordgpu-b | Prof. M. Cunha Cordeiro |

The general-purpose partitions (skylake, largemem, genoa, genlm, test and gpu) are accessible to all users. The contributed partitions (stamps, livi, agro, mcordcpu and mcordgpu) are open only to the contributors' groups.

On the contributed partitions, the owners' group has preferential access. However, users from other groups can submit jobs to one of the preemptible partitions (ending in -b) to run on the contributed hardware while it is otherwise unused, on the condition that their jobs can be preempted (that is, killed) should the owners' jobs need the hardware. Preemptible jobs are guaranteed a minimum runtime, currently one hour. The maximum wall time for the preemptible partitions is set per partition (and can be seen in the output of the sinfo command). To get a global overview of all partitions on Grex, run the custom script partition-list from your terminal.
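For example, the per-partition limits can be inspected with standard sinfo format options (the field selection below is just one possibility):

```bash
# Show each partition with its maximum wall time and node count
sinfo --format="%P %l %D"

# Check a single preemptible partition, e.g. stamps-b
sinfo --partition=stamps-b --format="%P %l %N"

# Grex-specific overview of all partitions
partition-list
```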

Note that the owners' partitions and the corresponding preemptible partitions overlap! This means that members of an owners' group should not submit jobs to both their contributed partition and the corresponding preemptible partition; otherwise, their jobs may preempt their own other jobs!


1. mcordcpu: CPU nodes contributed by Prof. Marcos Cordeiro (Department of Agriculture).
2. stamps: GPU nodes contributed by Prof. R. Stamps.
3. livi: GPU node contributed by Prof. L. Livi.
4. agro: GPU nodes contributed by the Faculty of Agriculture.
5. mcordgpu: GPU nodes contributed by Prof. Marcos Cordeiro (Department of Agriculture).