# Partitions
The current Grex system, with its contributed nodes, large-memory nodes and contributed GPU nodes, is heterogeneous (and becoming increasingly so). With SLURM as the scheduler, this requires partitioning: a "partition" is a set of compute nodes grouped by a common characteristic, usually the kind of hardware the nodes have, and sometimes also by who "owns" the hardware.
There is no fully automatic selection of partitions, other than the default skylake partition used for most users' short jobs. For members of contributors' groups, the default partition is their contributed nodes. Thus, in many cases users have to specify the partition manually when submitting their jobs!
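For example, a minimal job script that selects the skylake partition explicitly could look like the sketch below (the job name, resource amounts and program name are placeholders, not recommendations):

```bash
#!/bin/bash
#SBATCH --job-name=cpu-example      # placeholder job name
#SBATCH --partition=skylake         # select the partition explicitly
#SBATCH --ntasks=1                  # illustrative resource request
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=0-03:00:00           # 3 hours of walltime

# load the modules your application needs, then run it
srun ./my_program
```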
On the special test partition, oversubscription is enabled in SLURM to facilitate better turnaround of interactive jobs.
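As a sketch, an interactive session on the test partition could be requested with salloc; the resource values below are purely illustrative:

```bash
# request an interactive allocation on the oversubscribed test partition
salloc --partition=test --ntasks=1 --cpus-per-task=2 --mem=4G --time=1:00:00
```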
A job cannot run on several partitions at the same time, but it is possible to specify more than one partition (for example, --partition=skylake,largemem), so that the scheduler directs the job to the first partition that becomes available.
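For instance, assuming a job script named my_job.sh, the following submission lets the scheduler start the job on whichever of the two partitions has free resources first:

```bash
# the job starts in the first of the listed partitions that can run it
sbatch --partition=skylake,largemem my_job.sh
```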
Jobs will be rejected by the SLURM scheduler if the partition's hardware and the requested resources do not match (that is, asking for GPUs on the compute, largemem or skylake partitions is not possible). So, in some cases, explicitly adding the --partition= flag to the SLURM job submission is needed.
Jobs submitted to stamps-b, gpu or other GPU-containing partitions have to use GPUs (with a corresponding TRES flag such as --gpus=); otherwise they will be rejected. This is to prevent clogging up the expensive GPU nodes with CPU-only jobs!
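A hedged sketch of a single-GPU job on the gpu partition is shown below; the CPU, memory and time values and the program name are placeholders:

```bash
#!/bin/bash
#SBATCH --partition=gpu             # a GPU-containing partition
#SBATCH --gpus=1                    # GPU partitions require a GPU request
#SBATCH --cpus-per-task=8           # illustrative CPU/memory/time request
#SBATCH --mem=32G
#SBATCH --time=0-06:00:00

# load the modules your GPU application needs, then run it on the allocated GPU
srun ./my_gpu_program
```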
Currently, the following partitions are available on Grex:
## General purpose CPU partitions
| Partition | Nodes | CPUs/Node | Total CPUs | Mem/Node | Notes |
|---|---|---|---|---|---|
| skylake | 42 | 52 | 2184 | 96 GB | Cascade Lake Refresh |
| largemem | 12 | 40 | 480 | 384 GB | Cascade Lake |
| genoa | 27 | 192 | 5184 | 750 GB | AMD EPYC 9654 |
| genlm | 3 | 192 | 576 | 1500 GB | AMD EPYC 9654 |
| test | 1 | 18 | 36 | 512 GB | - |
## General purpose GPU partitions
| Partition | Nodes | GPU type | CPUs/Node | Mem/Node | Notes |
|---|---|---|---|---|---|
| gpu | 2 | 4 x V100/32GB | 32 | 187 GB | AVX512 |
## Contributed CPU partitions
| Partition | Nodes | CPU type | CPUs/Node | Mem/Node | Notes |
|---|---|---|---|---|---|
| mcordcpu[^1] | 5 | AMD EPYC 9634 84-Core | 168 | 1500 GB | - |
## Contributed GPU partitions
| Partition | Nodes | GPU type | CPUs/Node | Mem/Node | Notes |
|---|---|---|---|---|---|
| stamps[^2] | 3 | 4 x V100/16GB | 32 | 187 GB | AVX512 |
| livi[^3] | 1 | HGX-2 16xGPU V100/32GB | 48 | 1500 GB | NVSwitch server |
| agro[^4] | 2 | AMD Zen | 24 | 250 GB | AMD |
| mcordgpu[^5] | 5 | AMD EPYC 9634 | 168 | 1500 GB | - |
## Preemptible partitions
The following preemptible partitions are set up for general use of the contributed nodes:
| Partition | Contributed by |
|---|---|
| stamps-b | Prof. R. Stamps |
| livi-b | Prof. L. Livi |
| agro-b | Faculty of Agriculture |
| mcordcpu-b | Prof. M. Cunha Cordeiro |
| mcordgpu-b | Prof. M. Cunha Cordeiro |
The skylake, largemem, test and gpu partitions are generally accessible. The other partitions (stamps, livi, agro, mcordcpu and mcordgpu) are open only to the contributors' groups.
On the contributed partitions, the owners' group has preferential access. However, users from other groups can submit jobs to one of the preemptible partitions (ending with -b) to run on the contributed hardware while it is otherwise unused, on the condition that their jobs can be preempted (that is, killed) should owners' jobs need the hardware. Preemptible jobs are guaranteed a minimum runtime, currently one hour. The maximum walltime is set per preemptible partition (and can be seen in the output of the sinfo command). To get a global overview of all partitions on Grex, run the custom script partition-list from your terminal.
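For example, per-partition time limits and node counts can be inspected with sinfo's output format options; the columns shown below are just one possible selection:

```bash
# show partition name, availability, time limit and node count
sinfo -o "%P %a %l %D"

# Grex-specific helper mentioned above; prints a global overview of all partitions
partition-list
```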
Note that the owners' partitions and the corresponding preemptible partitions overlap! This means that the owners' group should not submit jobs to both a contributed partition and its corresponding preemptible partition; otherwise, their jobs may preempt their own jobs!
[^1]: mcordcpu: CPU nodes contributed by Prof. Marcos Cordeiro (Department of Agriculture).
[^2]: stamps: GPU nodes contributed by Prof. R. Stamps.
[^3]: livi: GPU node contributed by Prof. L. Livi.
[^4]: agro: GPU node contributed by the Faculty of Agriculture.
[^5]: mcordgpu: GPU nodes contributed by Prof. Marcos Cordeiro (Department of Agriculture).