Slurm Partitions

The Slurm scheduler uses partitions to allocate resources to jobs. The FRCE cluster defines several partitions; the major ones are summarized below.

Partition Name      Wall Time Limit     Max Cores Per User    Other Limits
short               30 minutes          170
norm                5 days              575
gpu                 unlimited           350                   24 P100 GPUs, 48 V100 GPUs, 4 A100 GPUs
unlimited           unlimited           432
largemem            unlimited           192
nci-dragen          unlimited           64                    1 Illumina Dragen V3 Server

Two other partitions appear in the sinfo output: the dragen partition contains several servers reserved for the exclusive use of the CCR Sequencing Facility, while csbdevel provides a priority channel for a small group of programmers who make frequent changes to their code.
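
The current partition limits can also be confirmed directly from a login node with the standard Slurm utilities; the sinfo format string below is only one possible choice of columns:

sinfo -o "%P %l %D %G"         # partition, wall time limit, node count, generic resources (GPUs)
scontrol show partition gpu    # full configuration and limits for a single partition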

A submit script that requests a particular partition might read:

#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --ntasks=8
#SBATCH --time=00:05:00
#SBATCH --partition=short
printf "You are now executing on node %s using {} CPU cores.\n" ${SLURMD_NODENAME} ${SLURM_NPROCS}

Please note that the first line of the job script, #!/bin/bash, is required. Without it, Slurm will refuse to accept the script for submission.
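
Assuming the script above has been saved as myjob.sh (a placeholder name), it is submitted and monitored with the usual Slurm commands:

sbatch myjob.sh    # prints "Submitted batch job <jobid>" on success
squeue -u $USER    # list your pending and running jobs
scancel <jobid>    # cancel the job, replacing <jobid> with the number reported by sbatch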

To use a GPU on the cluster, change the script headers to:

#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --ntasks=8
#SBATCH --time=00:05:00
#SBATCH --partition=gpu
#SBATCH --gres=gpu:p100:1
printf "%s GPU(s) have been allocated with the identification string\n\t%s\n" \
    ${SLURM_GPUS_ON_NODE} "$(nvidia-smi -L)"
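
The --gres string selects the GPU model and count. Based on the hardware listed in the partition table, requests for the other models would presumably look like the directives below; the v100 and a100 type names are assumptions and should be verified with sinfo -o "%P %G" before use.

#SBATCH --gres=gpu:v100:2    # two V100 GPUs (type name assumed)
#SBATCH --gres=gpu:a100:1    # one A100 GPU (type name assumed)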