Migrating to SLURM from TORQUE
While the HPCC uses the SLURM resource manager, users may be familiar with the TORQUE resource manager from running jobs on other systems.
SLURM handles resource requests slightly differently than TORQUE. TORQUE frames requests in terms of CPUs; for instance, the ppn portion of #PBS -l nodes=<n>:ppn=<count> specifies the desired number of CPUs (processors) per node. SLURM, on the other hand, frames requests in terms of tasks; e.g., #SBATCH --ntasks-per-node=<count>. A task may then be assigned multiple CPUs for the purposes of threading: #SBATCH --cpus-per-task=<count>.
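For example, here is a minimal sketch of how a threaded job that requested one node with eight processors in TORQUE might be expressed in SLURM as a single task with eight CPUs (the walltime and core counts below are illustrative, not recommendations):

    # TORQUE: one node with 8 processors for a multithreaded program
    #PBS -l nodes=1:ppn=8
    #PBS -l walltime=04:00:00

    # SLURM equivalent: one task given 8 CPUs for threading
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --cpus-per-task=8
    #SBATCH --time=04:00:00

An MPI-style job would instead leave --cpus-per-task at 1 and raise --ntasks-per-node to the desired number of processes.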
We recommend reviewing our pages on writing and submitting job scripts, the list of SLURM resource specifications, and our example SLURM scripts to help with the transition to SLURM. Users may also benefit from this side-by-side comparison of TORQUE and SLURM options, environment variables, and commands.