Specifications of Job Submission

The following compares job-submission options between PBS (#PBS) and SLURM (#SBATCH) directives. It is intended only as an aid for the transition between the two systems.

Important differences between SLURM and PBS

Please be careful with the --ntasks= (-n) and --cpus-per-task= (-c) specifications in SLURM, since they have no PBS counterparts (and SLURM has no "CPUs per node" or ppn specification). These two requests describe the software's parallel-computing model. The number of tasks (-n) is the number of parallel processes in distributed memory (as in the MPI model). The number of CPUs per task (-c) is the number of threads in shared memory (as in the OpenMP model). To specify how many distinct nodes (hardware) to use, add --nodes= (-N) to the job script or to the sbatch command line.
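The distinction above can be sketched in a hybrid MPI/OpenMP job script; the program name and resource values here are illustrative, not from this documentation:

```shell
#!/bin/bash
# Sketch of a hybrid MPI/OpenMP request (my_mpi_app and the numbers are
# placeholders, not part of this documentation).
#SBATCH --nodes=2            # -N: two physical nodes (hardware)
#SBATCH --ntasks=4           # -n: four MPI processes (distributed memory)
#SBATCH --cpus-per-task=8    # -c: eight OpenMP threads per process (shared memory)

# Match the OpenMP thread count to the per-task CPU allocation.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
srun ./my_mpi_app
```

This requests 4 × 8 = 32 CPUs spread over 2 nodes; srun launches one copy of the program per task.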

Each entry below gives the Torque option, the equivalent SLURM option, a description, and example lines.

#PBS  →  #SBATCH
Head of each directive line in a job script.

-A  →  -A, --account=
Tells the scheduler to charge the job to the specified account (not username) credential.
Example: #PBS -A mybuyin  →  #SBATCH -A mybuyin  or  #SBATCH --account=mybuyin

-a  →  --begin=
Tells the scheduler to run the job at the given time.
Example: #PBS -a 0615  →  #SBATCH --begin=06:15

-e / -o  →  -e, --error=<pattern> / -o, --output=<pattern>
The locations for the Output_Path and Error_Path attributes. In Torque you only need one of these if you use the -j option. SLURM requires a file name pattern and cannot take a directory name; for the valid patterns, run "man sbatch" and see the "filename pattern" section. By default SLURM directs both standard output and standard error to the same file.
Example: #PBS -e ~/ErrorFile  →  #SBATCH -e ~/ErrorFile_%j_%u
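A few of the filename patterns documented in "man sbatch"; the log directory and names here are only an illustration:

```shell
# Common sbatch filename patterns ("man sbatch", filename pattern section):
#   %j  job ID       %u  user name     %x  job name
#   %A  array master job ID            %a  array task index
#SBATCH -o ~/logs/%x_%j.out
#SBATCH -e ~/logs/%x_%j.err
```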
Interactive jobs (declares that the job is to be run interactively):
  qsub -I  →  salloc  or  srun --pty /bin/bash
  qsub -I -X (with X11)  →  salloc --x11  or  srun --pty /bin/bash
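For example, to request an interactive shell with modest resources (the resource values are illustrative):

```shell
# One task with 4 CPUs for one hour, then an interactive shell on the allocation
salloc -n 1 -c 4 --time=01:00:00

# Or launch a pseudo-terminal shell directly as the job step:
srun -n 1 -c 4 --time=01:00:00 --pty /bin/bash
```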
-j  →  (no SLURM option needed)
In Torque, "eo" combines STDOUT and STDERR into the file given by Error_Path; "oe" combines them into Output_Path. SLURM directs both streams to the same file by default.
Example: #PBS -j oe

-l  →  -N, --nodes= / -n, --ntasks= / --ntasks-per-node= / -c, --cpus-per-task= / --gres= / -t, --time= / --mem= / -C, --constraint= / --tmp=
Torque packs resource requests into a single -l line, with the items separated by ",":
  nodes=#  number and/or type of nodes to be reserved.
  ppn=#  number of processors per node requested; defaults to 1.
  gpus=#  number of GPUs to use.
  walltime=  total run time, in the form HH:MM:SS or DD:HH:MM:SS.
  mem=  maximum amount of memory needed by the job.
  feature=  the type of compute node, related to our cluster configuration.
  file=  maximum amount of local disk space needed by the job.
SLURM splits these across separate options (see the explanation of -n, -c, and -N at the start of this page). Constraints joined with "|" must be prepended with 'NOAUTO:'.
Torque examples:
  #PBS -l walltime=01:00:00
  #PBS -l mem=8gb
  #PBS -l feature=intel14|intel16
  #PBS -l file=40GB
SLURM examples:
  #SBATCH -n 4 -c 1 --gres=gpu:2
  #SBATCH --time=01:00:00
  #SBATCH --mem=2G
  #SBATCH -C NOAUTO:intel14|intel16
  #SBATCH --tmp=40G
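Putting the pieces together, one Torque -l line maps onto several SBATCH directives. A sketch, with the values carried over purely for illustration:

```shell
# Torque:
#   #PBS -l nodes=2:ppn=8,gpus=2,walltime=01:00:00,mem=8gb,file=40GB
# SLURM sketch (ppn is mapped to --ntasks-per-node here; use --cpus-per-task
# instead if the 8 processors are meant as threads of one process):
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --gres=gpu:2
#SBATCH --time=01:00:00
#SBATCH --mem=8G
#SBATCH --tmp=40G
```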
-M  →  --mail-user=
Email address(es) to notify when the job changes state, as specified by -m / --mail-type.
Example: #PBS -M usr@msu.edu  →  #SBATCH --mail-user=usr@msu.edu

-m  →  --mail-type=
Torque values: a sends mail when the job is aborted by the batch system; b when it begins execution; e when it ends; n does not send mail. The SLURM value NONE likewise does not send mail.
Example: #PBS -m abe  →  #SBATCH --mail-type=FAIL,BEGIN,END

-N  →  -J, --job-name=
Names the job.
Example: #PBS -N MySuperComputing  →  #SBATCH -J MySuperComputing

-t  →  -a, --array=
Submits an array job with n identical tasks. Each Torque task has the same $PBS_JOBID but a different $PBS_ARRAYID.
Examples:
  #PBS -t 5  →  #SBATCH -a 5
  #PBS -t 3-10  →  #SBATCH --array=3-10
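A minimal array-job sketch: SLURM exposes the task index in $SLURM_ARRAY_TASK_ID (the input names here are hypothetical examples, not from this documentation):

```shell
#!/bin/bash
#SBATCH --array=0-2
# Each array task handles one input; the names are hypothetical examples.
INPUTS=(sample_a sample_b sample_c)
TASK=${SLURM_ARRAY_TASK_ID:-0}   # falls back to 0 when run outside SLURM
echo "task ${TASK} -> ${INPUTS[$TASK]}"
```

Run outside SLURM this prints "task 0 -> sample_a"; inside an array job each task prints its own index and input.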
-V  →  --export=ALL
Passes all current environment variables to the job (ALL is the SLURM default).
Example: #PBS -V  →  #SBATCH --export=ALL

-v  →  --export=
Defines additional environment variables for the job.
Example: #PBS -v ev1=ph3,ev2=50  →  #SBATCH --export=ALL,ev1=ph3,ev2=50

-W  →  -L, --licenses=
Special generic resources such as software licenses can be requested with the -W option. This is most commonly used with MATLAB (see Matlab Licenses for more information).
Example: #PBS -W gres:MATLAB  →  #SBATCH -L matlab@27000@lm-01.i
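Putting several of the options above together, a complete SLURM job script might look like this sketch (the account, email address, and program name are placeholders drawn from the examples on this page):

```shell
#!/bin/bash
#SBATCH -J MySuperComputing          # job name
#SBATCH -A mybuyin                   # account
#SBATCH --time=01:00:00              # wall time
#SBATCH -n 4 -c 1                    # 4 tasks, 1 CPU per task
#SBATCH --mem=2G                     # memory
#SBATCH -o ~/ErrorFile_%j_%u         # output (and, by default, error) file
#SBATCH --mail-user=usr@msu.edu
#SBATCH --mail-type=BEGIN,END,FAIL

srun ./my_program                    # my_program is a placeholder
```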