List of Job Specifications
The following are lists of basic #SBATCH specifications, broken up by purpose. To see the complete list of options, please refer to the SLURM sbatch command page, but be aware that not all options are implemented for the HPCC.
Resource Requests
These options specify the computing resources needed for your job. Not all options need to be specified. See our example batch scripts.
Option | Description | Examples |
---|---|---|
-A, --account=<account> | This option tells SLURM to use the specified buy-in account. Unless you are an authorized user of the account, your job will not run. | #SBATCH -A <account> |
-C, --constraint=<list> | Request a node feature. Features may be combined with the symbol & for AND, \| for OR, etc. Constraints using \| (OR) should be prefixed with NOAUTO:, as in the example. | #SBATCH -C NOAUTO:intel16\|intel14 |
-c, --cpus-per-task=<ncpus> | Require <ncpus> processors per task. Without this option, the controller will try to allocate one processor per task. | #SBATCH -c 3 (3 cores per task) |
-G, --gpus=[<type>:]<number> | Specify the total number of GPUs required for the job. An optional GPU type may be supplied (valid types are k20, k80, v100, and a100), but the number of GPUs is required. The allocation has to contain at least one GPU per node. | #SBATCH --gpus=k80:2 (request 2 k80 GPUs for the entire job)<br>#SBATCH --gpus=2 (request 2 GPUs for the entire job) |
--gpus-per-node=[<type>:]<number> | Specify the number of GPUs required for the job on each node included in the job's resource allocation. GPUs are specified in the same format as --gpus. | #SBATCH --gpus-per-node=v100:8 (request 8 v100 GPUs for each node requested by the job)<br>#SBATCH --gpus-per-node=8 (request 8 GPUs for each node requested by the job) |
--gpus-per-task=[<type>:]<number> | Specify the number of GPUs required for each task to be spawned in the job's resource allocation. GPUs are specified in the same format as --gpus. This option requires an explicit task count, e.g. -n/--ntasks. | #SBATCH --gpus-per-task=k80:2 (request 2 k80 GPUs for each task requested by the job)<br>#SBATCH --gpus-per-task=2 (request 2 GPUs for each task requested by the job) |
--mem=<size[units]> | Specify the real memory required per node. Default units are megabytes. Different units can be specified using the suffix [K\|M\|G\|T]. Mutually exclusive with --mem-per-cpu and --mem-per-gpu. | #SBATCH --mem=2G |
--mem-per-cpu=<size[units]> | Memory required per allocated CPU. Default units are megabytes. Different units can be specified using the suffix [K\|M\|G\|T]. Mutually exclusive with --mem and --mem-per-gpu. | #SBATCH --mem-per-cpu=2G |
--mem-per-gpu=<size[units]> | Memory required per allocated GPU. Default units are megabytes. Different units can be specified using the suffix [K\|M\|G\|T]. Mutually exclusive with --mem and --mem-per-cpu. | #SBATCH --mem-per-gpu=2G |
-N, --nodes=<minnodes[-maxnodes]> | Request that a minimum of minnodes nodes be allocated to this job. A maximum node count may also be specified with maxnodes. If only one number is specified, it is used as both the minimum and maximum node count. The job will be allocated as many nodes as possible within the specified range without delaying the initiation of the job. Note that the environment variable SLURM_JOB_NUM_NODES is set to the number of nodes actually allocated. | #SBATCH --nodes=2-4 (request 2 to 4 different nodes) |
-n, --ntasks=<number> | This option advises the Slurm controller that job steps run within the allocation will launch a maximum of <number> tasks, and to provide sufficient resources. The actual tasks must be launched by the application within the job script. The default is one task per node, but note that the --cpus-per-task option will change this default. | #SBATCH -n 4 (the 4 tasks could be placed on 1 to 4 different nodes) |
--ntasks-per-node=<ntasks> | Specify the number of tasks to run per node. Meant to be used with the --nodes option. This is related to --ntasks: if --ntasks is also specified, it takes precedence and --ntasks-per-node is treated as a maximum count of tasks per node. | #SBATCH --ntasks-per-node=4 |
-q, --qos=<qos> | Request a quality of service. The HPCC allows you to specify the scavenger QOS. | #SBATCH --qos=scavenger |
-t, --time=<time> | Set a limit on the total run time of the job allocation. The time may be given as HH:MM:SS or DD-HH:MM:SS. | #SBATCH -t 00:20:00 |
-w, --nodelist=<node name list> | Request a specific list of your buy-in nodes. The job will contain all of these hosts and possibly additional hosts as needed to satisfy resource requirements. The list may be specified as a comma-separated list of hosts, a range of hosts, or a filename. The host list will be assumed to be a filename if it contains a "/" character. | #SBATCH -w host1,host2,host3,...<br>#SBATCH -w host[1-5,7,...]<br>#SBATCH -w /mnt/home/userid/nodelist |
-x, --exclude=<node name list> | Explicitly exclude certain nodes from the resources granted to the job. The syntax follows that of --nodelist. | #SBATCH -x host[1-5] |
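Putting several of these together, the resource-request header of a batch script might look like the sketch below. This is only an illustration: the node, task, memory, and time values, the placeholder account, and the program name are hypothetical, not recommended defaults.

```bash
#!/bin/bash
# Hypothetical resource request: 2 nodes, 8 tasks per node, 1 CPU per
# task, 2 GB of memory per CPU, and a 2-hour time limit.
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=2G
#SBATCH --time=02:00:00
# Uncomment and fill in only if you are an authorized buy-in user:
# #SBATCH --account=<account>

# Placeholder for the actual work of the job.
srun ./my_program
```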
Job Environment
As opposed to the computing resources needed by a job, these parameters impact the computing environment in which a SLURM job is running.
Option | Description | Examples |
---|---|---|
-a, --array=<indexes> | Submit a job array with multiple jobs to be executed; that is, a group of jobs requiring the same set of resources. Each job has the same job ID ($SLURM_JOB_ID) but a different array ID ($SLURM_ARRAY_TASK_ID). The indexes passed to the argument identify which array ID values should be used. The indices can be a mix of lists and ranges. A step size can be given with a ":" suffix, and a "%" suffix limits how many array jobs run simultaneously (see the examples). | #SBATCH -a 0-15<br>#SBATCH -a 0,6,16-32<br>#SBATCH -a 0-15:4 (same as #SBATCH -a 0,4,8,12)<br>#SBATCH --array=0-15%4 (max 4 jobs running simultaneously) |
-D, --chdir=<directory> | Set the working directory of the batch script to <directory> before it is executed. The path can be specified as an absolute path or as a path relative to the directory where the sbatch command is executed. | #SBATCH -D /mnt/scratch/username |
--export=[ALL,]<environment variables> or --export=NONE | Identify which environment variables are propagated to the launched application. By default, all are propagated. Multiple environment variable names should be comma-separated. If NONE, SLURM will attempt to load the user's environment on the node where the job is executed. | #SBATCH --export=ALL,EDITOR=/bin/emacs<br>#SBATCH --export=NONE |
-J, --job-name=<jobname> | Specify a name for the job allocation. | #SBATCH -J MySuperComputing |
-L, --licenses=<license> | Specification of licenses (or other resources available on all nodes of the cluster) that must be allocated to this job. | #SBATCH -L comsol@1718@lm-01.i |
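As an illustration of how these options combine (the directory, job name, and input file naming scheme below are hypothetical), a job-array script might begin like this:

```bash
#!/bin/bash
# Hypothetical job array: 16 tasks (indices 0-15), at most 4 running at
# once, all working from a scratch directory and inheriting the
# submission environment.
#SBATCH --job-name=ArrayExample
#SBATCH --chdir=/mnt/scratch/<username>
#SBATCH --array=0-15%4
#SBATCH --export=ALL

# Each array task receives its own index in SLURM_ARRAY_TASK_ID and can
# use it to select a different input file.
./process_input input_${SLURM_ARRAY_TASK_ID}.dat
```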
Job I/O and Notifications
When running interactively, users have access to standard input, output, and error via the command line. When running as a batch job, SLURM handles each of these streams as files. By default, standard output and error are combined into the same output file.
Option | Description | Examples |
---|---|---|
-e, --error=<filename> | Instruct SLURM to connect the batch script's standard error directly to the file name specified. By default, both standard output and standard error are directed to the same file. See --output below for the default file name. | #SBATCH -e /home/username/myerrorfile |
-i, --input=<filename pattern> | Instruct Slurm to connect the batch script's standard input directly to the file name specified in the <filename pattern>. | #SBATCH -i /mnt/home/username/myinputfile |
-o, --output=<filename pattern> | Instruct Slurm to connect the batch script's standard output directly to the file name specified in the <filename pattern>. The default file name is "slurm-%j.out", where "%j" is replaced by the job ID (for job arrays, the default is "slurm-%A_%a.out", where "%A" is the job ID and "%a" is the array index). | #SBATCH -o /home/username/output-file |
--mail-type=<type> | Notify user by email when certain event types occur. Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to BEGIN, END, FAIL, REQUEUE, and STAGE_OUT), STAGE_OUT (burst buffer stage out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent of time limit), TIME_LIMIT_80 (reached 80 percent of time limit), TIME_LIMIT_50 (reached 50 percent of time limit), and ARRAY_TASKS (send emails for each array task). | #SBATCH --mail-type=BEGIN,END |
--mail-user=<user> | User to receive email notification of state changes as defined by --mail-type. The default value is the submitting user. | #SBATCH --mail-user=user@msu.edu |
-v, --verbose | Increase the verbosity of sbatch's informational messages. Multiple -v flags will further increase the verbosity. By default, only errors are displayed. | #SBATCH -v |
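To show how the I/O and notification options fit together (the paths, file name pattern, and email address below are placeholders), the relevant part of a batch script might read:

```bash
#!/bin/bash
# Hypothetical I/O and notification settings. In the file name
# patterns, %x expands to the job name and %j to the job ID.
#SBATCH --job-name=io_example
#SBATCH --output=/mnt/home/<username>/%x-%j.out
#SBATCH --error=/mnt/home/<username>/%x-%j.err
#SBATCH --mail-type=BEGIN,END,FAIL
#SBATCH --mail-user=<user>@msu.edu

./my_program
```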
Start Conditions
Option | Description | Examples |
---|---|---|
--begin=<time> | Submit the batch script to the Slurm controller immediately, like normal, but tell the controller to defer the allocation of the job until the specified time. Time may be of the form HH:MM:SS to run a job at a specific time of day (seconds are optional). | #SBATCH --begin=16:00<br>#SBATCH --begin=now+1hour (default unit is seconds) |
-d, --dependency=<dependency_list> | Defer the start of this job until the specified dependencies have been satisfied. <dependency_list> can take several forms, including:<br>after:<JobID> (begin after the listed jobs have started)<br>afterany:<JobID> (begin after the listed jobs have terminated)<br>afterok:<JobID> (begin after the listed jobs have completed successfully)<br>afternotok:<JobID> (begin after the listed jobs have failed)<br>singleton (begin after any previously launched job with the same name and user has ended) | #SBATCH -d after:<JobID1>:<JobID2>,afterok:<JobID3> |
-H, --hold | Specify that the job is to be submitted in a held state (priority of zero). A held job can then be released using scontrol to reset its priority (e.g. scontrol release <job_id>). | #SBATCH -H |
--no-requeue | Request that the job not be requeued under any circumstances. Jobs are requeued by default if a node they are running on fails. This option may be useful for jobs in the scavenger queue that will not run properly after having partially run and failed. | #SBATCH --no-requeue |
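A common way to use the dependency option is from the command line when chaining jobs. The sketch below assumes two hypothetical job scripts, preprocess.sb and analyze.sb; the --parsable flag makes sbatch print only the job ID so it can be captured in a shell variable.

```bash
# Submit the first job and capture its job ID.
first=$(sbatch --parsable preprocess.sb)

# Submit the second job so that it starts only if the first completes
# successfully.
sbatch --dependency=afterok:${first} analyze.sb
```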
Overriding the Job Script
Optionally, any job specification given by an #SBATCH line can also be requested on the sbatch command line with the equivalent option. For instance, the #SBATCH --nodes=1-5 line could be removed from the job script and instead specified on the command line:
`sbatch --nodes=1-5 <job script>`
Command line specifications take precedence over those in the job script.
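For example (the script name and time limits here are hypothetical): if myjob.sb contains an #SBATCH --time=01:00:00 line but is submitted as shown below, the command-line value wins and the job is given a 2-hour limit.

```bash
# The command-line --time overrides the #SBATCH --time line in myjob.sb.
sbatch --time=02:00:00 myjob.sb
```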