
Gaussian Job Script

Here is a simple job script, g16.sb, for running the Gaussian job g16.com:

g16.sb

#!/bin/bash --login

#SBATCH --job-name=GaussianJob
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=7G
#SBATCH --time=00:10:00

echo "This script is from ICER's Gaussian example"

InputFile=g16.com
OutputFile=g16.log

module load Gaussian/g16 powertools
# export GAUSS_SCRDIR=<your preferred Gaussian scratch space>
# mkdir -p ${GAUSS_SCRDIR}

g16 < ${InputFile} > ${OutputFile}

### write job information to SLURM output file
scontrol show job $SLURM_JOB_ID 

# Print out resource usage  
js -j $SLURM_JOB_ID           ### powertools command

where the Gaussian input file g16.com can be found in the previous section, Running Gaussian by Command Lines.
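For reference, here is a minimal sketch of what such an input file could look like. The %NProcShared and %Mem settings (4 processors and 5GB) are the ones referred to below; the route section, title, and water geometry are only placeholders, and the blank line at the end is required by Gaussian.

g16.com

%NProcShared=4
%Mem=5GB
# HF/6-31G(d) opt

Water optimization (placeholder title and geometry)

0 1
O    0.000000    0.000000    0.117300
H    0.000000    0.757200   -0.469200
H    0.000000   -0.757200   -0.469200
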

For the resource request (#SBATCH lines) above, since Gaussian can only run in parallel with shared memory on the HPCC, only 1 task (on a single node) is requested with --ntasks=1. The number of CPUs requested with --cpus-per-task is the same as the "%NProcShared" setting (=4) in the Gaussian input file. The memory request (--mem) should be larger than the "%Mem" setting (=5GB) in the Gaussian input file so that the job does not run out of memory. Please also make sure the walltime request (--time) is long enough for the job to finish.
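As an illustration of how these requests scale together (the numbers here are hypothetical, not the ones from this example), a larger job whose input file sets %NProcShared=8 and %Mem=16GB might request:

#SBATCH --ntasks=1            # Gaussian only runs with shared-memory parallelism
#SBATCH --cpus-per-task=8     # equal to %NProcShared in the input file
#SBATCH --mem=20G             # somewhat larger than %Mem in the input file
#SBATCH --time=02:00:00       # long enough for the calculation to finish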

In the command line, you need to make sure Gaussian/g16 is loaded as in line 10. If you would like to use scratch directory other than /mnt/scratch/$USER for the Gaussian scratch files, you could set up a different one with line 11 and 12. The calculation of the Gaussian job is executed in line 14 with input file g16.com and output file g16.log. Once the calculation is done, line 17 and 20 will be executed to print out the job information and resource usage respectively to the SLURM output file ( with file name: slurm-<JobID>.out).