These instructions are for using Ansys on the current HPCC environment
that uses the SLURM scheduler.
Guidelines for scheduling parallel (MPI) jobs
Here are some guidelines for requesting resources.
Use --ntasks instead of nodes
Note that -N or --nodes= requests that number of unique computers,
but what most users want is a number of tasks spread across any nodes.
Use the number of tasks you requested, rather than the number of nodes, for Fluent's `-t`
parameter:
-t $SLURM_NTASKS
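For example, a minimal sketch of how this fits into a job script (the journal file name solve.jou is a hypothetical placeholder, and the task count is arbitrary):

#SBATCH --ntasks=8   # 8 MPI tasks; SLURM may spread them across any number of nodes
# inside the job, pass the task count (not the node count) to Fluent
# solve.jou is a placeholder journal file you would provide yourself
fluent 3ddp -g -t$SLURM_NTASKS -i solve.jou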
Don't forget to request memory
Request memory per task. Since the default is one CPU per
task, you can request memory using, e.g., --mem-per-cpu=1gb
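For example, with one CPU per task, total job memory scales with the number of tasks:

#SBATCH --ntasks=10
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1gb   # 10 tasks x 1 CPU x 1 GB = 10 GB total for the job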
Create a temporary file for the node list
Inside the job, Fluent requires a host file in a particular format, and the
SLURM node file doesn't work. The following seems to work:
# create and save a unique temporary file
FLUENTNODEFILE=`mktemp`
# fill that temp file with a node list Fluent can use
scontrol show hostnames > $FLUENTNODEFILE
# in your fluent command, use this parameter
-cnf=$FLUENTNODEFILE
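As an optional sanity check (a suggestion, not a required step), you can print the file:

cat $FLUENTNODEFILE   # expect one short hostname per line, one per allocated node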
Example Ansys batch job script (this example invokes the CFX solver). Increase tasks and
memory as needed.
#!/usr/bin/bash --login
# example 1 hour job with ntasks across any number of nodes
# adjust the ram and tasks as needed
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=1
#SBATCH --ntasks=10
#SBATCH --mem-per-cpu=1gb

echo "This script is from ICER's Ansys/Fluent example"

# create host list
NODEFILE=`mktemp`
scontrol show hostnames > $NODEFILE

# Load the ansys/cfx v19.2 module
module load ANSYS

# The input file; this file is something you have to provide!
DEF_FILE=baseline.def

cfx5solve -def $DEF_FILE -parallel -par-dist $NODEFILE -start-method "Platform MPI Distributed Parallel" > cfx5.log
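To run it, save the script to a file and submit it with sbatch (the file name below is a hypothetical placeholder):

sbatch ansys_cfx.sb   # placeholder name for the script above
squeue -u $USER       # confirm the job is queued or running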
After you have logged into a development node with an X11 terminal (or
use the OnDemand desktop as described above), you may run ANSYS tools in
parallel and interactively as follows.
# start an approximately 4 hour interactive job with 10 tasks. Adjust tasks and memory as needed
# you'll have 4 hours to work. You must be in an X11 terminal for this to work
salloc --ntasks=10 --cpus-per-task=1 --mem-per-cpu=1gb --time=3:59:00 --x11
# wait for log-in and then...
# load modules
ml intel ansys
# this creates a temporary file and fills it with a node list Fluent can use
FLUENTNODEFILE=`mktemp`
scontrol show hostnames > $FLUENTNODEFILE
# for example, run the workbench
runwb2
# after running workbench you can start fluent directly
# note we are using Intel MPI
fluent 3ddp -t$SLURM_NTASKS -mpi=intel -cnf=$FLUENTNODEFILE -ssh
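When you are finished, a quick cleanup sketch: remove the temporary node file and end the allocation so the resources are released:

rm $FLUENTNODEFILE   # remove the temporary node list
exit                 # leave the salloc session and free the allocation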
CFX5 Solver
This solver uses a different host list format for the -par-dist
parameter (a comma-separated list rather than a node file). The following uses an example
definition file provided with Ansys 19.2.
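For illustration, the value passed to -par-dist is a comma-separated list of short hostnames, with a host repeated once per task placed on it. The hostnames below are made up; the trailing comma is what the tr command in the script below produces:

# hypothetical expansion of the host list built by the script below
cfxHosts="node-001,node-001,node-002,node-002,"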
The batch script adapts the par-dist host list depending on how you
specify tasks and tasks-per-node (the example below does not specify
tasks per node). Code is adapted from
https://secure.cci.rpi.edu/wiki/index.php?title=CFX.
#!/usr/bin/bash --login
# example 1 hour job with ntasks across any number of nodes
# adjust the ram and tasks as needed
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=1
#SBATCH --ntasks=10
#SBATCH --mem-per-cpu=1gb

echo "This script is from ICER's Ansys/CFX5 example"

module load ansys

# this code adapts the host list depending on whether you use multiple nodes and the tasks-per-node option
srun hostname -s > /tmp/hosts.$SLURM_JOB_ID
if [ "x$SLURM_NPROCS" = "x" ]; then
  if [ "x$SLURM_NTASKS_PER_NODE" = "x" ]; then
    SLURM_NTASKS_PER_NODE=1
  fi
  SLURM_NPROCS=`expr $SLURM_JOB_NUM_NODES \* $SLURM_NTASKS_PER_NODE`
fi

# use ssh instead of rsh
export CFX5RSH=ssh

# format the host list for cfx
cfxHosts=`tr '\n' ',' < /tmp/hosts.$SLURM_JOB_ID`

# example file
DEF=/opt/software/ANSYS/19.2/ansys_inc/v192/CFX/examples/StaticMixer.def

# run the partitioner and solver
cfx5solve -par -par-dist "$cfxHosts" -def $DEF -part $SLURM_NPROCS -start-method "Platform MPI Distributed Parallel"

# cleanup
rm /tmp/hosts.$SLURM_JOB_ID

# output will be in files named like StaticMixer_001.out and StaticMixer_001.res
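As with the example above, submit the script with sbatch and then inspect the solver output (the script file name is a hypothetical placeholder):

sbatch cfx5_job.sb        # placeholder name for the script above
less StaticMixer_001.out  # solver log; StaticMixer_001.res holds the results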