
ANSYS

These instructions are for using Ansys on the current HPCC environment that uses the SLURM scheduler.

License Issues (version 19.2 and older)

There is an issue with Ansys licensing due to changes made across the many versions installed on the HPCC. If you have license issues when using Ansys, here is the workaround:

1. Start an interactive desktop session in OnDemand or start an X11 terminal (with MobaXterm on Windows or XQuartz on Mac) (see Connect to HPCC System).

2. SSH to any development node, remembering to add the -X option.

3. Start this program on any dev node; it launches a GUI (a command sketch for steps 2 and 3 appears after this list):

/opt/software/ANSYS/19.2/ansys_inc/shared_files/licensing/lic_admin/anslic_admin

4. On the left side of the window are three buttons. Click the button "Set License Preferences for User <username>". A new window will open.

5. Select Release 19.2 in that new window and click OK.

6. Another window will open with tabs across the top and two options at the bottom that are the same for each tab. At the bottom, click the option "Use a separate license for each application". It doesn't matter which tab you've selected (Solver/PrePost/etc.); that setting should be the same for all tabs.

7. Click OK, which closes that window.

8. In the original Ansys license utility, click File and then "Exit" to close it. This modifies the config file in your home directory.

9. Close any current sessions in which you are running Ansys and start it again by any method (dev node, interactive 'salloc' job, etc.). You should now be able to use the features you needed before.
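For steps 2 and 3, the terminal commands look roughly like the sketch below; the node name dev-amd20 is only a placeholder for whichever development node you choose.

# connect to a development node with X11 forwarding (dev-amd20 is a made-up example name)
ssh -X dev-amd20

# launch the Ansys 19.2 license administration GUI
/opt/software/ANSYS/19.2/ansys_inc/shared_files/licensing/lic_admin/anslic_admin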

Guidelines for scheduling parallel (MPI) jobs

Here are some guidelines for requesting resources.   

Use --ntasks instead of nodes

Note that -N or --nodes= will request that number of unique computers, but what most users want is the number of tasks across nodes. Use the number of tasks you requested, rather than the number of nodes, for the `-t` parameter:

-t $SLURM_NTASKS
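As a minimal sketch of how these fit together in a Fluent batch job, the --ntasks request and the -t flag carry the same count. The -g (run without the GUI) and -i run.jou (journal file) options are assumptions added here for a batch run and are not taken from the examples on this page.

#SBATCH --ntasks=10

# later in the script, pass the same task count to Fluent
fluent 3ddp -g -t $SLURM_NTASKS -i run.jou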

Don't forget to request memory

Request memory per task; since the default is one CPU per task, you can request memory using, e.g., --mem-per-cpu=1gb.
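For example, a request like the following sketch (not a complete script header) gives each of the 10 tasks 1 GB, for roughly 10 GB across the whole job:

#SBATCH --ntasks=10
#SBATCH --mem-per-cpu=1gb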

Create a temporary file for node list

Inside the job, Fluent requires a hosts file in a particular format, and the SLURM node file doesn't work. The following works:

# create and save a unique temporary file 
FLUENTNODEFILE=`mktemp`
# fill that tmpfile with node list Fluent can use
scontrol show hostnames > $FLUENTNODEFILE
# in your fluent command, use this parameter
-cnf=$FLUENTNODEFILE
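The resulting file simply lists the allocated nodes, one hostname per line; the names below are made up for illustration.

# example contents of $FLUENTNODEFILE for a job spanning two nodes
lac-342
lac-343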

Example Fluent job script (using the Intel compiler). Increase tasks and memory as needed.

Ansys/Fluent job

#!/usr/bin/bash --login
# example 1 hour job with ntasks across any number of nodes
# adjust the ram and tasks as needed
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=1
#SBATCH --ntasks=10
#SBATCH --mem-per-cpu=1gb 

echo "This script is from ICER's Ansys/Fluent example"

# create host list
NODEFILE=`mktemp`
scontrol show hostnames > $NODEFILE

# Load the ansys/cfx v19.2 module
module load ANSYS


# The Input file
DEF_FILE=baseline.def  # this file is something you have to provide!


cfx5solve -def $DEF_FILE -parallel -par-dist $NODEFILE -start-method "Platform MPI Distributed Parallel" > cfx5.log
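To run it, submit the script with sbatch and check on it with squeue; the filename ansys_job.sb below is just a placeholder for whatever you name the script.

sbatch ansys_job.sb    # submit the batch script
squeue -u $USER        # confirm the job is queued or running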

After you have logged into a development node with an X11 terminal (or use the OnDemand desktop as described above), you may run ANSYS tools in parallel and interactively as follows.

# start an approximately 4 hour interactive job with 10 tasks. Adjust tasks and memory as needed
# you'll have 4 hours to work. You must be in an X11 terminal for this to work

 salloc --ntasks=10 --cpus-per-task=1 --mem-per-cpu=1gb --time=3:59:00 --x11 

# wait for log-in and then...

# load module
ml intel ansys

# this creates a temporary file and fills it with a node list Fluent can use
FLUENTNODEFILE=`mktemp`
scontrol show hostnames > $FLUENTNODEFILE

# for example, run the Workbench
runwb2
 
# after running workbench you can start fluent directly
# note we are using Intel mpi
fluent 3ddp -t $SLURM_NTASKS -mpi=intel -cnf=$FLUENTNODEFILE -ssh
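When you are finished, type exit to end the salloc session and release the allocated resources.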

CFX5 Solver

This solver uses a different host list format for the par-dist parameter. The following uses an example definition file provided with Ansys 19.2.

The batch script adapts the par-dist host list depending on how you specify tasks and tasks-per-node (the example below does not specify tasks per node). Code is taken from https://secure.cci.rpi.edu/wiki/index.php?title=CFX.
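For reference, the string passed to -par-dist is a comma-separated list of node names with one entry per task, built by the tr command in the script (which leaves a trailing comma); the hostnames below are made up for illustration.

# example value of $cfxHosts for 10 tasks spread over two nodes
lac-342,lac-342,lac-342,lac-342,lac-342,lac-343,lac-343,lac-343,lac-343,lac-343,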

CFX5 Solver Example sbatch

#!/usr/bin/bash --login
# example 1 hour job with ntasks across any number of nodes
# adjust the ram and tasks as needed
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=1 
#SBATCH --ntasks=10
#SBATCH --mem-per-cpu=1gb 

echo "This script is from ICER's Ansys/CFX5 example"

module load ansys

# this code adapts the hosts list depending on whether you use multiple nodes and the tasks-per-node option
srun hostname -s > /tmp/hosts.$SLURM_JOB_ID
if [ "x$SLURM_NPROCS" = "x" ]; then
    if [ "x$SLURM_NTASKS_PER_NODE" = "x" ]; then
        SLURM_NTASKS_PER_NODE=1
    fi
    SLURM_NPROCS=`expr $SLURM_JOB_NUM_NODES \* $SLURM_NTASKS_PER_NODE`
fi
# use ssh instead of rsh
export CFX5RSH=ssh
# format the host list for cfx
cfxHosts=`tr '\n' ',' < /tmp/hosts.$SLURM_JOB_ID`

# example file
DEF=/opt/software/ANSYS/19.2/ansys_inc/v192/CFX/examples/StaticMixer.def
# run the partitioner and solver
cfx5solve -par -par-dist "$cfxHosts" -def $DEF -part $SLURM_NPROCS -start-method "Platform MPI Distributed Parallel"
# cleanup
rm /tmp/hosts.$SLURM_JOB_ID


# output will be in a file named like StaticMixer_001.out and StaticMixer_001.res