Backwards Compatibility with CentOS
Early testing!
Please note that the advice and commands on this page are still being tested. Expect the possibility of bugs and errors, and always verify accuracy by comparing against commands you know already work.
If you discover a problem with anything on this page, please contact us so we can work to resolve it.
As ICER upgrades the operating system on the HPCC to Ubuntu, we are providing limited support for running code based on the current CentOS system. This page outlines a few scripts and tips to help you run your code through a backwards compatibility "container".
For an overview of containers see our Containers Overview page. Knowledge of containers is helpful, but not required to use the helper scripts provided.
MPI and multi-node jobs are not supported
Running multiple tasks across nodes with MPI is difficult to replicate inside containers. As such, the scripts and instructions given here have not been tested with MPI jobs and will likely not work.
That being said, running MPI with containers is possible with some extra configuration. For more information, see the Singularity documentation or this tutorial from Pawsey Supercomputing.
Automatically submitting batch scripts with sbatch_old
We have provided a powertool called `sbatch_old`. This tool will submit all of the commands in your current SLURM batch script through the compatibility container. In particular, all modules available on CentOS can be accessed using their original names. To use it, replace `sbatch` with `sbatch_old` when submitting your script on any Ubuntu node.
Using the SLURM `--export` flag
To use the `--export` option to pass environment variables to your SLURM script with `sbatch_old`, you must replace each environment variable name `VAR` with `SINGULARITYENV_VAR`. For example, if a script was submitted on the old OS using

```bash
sbatch --export=VAR=value my_script.sb
```

you would use it with `sbatch_old` by running

```bash
sbatch_old --export=SINGULARITYENV_VAR=value my_script.sb
```
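Under the hood, Singularity strips the `SINGULARITYENV_` prefix when it builds the container's environment, so `SINGULARITYENV_VAR` on the host becomes `VAR` inside the container. The following toy bash sketch mimics that renaming (an illustration only, not Singularity's actual implementation; `DATAFILE` is a made-up variable name):

```shell
#!/bin/bash
# Export a variable the way you would for sbatch_old.
export SINGULARITYENV_DATAFILE=input.csv

# Mimic the renaming: list exported variables carrying the prefix
# and print them as they would appear inside the container.
for var in $(compgen -A export SINGULARITYENV_); do
    echo "${var#SINGULARITYENV_}=${!var}"
done
```

Running this prints `DATAFILE=input.csv`, which is how the variable would appear in the container's environment.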
Example
Consider the following SLURM batch script (an illustrative example; `my_analysis.R` stands in for your own R script):

```bash
#!/bin/bash
# my_script.sb
#SBATCH --job-name=my_script
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=2G
#SBATCH --time=00:10:00

module purge
module load R/4.0.2

Rscript my_analysis.R
```
This job can be run on any CentOS node with the command:

```bash
sbatch my_script.sb
```
This job will fail on Ubuntu because `R/4.0.2` is not available (and even if it were, the name would be different). Instead, use the `sbatch_old` command on an Ubuntu node:

```bash
sbatch_old my_script.sb
```
This approach works for jobs that request at most one node and do not use MPI. Jobs using GPUs should continue to work.
Interactively using the backwards compatibility container with `old_os`
To run commands interactively using the compatibility container, run the command `old_os` on any Ubuntu node. This will replace your command line with one running inside the compatibility container, where you will have access to the majority of the commands from the CentOS system.
Example
Suppose you want to test compiling and running GPU code. On a CentOS node, your session might look like this (an illustrative sketch; `Hello_CUDA.cu` stands in for your own CUDA source file):

```bash
module purge
module load gcccuda/2019
nvcc Hello_CUDA.cu -o Hello_CUDA
./Hello_CUDA
```
However, running this on Ubuntu will not work, because `gcccuda/2019` is not available. You also cannot guarantee that `./Hello_CUDA` will still run on Ubuntu, because the compilers and libraries used to build it on CentOS are either different or missing on Ubuntu. To get around this, you can start an interactive session in the compatibility container.
Note that this will need to run on a node with a GPU, e.g., `dev-amd20-v100`.
On Ubuntu, the equivalent session might look like this (again a sketch; your source file and development node may differ):

```bash
ssh dev-amd20-v100
old_os
module purge
module load gcccuda/2019
nvcc Hello_CUDA.cu -o Hello_CUDA
./Hello_CUDA
exit
```
Advanced: Manually using the container
If you are comfortable using Singularity directly and would like a more flexible workflow, e.g., to experiment with MPI jobs, you can also invoke the backwards compatibility container directly.
We recommend invoking `singularity` with options along the following lines (the image path below is a placeholder, and the bind mounts shown are typical HPCC filesystems; the exact values are site-specific):

```bash
singularity <command> \
    --bind /mnt/home \
    --bind /mnt/research \
    --bind /mnt/scratch \
    --nv \
    /path/to/centos.sif
```
where `<command>` is one of `shell` or `exec`. See `less $(which sbatch_old)` for an example.