
Backwards Compatibility with CentOS

Early testing!

Please note that the advice and commands on this page are still being tested. Expect the possibility of bugs and errors, and always test for accuracy by comparing against commands you know already work.

If you discover a problem with anything on this page, please contact us so we can work to resolve it.

As ICER upgrades the operating system on the HPCC to Ubuntu, we are providing limited support for running code based on the current CentOS system. This page outlines a few scripts and tips to help you run your code through a backwards compatibility "container".

For an overview of containers, see our Containers Overview page. Knowledge of containers is helpful but not required to use the helper scripts provided.

MPI and multi-node jobs are not supported

Running multi-task MPI jobs is difficult to replicate inside containers. As such, the scripts and instructions given here have not been tested with MPI jobs and likely will not work for them.

That being said, running MPI with containers is possible with some extra configuration. For more information, see the Singularity documentation or this tutorial from Pawsey Supercomputing.
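
As a rough illustration only, the "hybrid" approach described in those references launches the MPI ranks from the host and runs each rank inside the container. The sketch below is untested on the HPCC; my_mpi_program is a hypothetical executable, and it assumes the MPI versions inside and outside the container are compatible.

# Hybrid MPI sketch: the host launcher (srun) starts one container instance per rank.
# Untested; my_mpi_program is a hypothetical MPI executable.
srun --ntasks=4 singularity exec \
    /mnt/research/helpdesk/ubuntu-compute/centos79.sif \
    ./my_mpi_program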

Automatically submitting batch scripts with sbatch_old

We have provided a powertool called sbatch_old. This tool submits all of the commands in your current SLURM batch script through the compatibility container. In particular, all modules available on CentOS can be accessed using their original names. To use it, replace sbatch with sbatch_old when submitting your script on any Ubuntu node.

Example

Consider the following SLURM batch script:

my_script.sb
#!/bin/bash --login
#SBATCH --job-name=Rscript
#SBATCH --ntasks=1
#SBATCH --mem=20M
#SBATCH --time=01:00:00
#SBATCH --mail-type=ALL
#SBATCH --mail-user=yournetid@msu.edu
#SBATCH --output=%x-%j.SLURMout

# Purge current modules and load those we require
module purge
module load GCC/8.3.0 OpenMPI/3.1.4 R/4.0.2 powertools

# Run our job
cd /mnt/home/user123
srun Rscript myscript.R

# Print resource information
scontrol show job $SLURM_JOB_ID
js -j $SLURM_JOB_ID

This job can be run on any CentOS node with the command:

CentOS
sbatch my_script.sb

This job will fail on Ubuntu because R/4.0.2 is not available (and even if it were, the name would be different). Instead, use the sbatch_old command:

Ubuntu
module load powertools
sbatch_old my_script.sb

The sbatch_old tool should work for jobs that request at most one node and do not use MPI. Jobs using GPUs should continue to work, as shown in the sketch below.
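
For example, a batch script that requests a GPU with standard SLURM directives can be submitted the same way. This is a minimal, hypothetical sketch: the --gres=gpu:1 request is standard SLURM syntax (check ICER's GPU documentation for the exact form used on the cluster), and the module and file names are taken from the CUDA example later on this page.

#!/bin/bash --login
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1        # request one GPU (standard SLURM syntax)
#SBATCH --time=00:10:00

module purge
module load gcccuda/2019    # CentOS module name, resolved inside the compatibility container
./Hello_CUDA

As before, submit this script from an Ubuntu node with sbatch_old instead of sbatch.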

Interactively using the backwards compatibility container with old_os

To run commands interactively using the compatibility container, run the command old_os on any Ubuntu node. This will replace your command line with one running inside the compatibility container, and you will have access to the majority of the commands on the CentOS system.

Example

Suppose you want to test compiling and running GPU code. On a CentOS node, your session might look like:

CentOS commands
getexample helloCUDA
cd helloCUDA
module load gcccuda/2019
nvcc Hello.cu -o Hello_CUDA
./Hello_CUDA

However, running this on Ubuntu will not work, because gcccuda/2019 is not available. You also cannot guarantee that ./Hello_CUDA will still work on Ubuntu, because the compilers and libraries used to compile it on CentOS are either different or nonexistent on Ubuntu. To get around this, you can start an interactive session in the compatibility container.

Note that this will need to run on a node with a GPU, e.g., dev-amd20-v100.
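
If you are not already on a GPU node, you can typically reach the development node named above over ssh from another development or gateway node (assuming your account has standard HPCC access):

ssh dev-amd20-v100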

Ubuntu commands
module load powertools
old_os  # All following commands are in CentOS compatibility container
./Hello_CUDA  # Works
rm Hello_CUDA
module load gcccuda/2019  # Recompiling with CentOS modules
nvcc Hello.cu -o Hello_CUDA
./Hello_CUDA
exit  # Go back to Ubuntu

Advanced: Manually using the container

If you are comfortable using Singularity directly and would like a more flexible workflow, e.g., to experiment with MPI jobs, you can also invoke the backwards compatibility container directly.

We recommend invoking singularity with the following options:

# The --bind options give access to the CentOS module system and to software
# provided by CVMFS endpoints (IceCube, CERN, etc.); --cleanenv ensures the
# module system from Ubuntu doesn't conflict; --nv is only needed when using a GPU.
singularity <command> \
    --bind /opt/.software-legacy:/opt/software \
    --bind /opt/.modules-legacy:/opt/modules \
    --bind /cvmfs \
    --cleanenv \
    --nv \
    /mnt/research/helpdesk/ubuntu-compute/centos79.sif  # most recent image

where <command> is one of shell or exec. For a complete example, see the contents of the sbatch_old script with less $(which sbatch_old).
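
For instance, here is a minimal sketch of running a single command through the container. The file myscript.R is hypothetical, and initializing the module system through a login shell (bash --login -c) is an assumption that may need adjusting for your setup.

# Run one command inside the compatibility container; myscript.R is a hypothetical file.
singularity exec \
    --bind /opt/.software-legacy:/opt/software \
    --bind /opt/.modules-legacy:/opt/modules \
    --bind /cvmfs \
    --cleanenv \
    /mnt/research/helpdesk/ubuntu-compute/centos79.sif \
    bash --login -c "module load GCC/8.3.0 OpenMPI/3.1.4 R/4.0.2 && Rscript myscript.R"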