Excerpt

Singularity is a container platform: it lets you create and run containers that package up pieces of software in a way that is portable and reproducible. With a few basic commands, you can set up a workflow to run on Pawsey systems using Singularity. This page introduces how to get or build a Singularity container image and run that image on HPC systems.

...

Familiarity with:

Versions installed on Pawsey systems

To check the currently installed versions, use the module avail command (installed versions may differ from those shown here):

Column
width900px


Code Block
languagebash
themeDJango
titleTerminal 1. Checking for installed versions
$ module avail singularity
------------------------------------ /software/setonix/2024.05/pawsey/modules -------------------------------------
   singularity/4.1.0-askap-gpu    singularity/4.1.0-mpi       singularity/4.1.0-slurm (D)
   singularity/4.1.0-askap        singularity/4.1.0-nohost
   singularity/4.1.0-mpi-gpu      singularity/4.1.0-nompi


Different "flavours" of Singularity are identified by the suffix after the version number. A detailed description of each flavour is provided in the sections below.

Getting container images and initialising Singularity

...

and then load the Singularity module (for applications that do not need MPI, such as many of the bioinformatics containers):

$ module load singularity/4.1.0-nompi

or, for applications that need MPI:

$ module load singularity/4.1.0-mpi
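With a module loaded, container images can be downloaded with singularity pull. As a sketch (the BioContainers image URI below is illustrative only, not from this page), pull stores the image as a .sif file named after the repository and tag:

```shell
# Hypothetical example image URI (illustrative only)
uri="docker://quay.io/biocontainers/samtools:1.19"

# singularity pull saves the image as <name>_<tag>.sif by default:
#   singularity pull "$uri"
# Derive that default filename from the URI:
name="${uri##*/}"                   # samtools:1.19
sif="${name%%:*}_${name##*:}.sif"   # samtools_1.19.sif
echo "$sif"
```

The resulting .sif file can then be placed under a repository directory such as $MYSOFTWARE/singularity and referenced from job scripts, as in the listings below.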

To avoid errors when downloading and running containers, run the sequence of commands in the following terminal display:

...

Column
width900px


Code Block
languagebash
themeEmacs
titleListing 2. Modifying a Slurm job to run with Singularity
linenumberstrue
#!/bin/bash -l
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=1840M
#SBATCH --time=00:10:00
#SBATCH --partition=work

# load the singularity module which provides 
# the executable and also sets some environment variables
# (In this case, as no mpi is needed:)
module load singularity/4.1.0-nompi

# define some useful environment variables for the container
# this would need to be updated for a user's workflow
export myRepository=$MYSOFTWARE/singularity/myRepository
export containerImage=$myRepository/image.sif
# define command to be run. In this example we use ls
export mycommand=ls

# run the container with 
srun -N 1 -n 1 -c 1 singularity exec ${containerImage} ${mycommand}


...

Listing 3 shows an example of running Gromacs on GPUs through a ROCm-capable container:

Column
width900px


Code Block
languagebash
themeEmacs
titleListing 3. Enabling a container to use GPUs
collapsetrue
#!/bin/bash -l
#SBATCH --job-name=gpu
#SBATCH --gres=gpu:1
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Load Singularity
module load singularity/4.1.0-mpi-gpu

# Define the container to use
export myRepository=$MYSOFTWARE/singularity/myRepository
export containerImage=$myRepository/gromacs_2018.2.sif

# Run Gromacs preliminary step with container
srun singularity exec --rocm $containerImage gmx grompp -f pme.mdp

# Run Gromacs MD with container
srun singularity exec --rocm $containerImage \
    gmx mdrun -ntmpi 1 -nb gpu -pin on -v \
    -noconfout -nsteps 5000 -s topol.tpr -ntomp 1


...

$ sbatch --account=<your-pawsey-project> --partition=gpu gpu.sh

For more information on how to use GPU partitions on Setonix see: Example Slurm Batch Scripts for Setonix on GPU Compute Nodes.

Using MPI

MPI applications can be run within Singularity containers. There are two requirements to do so:

...

Column
width900px


Code Block
languagebash
themeEmacs
titleListing 4. Running an MPI application within a Singularity container
collapsetrue
#!/bin/bash -l

#SBATCH --account=projectID
#SBATCH --job-name=mpi-OpenFOAMcase
#SBATCH --ntasks=512
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=00:20:00
#SBATCH --partition=work

# ---
# load Singularity
module load singularity/4.1.0-mpi

# ---
# Note we avoid any inadvertent OpenMP threading by setting OMP_NUM_THREADS=1
export OMP_NUM_THREADS=1

# ---
# Set MPI related environment variables for your system (see specific system guides).
export MPICH_OFI_STARTUP_CONNECT=1
export MPICH_OFI_VERBOSE=1

# ---
# Define the container to use
export theRepository=<pathToPersonalImages>
export containerImage=$theRepository/openfoam-v1912.sif

# ---
# cd into the case directory
cd $MYSCRATCH/myOpenFOAMCase

# ---
# execute the simpleFoam parallel solver with MPI
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c $OMP_NUM_THREADS \
     singularity exec $containerImage simpleFoam -parallel -fileHandler collated | tee log.simpleFoam

Notes:

  • srun is the Slurm wrapper for the MPI launcher.
  • -n is the flag to indicate the number of processes to be run (512 in this case, as it is the value of the Slurm variable SLURM_NTASKS).
  • Singularity modules at Pawsey are configured to automatically add the host MPI paths to the SINGULARITY_BINDPATH and SINGULARITYENV_LD_LIBRARY_PATH variables, so no further action is needed for bind mounting.
  • The -parallel and -fileHandler collated flags are part of OpenFOAM; they are not Singularity or Slurm flags.
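As a quick sanity check on the request in Listing 4, the node count follows directly from the task counts (numbers taken from the listing's #SBATCH directives):

```shell
# Values from Listing 4
ntasks=512
ntasks_per_node=128

# Slurm allocates ntasks / ntasks-per-node nodes,
# and srun -n $SLURM_NTASKS launches all 512 MPI ranks across them
nodes=$(( ntasks / ntasks_per_node ))
echo "$nodes nodes, $ntasks MPI ranks"
```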

Singularity flavours on Pawsey Systems

Depending on the cluster, different Singularity modules may be available:

Cluster                  singularity/VV-mpi   singularity/VV-mpi-gpu   singularity/VV-nompi   singularity-openmpi   singularity-openmpi-gpu
Setonix (HPE Cray EX)    yes                  yes                      yes                    no                    no
Garrawarla (GPU)         no                   no                       no                     yes                   yes

These modules differ in the flavour of the MPI library they bind mount into the containers at runtime, and in whether they also bind mount the libraries required for GPU-aware MPI:

...

GPU-aware MPI:

  • singularity/VV-mpi: Cray MPI (Setonix) or Intel MPI (other clusters). All ABI compatible with MPICH.
  • singularity/VV-mpi-gpu: Cray MPI (Setonix) or Intel MPI (other clusters). All ABI compatible with MPICH. With GPU-aware MPI.
  • singularity/VV-nompi: for applications that do not require MPI communications (commonly bioinformatics applications).
  • singularity/VV-nohost: for applications that require total isolation from the host environment (commonly bioinformatics applications).
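Choosing among the flavours can be scripted. A minimal sketch (the workload labels are hypothetical; module names are those shown in the module avail listing above):

```shell
# Hypothetical helper: map a workload type to a module flavour from this page
pick_flavour() {
  case "$1" in
    mpi)      echo "singularity/4.1.0-mpi" ;;      # MPI on CPU
    mpi-gpu)  echo "singularity/4.1.0-mpi-gpu" ;;  # GPU-aware MPI
    isolated) echo "singularity/4.1.0-nohost" ;;   # full isolation from host
    *)        echo "singularity/4.1.0-nompi" ;;    # serial / bioinformatics
  esac
}

pick_flavour mpi-gpu   # singularity/4.1.0-mpi-gpu
pick_flavour rna-seq   # singularity/4.1.0-nompi
```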

Features of the modules

These Singularity modules set several key environment variables to provide a smoother and more efficient user experience:
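One way to see what a module has set is to inspect the relevant variables after loading it. A minimal check (variable names as used in the MPI notes above; in a plain shell with no module loaded they will report as unset):

```shell
# Report whether a given environment variable is set, without modifying it
show_var() {
  val=$(printenv "$1")
  if [ -n "$val" ]; then
    echo "$1=$val"
  else
    echo "$1 is unset"
  fi
}

# After `module load singularity/4.1.0-mpi` these should be populated
show_var SINGULARITY_BINDPATH
show_var SINGULARITYENV_LD_LIBRARY_PATH
```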

...

External links