
Singularity is a container platform: it lets you create and run containers that package up pieces of software in a way that is portable and reproducible. With a few basic commands, you can set up a workflow to run on Pawsey systems using Singularity. This page introduces how to get or build a Singularity container image and run that image on HPC systems.

...

Prerequisites

Familiarity with:


Versions installed on Pawsey systems

To check the currently installed versions, use the module avail command (installed versions may differ from those shown here):

Terminal 1. Checking for installed versions

$ module avail singularity
------------------------------------ /software/setonix/2024.05/pawsey/modules -------------------------------------
   singularity/4.1.0-askap-gpu    singularity/4.1.0-mpi       singularity/4.1.0-slurm (D)
   singularity/4.1.0-askap        singularity/4.1.0-nohost
   singularity/4.1.0-mpi-gpu      singularity/4.1.0-nompi

Different "flavours" of Singularity are identified by the suffix after the version number. A detailed description of each flavour is provided in the sections below.

Getting container images and initialising Singularity

Check container availability and load the module

Singularity is installed on most Pawsey systems. Use module commands to check availability and the version installed:

$ module avail singularity

and then load the Singularity module. For applications that do not need MPI (such as many of the bioinformatics containers):

$ module load singularity/4.1.0-nompi

Or, for applications that need MPI:

$ module load singularity/4.1.0-mpi

To avoid errors when downloading and running containers, run the sequence of commands in the following terminal display:

Terminal 2. Setting ownership and permissions for the Singularity cache directory

$ mkdir -p /software/projects/<project-id>/<user-name>/.singularity
$ chown -hR $USER:$PAWSEY_PROJECT /software/projects/<project-id>/<user-name>/.singularity
$ find /software/projects/<project-id>/<user-name>/.singularity -type d -exec chmod g+s {} \;


Pull or build a container image

To provide the image that you want to run, either pull an existing container image or build a container image. 

...

salloc -n 1 -t 4:00:00 -I 

Pull an existing image from a container library

You can pull existing containers from a suitable registry such as Docker Hub, Biocontainers, RedHat Quay or Sylabs Container Library. For most users, this will be the most common way you will use containers. It's a good idea to check what containers are already available before deciding to build your own container. 

To import Docker images from, for example, Docker Hub, you can use the  singularity pull  command. As Docker images are written in layers, Singularity pulls the layers instead of just downloading the image, then combines them into a Singularity SIF format container. 

$ singularity pull --dir $MYSOFTWARE/singularity/myRepository docker://user/image:tag

...

  • The --dir flag specifies the location to which the image is downloaded
  • docker://  indicates that you're pulling from the Docker Hub registry 
  • user is the hub user 
  • image is the image or repository name you're pulling
  • tag is the Docker Hub tag that identifies which image to pull

Build a container image

To build a container image, we recommend using Docker, either on a local laptop or workstation or on a cloud virtual machine. For example, the Pawsey Nimbus Cloud has Ubuntu installations that come with both Singularity and Docker pre-installed. You cannot build a container image on Setonix because you will not have admin/sudo privileges. 

...

The resulting SIF file can then be transferred to Pawsey systems.
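The build-and-transfer workflow elided above can be sketched as follows. This is a hedged example, not Pawsey's exact procedure: the image name, tag, Dockerfile, and destination path are all placeholders, and `docker-daemon://` is one of several source URIs Singularity accepts for converting a local Docker image to SIF.

```shell
# Sketch only: assumes Docker and Singularity are both installed locally,
# e.g. on a Nimbus instance. "myapp" and its tag are placeholder names.

# build the image from a Dockerfile in the current directory
docker build -t myapp:1.0 .

# convert the local Docker image into a Singularity SIF file
singularity build myapp_1.0.sif docker-daemon://myapp:1.0
```

The SIF file can then be copied to a Pawsey filesystem with your usual transfer tool (for example `scp` or `rsync`) into your repository directory.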

Best practices for building and maintaining images 

Building images
Anchor
buildtips
buildtips

  • Minimize image size
    • Each distinct instruction (such as RUN, CMD, etc) in the Dockerfile generates another layer in the container, increasing its size
    • To minimize image size, combine related commands into a single multi-line instruction, and clean up package manager caches.

  • Avoid software bloat

    • Only install the software you need for a given application into a container.
  • Make containers modular
    • Creating giant, monolithic containers with every possible application you could need is bad practice. It increases image size, reduces performance, and increases complexity. Containers should only contain a few applications (ideally only one) that you'll use. You can chain together workflows that use multiple containers, meaning if you need to change a particular portion you only need to update a single, small container.

...

Listing 1. Dockerfile snippet highlighting best practices

# install packages in a Debian/Ubuntu container using apt-get
RUN apt-get update \
    && apt-get install -y \
       autoconf \
       automake \
       gcc \
       g++ \
       python \
       python-dev \
    && apt-get clean all \
    && rm -rf /var/lib/apt/lists/*


Managing your Singularity images

Unlike Docker containers, Singularity containers can be managed as simple files. We recommend that projects keep their Singularity containers in a small number of specific directories. For example, each user might store all of their own Singularity container .sif files in a repository directory such as $MYSOFTWARE/singularity/myRepository. For containers that will be used by several users in the group, we recommend that the repository be maintained as a shared directory, such as /scratch/$PAWSEY_PROJECT/singularity/groupRepository.
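As a runnable sketch of this layout (the fallback to $HOME is only so the snippet works outside Pawsey systems, where $MYSOFTWARE is not defined):

```shell
# Create a personal image repository directory and list the SIF files in it.
# The path follows the recommendation above; adjust for your own workflow.
REPO="${MYSOFTWARE:-$HOME}/singularity/myRepository"
mkdir -p "$REPO"
ls "$REPO"/*.sif 2>/dev/null || echo "no images yet in $REPO"
```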

...

$ singularity cache clean -f

Running jobs with Singularity

Job scripts require minimal modifications to run within a Singularity container. All that is needed is the singularity exec statement followed by the image name and then the name of the command to be run. Listing 2 shows an example script:

Listing 2. Modifying a Slurm job to run with Singularity

#!/bin/bash -l
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=1840M
#SBATCH --time=00:10:00
#SBATCH --partition=work

# load the singularity module, which provides
# the executable and also sets some environment variables
# (in this case, as no MPI is needed:)
module load singularity/<VERSION>-nompi

# define some useful environment variables for the container
# this would need to be updated for a user's workflow
export myRepository=$MYSOFTWARE/singularity/myRepository
export containerImage=$myRepository/image.sif
# define the command to be run. In this example we use ls
export mycommand=ls

# run the container
srun -N 1 -n 1 -c 1 singularity exec ${containerImage} ${mycommand}


...

Column

$ sbatch --account=<your-pawsey-project> singularity_job_script.sh

Bind mounting host directories

The Singularity configuration at Pawsey takes care of always bind mounting the scratch filesystem for you. You can mount additional host directories to the container with the following syntax:

...
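The bind-mount syntax itself is elided in this excerpt; as a sketch, Singularity's -B (bind) flag takes host:container path pairs. The paths and image name below are placeholders:

```shell
# Bind mount a host directory into the container at /data.
# Multiple mounts can be given as a comma-separated list.
singularity exec -B /path/on/host:/data,/another/host/dir:/opt/extra image.sif ls /data
```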

Tip: Using a fake home

A number of programs assume the existence of $HOME (the directory /home/<username>). We recommend that users bind mount a writable directory to home as a fake home:

$ singularity exec -B /path/to/fake/home:${HOME} <image name> <command>
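A minimal sketch of preparing such a fake home before launching the container (the location under $MYSCRATCH is an assumption; any writable directory works, and the fallback to /tmp is only so the snippet runs outside Pawsey systems):

```shell
# Create a writable directory to serve as the container's home directory.
FAKE_HOME="${MYSCRATCH:-/tmp}/fake_home"
mkdir -p "$FAKE_HOME"
chmod u+rwx "$FAKE_HOME"
# then run, for example:
#   singularity exec -B "$FAKE_HOME":"$HOME" <image name> <command>
```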


Sample use cases

We discuss several common use cases for containers that require some care. Each example shown below highlights the use of particular container features.

Running Python and R

For Singularity containers that have Python or R built-in, use the flag -e  (clean environment) to run the container with an isolated shell environment. This is because both Python and R make extensive use of environment variables and not using a fresh environment can pollute the container environment with pre-existing variables. If you need to read or write from a local directory, you may use the -e flag in conjunction with the -B flag.

...

$ unset $( env | grep ^PYTHON | cut -d = -f 1 | xargs )
$ srun singularity run docker://python:3.8 my_script.py
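The unset one-liner above strips every host environment variable whose name starts with PYTHON before the container runs. A small self-contained demonstration of how it works (the two variable values are illustrative):

```shell
# Set two typical host variables, then strip all PYTHON* variables
# exactly as the one-liner above does: list the environment, keep names
# starting with PYTHON, cut off the values, and unset them all.
export PYTHONPATH=/some/host/path
export PYTHONSTARTUP=/some/host/startup.py
unset $( env | grep ^PYTHON | cut -d = -f 1 | xargs )
env | grep ^PYTHON || echo "no PYTHON* variables remain"
```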

Using GPUs

Singularity allows users to make use of GPUs within their containers, for both NVIDIA and AMD GPUs. Nimbus uses NVIDIA GPUs, while Setonix uses AMD GPUs. To enable NVIDIA support, add the --nv runtime flag. To use AMD GPUs, add the --rocm flag to your singularity command instead of --nv.

Listing 3 shows an example of running Gromacs on GPUs through a ROCm-capable container:

Listing 3. Enabling a container to use GPUs

#!/bin/bash -l
#SBATCH --job-name=gpu
#SBATCH --gres=gpu:1
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Load Singularity
module load singularity/<VERSION>-mpi-gpu

# Define the container to use
export myRepository=$MYSOFTWARE/singularity/myRepository
export containerImage=$myRepository/gromacs_2018.2.sif

# Run Gromacs preliminary step with container
srun singularity exec --rocm $containerImage gmx grompp -f pme.mdp

# Run Gromacs MD with container
srun singularity exec --rocm $containerImage \
     gmx mdrun -ntmpi 1 -nb gpu -pin on -v \
     -noconfout -nsteps 5000 -s topol.tpr -ntomp 1


...

$ sbatch --account=<your-pawsey-project>-gpu --partition=gpu gpu.sh

For more information on how to use GPU partitions on Setonix see: Example Slurm Batch Scripts for Setonix on GPU Compute Nodes.

Using MPI

MPI applications can be run within Singularity containers. There are two requirements to do so:

...

Listing 4. Running an MPI application within a Singularity container

#!/bin/bash -l

#SBATCH --account=projectID
#SBATCH --job-name=mpi-OpenFOAMcase
#SBATCH --ntasks=512
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=00:20:00
#SBATCH --partition=work

# ---
# load Singularity
module load singularity/<VERSION>-mpi

# ---
# Note we avoid any inadvertent OpenMP threading by setting OMP_NUM_THREADS=1
export OMP_NUM_THREADS=1

# ---
# Set MPI-related environment variables for your system (see specific system guides).
export MPICH_OFI_STARTUP_CONNECT=1
export MPICH_OFI_VERBOSE=1

# ---
# Define the container to use
export theRepository=<pathToPersonalImages>
export containerImage=$theRepository/openfoam-v1912.sif

# ---
# cd into the case directory
cd $MYSCRATCH/myOpenFOAMCase

# ---
# execute the simpleFoam parallel solver with MPI
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c $OMP_NUM_THREADS \
     singularity exec $containerImage simpleFoam -parallel -fileHandler collated | tee log.simpleFoam

Notes:

  • srun is the Slurm wrapper for the MPI launcher.
  • -n is the flag to indicate the number of processes to be run (512 in this case, as it is the value of the Slurm variable SLURM_NTASKS).
  • Singularity modules at Pawsey have been prepared to automatically set up the right bind mounts of the host MPI paths into the SINGULARITY_BINDPATH and SINGULARITYENV_LD_LIBRARY_PATH variables, so no further action is needed for bind mounting.
  • -parallel  and -fileHandler collated flags are part of OpenFOAM. They are not Singularity or Slurm flags.

Singularity flavours on Pawsey Systems

Depending on the cluster, different Singularity modules may be available:

Setonix (HPE Cray EX) provides the singularity/VV-mpi, singularity/VV-mpi-gpu, singularity/VV-nompi and singularity/VV-nohost flavours, as shown in Terminal 1 above.

These modules differ in the flavour of the MPI library they bind mount into containers at runtime, and in whether or not they also bind mount the libraries required for CUDA-aware MPI:

...

  • singularity/VV-mpi:     Cray MPI (Setonix) or Intel MPI (other clusters). All ABI-compatible with MPICH.
  • singularity/VV-mpi-gpu: Cray MPI (Setonix) or Intel MPI (other clusters). All ABI-compatible with MPICH. With GPU-aware MPI.
  • singularity/VV-nompi:   For applications that do not require MPI communications (commonly bioinformatics applications).
  • singularity/VV-nohost:  For applications that require total isolation from the host environment (commonly bioinformatics applications).

Features of the modules

These Singularity modules set several key environment variables to provide a smoother and more efficient user experience:

...

To ensure that container images remain portable, Pawsey keeps bind mounting of host libraries to a minimum. The only case currently supported is the mounting of interconnect/MPI libraries, to maximise the performance of inter-node communication for MPI and CUDA-aware MPI enabled applications.

Related pages

External links