Singularity is a container platform: it lets you create and run containers that package up pieces of software in a way that is portable and reproducible. With a few basic commands, you can set up a workflow to run on Pawsey systems using Singularity. This page introduces how to get or build a Singularity container image and run that image on HPC systems.
Prerequisites
Familiarity with:
Getting container images and initialising Singularity
Check container availability and load the module
Singularity is installed on most Pawsey systems. Use module
commands to check availability and the version installed:
$ module avail singularity
and then load the module:
$ module load singularity/3.8.6-mpi
for applications that need MPI.
Or, for applications that do not need MPI (such as many of the bioinformatics containers):
$ module load singularity/3.8.6-nompi
To avoid errors when downloading and running containers, run the sequence of commands in the following terminal display:
$ mkdir -p /software/projects/<project-id>/<user-name>/.singularity
$ chown -hR $USER:$PAWSEY_PROJECT /software/projects/<project-id>/<user-name>/.singularity
$ find /software/projects/<project-id>/<user-name>/.singularity -type d -exec chmod g+s {} \;
Pull or build a container image
To provide the image that you want to run, either pull an existing container image or build a container image.
Pull containers on the compute nodes. This is particularly important for larger images because the compute nodes will perform better than the shared login nodes.
If you are developing a container, submit a Slurm interactive job allocation for a longer period of time than normally required to accommodate the download time needed for the container image. For example, to ask for 4 hours:
salloc -n 1 -t 4:00:00 -I
Pull an existing image from a container library
You can pull existing containers from a suitable registry such as Docker Hub, Biocontainers, RedHat Quay or Sylabs Container Library.
To import Docker images from, for example, Docker Hub, you can use the singularity pull command. As Docker images are written in layers, Singularity pulls the layers instead of just downloading the image, then combines them into a Singularity SIF format container.
$ singularity pull --dir $MYSCRATCH/singularity/myRepository docker://user/image:tag
In this example command:
- The --dir flag specifies the location to which the image is downloaded
- docker:// indicates that you're pulling from the Docker Hub registry
- user is the hub user
- image is the image or repository name you're pulling
- tag is the Docker Hub tag that identifies which image to pull
Build a container image
To build a container image, we recommend using Docker, either on a local laptop or workstation or on a cloud virtual machine. For example, the Pawsey Nimbus Cloud has Ubuntu installations that come with both Singularity and Docker pre-installed.
Docker is recommended for:
- Compatibility, portability and shareability: Docker images can be run by any container runtime, while Singularity images can only be run by Singularity.
- Ease of development: layer caching in Docker may significantly speed up the process of performing repeated image builds. In addition, Docker allows writing in containers by default, allowing for tests on the fly.
- Community adoption: community experience and know-how in writing good image recipes focuses on Docker and Dockerfiles.
Information on Dockerfile syntax can be found at Dockerfile reference (external site).
The following commands are meant to be run on a local computer or a cloud virtual machine. They cannot be run on Pawsey systems.
Once you've written a Dockerfile, you can use it to build a container image.
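For instance, a minimal Dockerfile might look like the following sketch; the base image and package choices here are illustrative only, and should be replaced with whatever your application actually needs:

```dockerfile
# illustrative base image; choose one appropriate for your application
FROM ubuntu:22.04

# install only what the application needs, cleaning caches in the same layer
RUN apt-get update \
    && apt-get install -y python3 \
    && apt-get clean all \
    && rm -rf /var/lib/apt/lists/*

# illustrative default command
CMD ["python3"]
```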
$ sudo docker build -t image:tag .
If you have Singularity installed on the same machine, you can convert the Docker image into the Singularity SIF format.
$ singularity pull image_tag.sif docker-daemon:image:tag
Then this SIF file can be transferred to Pawsey systems.
Best practices for building and maintaining images
Building images
- Minimize image size
  - Each distinct instruction (such as RUN) in the Dockerfile generates another layer in the container, increasing its size. To minimize image size, use multi-line commands and clean up package manager caches.
- Avoid software bloat
  - Only install the software you need for a given application into a container.
- Make containers modular
  - Creating giant, monolithic containers with every possible application you could need is bad practice. It increases image size, reduces performance, and increases complexity. Containers should only contain a few applications (ideally only one) that you'll use. You can chain together workflows that use multiple containers, meaning if you need to change a particular portion you only need to update a single, small container.
There are websites which provide detailed instructions for writing good Docker recipes, such as Best practices for writing Dockerfiles (external site). A simple snippet is provided in listing 1:
# install packages on debian/ubuntu linux os container using apt-get command
RUN apt-get update \
    && apt-get install -y \
         autoconf \
         automake \
         gcc \
         g++ \
         python \
         python-dev \
    && apt-get clean all \
    && rm -rf /var/lib/apt/lists/*
Managing your Singularity images
Unlike Docker containers, Singularity containers can be managed as simple files. We recommend that projects keep their Singularity containers in a small number of specific directories. For example, each user might store all of their own Singularity container .sif files in a repository directory such as $MYSCRATCH/singularity/myRepository. For containers that will be used by several users in the group, we recommend that the repository be maintained as a shared directory, such as /scratch/$PAWSEY_PROJECT/singularity/groupRepository.
When pulling Singularity images, many files and a copy of the images themselves are saved in the cache. Singularity modules at Pawsey define the cache location as $MYSCRATCH/.singularity/cache. This is to avoid problems with the restricted quota of /home, which is the default Singularity cache location.
To see all of the copies of the images that currently exist in the cache, use the singularity cache list command.
$ singularity cache list
NAME                 DATE CREATED          SIZE      TYPE
ubuntu_latest.sif    2019-10-21 13:19:50   28.11 MB  library
ubuntu_18.04.sif     2019-10-21 13:19:04   37.10 MB  library
ubuntu_18.04.sif     2019-10-21 13:19:40   25.89 MB  oci

There are 3 container file(s) using: 91.10 MB, 6 oci blob file(s) using 26.73 MB of space. Total space used: 117.83 MB
When you have finished building or pulling your containers, clean the cache. To wipe everything, use the -f flag:
$ singularity cache clean -f
Running jobs with Singularity
Job scripts require minimal modifications to run within a Singularity container. All that is needed is the singularity exec statement followed by the image name and then the name of the command to be run. Listing 2 shows an example script:
#!/bin/bash -l
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=1840M
#SBATCH --time=00:10:00
#SBATCH --partition=work

# load the singularity module which provides
# the executable and also sets some environment variables
# (In this case, as no mpi is needed:)
module load singularity/3.8.6-nompi

# define some useful environment variables for the container
# this would need to be updated for a user's workflow
export myRepository=$MYSCRATCH/singularity/myRepository
export containerImage=$myRepository/image.sif

# define command to be run. In this example we use ls
export mycommand=ls

# run the container
srun -N 1 -n 1 -c 1 singularity exec ${containerImage} ${mycommand}
Then submit the script to Slurm as follows:
$ sbatch --account=<your-pawsey-project> singularity_job_script.sh
Bind mounting host directories
The Singularity configuration at Pawsey takes care of always bind mounting the scratch filesystem for you. You can mount additional host directories to the container with the following syntax:
$ singularity exec -B /path/to/host/directory:/path/in/container <image name> <command>
Notes:
- singularity exec allows you to execute the container with a specific command that is placed at the end of the string.
- -B is the flag for bind mounting the directory to the container.
- You can either remove :/path/in/container or use :/home if you do not know the path in the container that you require to run the command from.
Using a fake home
A number of programs assume the existence of $HOME (which is the directory /home/<username>). We recommend that users bind mount a writable directory to home as a fake home.
$ singularity exec -B /path/to/fake/home:${HOME} <image name> <command>
Sample use cases
We discuss several common use cases for containers that require some care. Each example shown below highlights the use of particular container features.
Running Python and R
For Singularity containers that have Python or R built in, use the -e (clean environment) flag to run the container with an isolated shell environment. Both Python and R make extensive use of environment variables, and starting without a fresh environment can pollute the container environment with pre-existing variables. If you need to read or write from a local directory, you may use the -e flag in conjunction with the -B flag.
$ singularity run -e docker://rocker/tidyverse
$ singularity run -B /path/to/host/directory:/path/in/container,/path/to/fake/home:${HOME} -e docker://rocker/tidyverse
There can be specific cases where isolating the shell environment is not feasible, for instance if you're running MPI+Python code, which needs to access scheduler environment variables. Here, a possible workaround is to unset all Python-related variables in the host shell environment and then proceed to execute the container as usual.
$ unset $( env | grep ^PYTHON | cut -d = -f 1 | xargs )
$ srun singularity run docker://python:3.8 my_script.py
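As a sketch of what this unset pipeline does, you can try it on dummy values in an ordinary shell; the PYTHONPATH and PYTHONSTARTUP values below are illustrative only:

```shell
# set dummy Python-related variables to mimic a polluted host environment
export PYTHONPATH=/fake/site-packages
export PYTHONSTARTUP=/fake/startup.py

# unset every exported variable whose name starts with PYTHON
unset $( env | grep ^PYTHON | cut -d = -f 1 | xargs )

# confirm nothing Python-related remains
env | grep ^PYTHON || echo "no PYTHON variables set"
```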
Using GPUs
Singularity allows users to make use of GPUs within their containers by adding the runtime flag --nv (enable NVIDIA support).
Listing 3 shows an example of running Gromacs, a popular molecular dynamics package, among the ones that have been optimised to run on GPUs through NVIDIA containers:
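The original listing is not reproduced here; a minimal sketch of such a batch script might look as follows. The module version, image path, partition and Gromacs input file are illustrative assumptions, not Pawsey-confirmed values:

```shell
#!/bin/bash -l
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1
#SBATCH --time=00:10:00

# load the singularity module (version name is illustrative)
module load singularity/3.8.6-nompi

# hypothetical location of a Gromacs container image
export containerImage=$MYSCRATCH/singularity/myRepository/gromacs.sif

# --nv exposes the host NVIDIA driver and GPU devices inside the container
srun singularity exec --nv $containerImage gmx mdrun -s input.tpr
```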
The bash script can then be submitted to Slurm as follows:
$ sbatch --account=<your-pawsey-project> --partition=gpuq gpu.sh
Using MPI
MPI applications can be run within Singularity containers. There are two requirements to do so:
- A host MPI installation is required to spawn MPI processes. All Pawsey systems have at least one MPICH ABI (Application Binary Interface) compatible implementation installed; non-Cray clusters also have OpenMPI.
- In the container, an ABI-compatible MPI installation is required to compile the application. Pawsey maintains MPI base images on both DockerHub and RedHat Quay.
Below is an example of a Slurm batch script for running OpenFOAM, a Computational Fluid Dynamics package, from a container built on the MPICH pawsey/mpich-base image with OpenFOAM compiled on top.
Use the following script to run the simpleFoam parallel solver (it is assumed that the usual preprocessing of the OpenFOAM case has already been performed):
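The original script is not reproduced here; a minimal sketch consistent with the notes that follow might look like this. The module version, time limit and image path are illustrative assumptions:

```shell
#!/bin/bash -l
#SBATCH --ntasks=512
#SBATCH --time=01:00:00
#SBATCH --partition=work

# load the MPI-enabled singularity module (version name is illustrative)
module load singularity/3.8.6-mpi

# hypothetical location of the OpenFOAM container image
export containerImage=$MYSCRATCH/singularity/myRepository/openfoam.sif

# srun launches the MPI processes; -parallel and -fileHandler collated are OpenFOAM flags
srun -n $SLURM_NTASKS singularity exec $containerImage simpleFoam -parallel -fileHandler collated
```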
Notes:
- srun is the Slurm wrapper for the MPI launcher.
- -n is the flag to indicate the number of processes to be run (512 in this case, as it is the value of the Slurm variable SLURM_NTASKS).
- Singularity modules at Pawsey have been prepared to automatically set up the right bind mounts of the host MPI paths into the SINGULARITY_BINDPATH and SINGULARITYENV_LD_LIBRARY_PATH variables, so no further action is needed for bind mounting.
- The -parallel and -fileHandler collated flags are part of OpenFOAM. They are not Singularity or Slurm flags.
Singularity on Pawsey Systems
Depending on the cluster, different Singularity modules may be available:
| Cluster | singularity/VV-mpi | singularity/VV-nompi | singularity-openmpi | singularity-openmpi-gpu |
|---|---|---|---|---|
| Setonix (HPE Cray Ex) | X | X | | |
| Topaz (GPU) | X | X | X | |
| Garrawarla (GPU) | X | X | X | |
These modules differ in the flavour of the MPI library they bind mount in the containers at runtime, and in whether or not they also bind mount the libraries required for CUDA-aware MPI:
- singularity, singularity/VV-mpi: Cray MPI (Setonix) or Intel MPI (others). All ABI compatible with MPICH.
- singularity/VV-nompi: For applications that do not require MPI communications (commonly bioinformatics applications).
- singularity-openmpi: OpenMPI.
- singularity-openmpi-gpu: OpenMPI built with CUDA support and any other libraries required by CUDA-aware MPI (for example: gdrcopy).
Features of the modules
These singularity modules set several key environment variables to provide a smoother and more efficient user experience:

- SINGULARITY_BINDPATH: Defines the default paths bind mounted into containers.
- SINGULARITYENV_LD_LIBRARY_PATH: Defines the default paths to search for dynamic libraries.
- SINGULARITYENV_LD_PRELOAD: Defines additional dynamic libraries to load when containerised applications are executed.
- SINGULARITY_CACHEDIR: Defines the location where Singularity stores intermediate blobs and full images that are downloaded by users. The default would be to store them in the user's HOME, which would not work due to the 1 GB quota. This is updated to include /software/<project-id>/<user-name>/.singularity, to provide a viable storage location for container images (this may change in newer modules).
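One quick way to see what a loaded module has set is to echo the variables after loading it; the module version shown is illustrative:

```shell
$ module load singularity/3.8.6-nompi
$ echo $SINGULARITY_BINDPATH
$ echo $SINGULARITY_CACHEDIR
```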
The module updates SINGULARITY_BINDPATH, SINGULARITYENV_LD_LIBRARY_PATH and SINGULARITYENV_LD_PRELOAD in the following ways:

- Include common critical directories, such as /scratch, /astro and /askapbuffer, for user data.
- Include /software for people needing to use pre-installed libraries from the software stack (typically MPI libraries).
- Include paths containing interconnect and MPI libraries (distinct paths for each cluster) to ensure near-native performance for MPI and CUDA-aware MPI applications.
To ensure that container images are portable, Pawsey-provided containers keep host libraries to a minimum. The only case currently supported by Pawsey is the mounting of interconnect/MPI libraries, to maximise the performance of inter-node communication for MPI and CUDA-aware MPI enabled applications.
Related pages
External links
- Singularity Quick Start
- Dockerfile reference
- For specific details about containerised OpenFOAM tools and usage, refer to the OpenFOAM documentation.