Singularity is a container platform: it lets you create and run containers that package up pieces of software in a way that is portable and reproducible. With a few basic commands, you can set up a workflow to run on Pawsey systems using Singularity. This page introduces how to get or build a Singularity container image and run that image on HPC systems.
...
$ module load singularity/34.81.60-nompi
Or, for applications that need MPI:
$ module load singularity/34.81.60-mpi
To avoid errors when downloading and running containers, run the sequence of commands in the following terminal display:
...
To import Docker images from, for example, Docker Hub, you can use the singularity pull command. Because Docker images are written in layers, Singularity pulls the layers rather than downloading a single image file, and then combines them into a Singularity SIF format container.
$ singularity pull --dir $MYSCRATCH$MYSOFTWARE/singularity/myRepository docker://user/image:tag
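When no output name is given, the pulled image's filename is derived from the image reference: docker://user/image:tag becomes image_tag.sif. A minimal shell sketch of that naming rule (the sif_name helper below is ours, for illustration only — it is not part of Singularity):

```shell
# sif_name: mimic Singularity's default output filename for a pulled image.
# docker://user/image:tag  ->  image_tag.sif
sif_name() {
    local ref="${1#docker://}"    # strip the transport prefix
    local name="${ref##*/}"       # keep only the final path component
    echo "${name/:/_}.sif"        # replace the tag separator ':' with '_'
}

sif_name docker://user/image:tag   # -> image_tag.sif
sif_name docker://ubuntu:22.04     # -> ubuntu_22.04.sif
```

Knowing the resulting filename in advance is useful when scripting: the job script can test for the .sif file and pull it only if it is missing.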
...
Unlike Docker containers, Singularity containers can be managed as simple files. We recommend that projects keep their Singularity containers in a small number of specific directories. For example, each user might store all of their own Singularity container .sif files in a repository directory such as $MYSCRATCH$MYSOFTWARE/singularity/myRepository. For containers that will be used by several users in the group, we recommend that the repository be maintained as a shared directory, such as /scratch/$PAWSEY_PROJECT/singularity/groupRepository.
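As a sketch, a group repository can be created and opened to the project group like this (the path follows the example above; the chmod choices are our suggestion, not a Pawsey requirement):

```shell
# Create a shared repository for the project's .sif images and let the
# whole project group read and traverse it. The setgid bit (g+s) makes
# files created inside inherit the directory's group.
repo="/scratch/$PAWSEY_PROJECT/singularity/groupRepository"
mkdir -p "$repo"
chmod g+rxs "$repo"
```

Group members can then reference images by the shared path in their job scripts without keeping private copies.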
...
Listing 2. Modifying a Slurm job to run with Singularity

#!/bin/bash -l
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=1840M
#SBATCH --time=00:10:00
#SBATCH --partition=work
# load the singularity module which provides
# the executable and also sets some environment variables
# (In this case, as no mpi is needed:)
module load singularity/34.81.60-nompi
# define some useful environment variables for the container
# this would need to be updated for a user's workflow
export myRepository=$MYSCRATCH$MYSOFTWARE/singularity/myRepository
export containerImage=$myRepository/image.sif
# define command to be run. In this example we use ls
export mycommand=ls
# run the container with
srun -N 1 -n 1 -c 1 singularity exec ${containerImage} ${mycommand}
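The --mem=1840M request in the listing above corresponds to one core's even share of memory on a 128-core work-partition node (our reading of Pawsey's recommended per-core memory; adjust if your tasks need more). A quick sanity check of the arithmetic:

```shell
# 1840 MB per core x 128 cores per node = 235520 MB (~230 GB),
# i.e. an even per-core share of a work node's usable memory
echo $(( 1840 * 128 ))
```

If a single task needs more memory, raise --mem and Slurm will simply leave some cores on the node unused by other jobs' accounting.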
...
Listing 3. Enabling a container to use GPUs

#!/bin/bash -l
#SBATCH --job-name=gpu
#SBATCH --gres=gpu:1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
# Load Singularity
module load singularity/34.81.60-mpi
# Define the container to use
export myRepository=$MYSCRATCH$MYSOFTWARE/singularity/myRepository
export containerImage=$myRepository/gromacs_2018.2.sif
# Run Gromacs preliminary step with container
srun singularity exec --rocm $containerImage gmx grompp -f pme.mdp
# Run Gromacs MD with container
srun singularity exec --rocm $containerImage \
gmx mdrun -ntmpi 1 -nb gpu -pin on -v \
-noconfout -nsteps 5000 -s topol.tpr -ntomp 1
...
Listing 4. Running an MPI application within a Singularity container

#!/bin/bash -l
#SBATCH --account=projectID
#SBATCH --job-name=mpi-OpenFOAMcase
#SBATCH --ntasks=512
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=00:20:00
#SBATCH --partition=work
# ---
# load Singularity
module load singularity/34.81.60-mpi
# ---
# Note we avoid any inadvertent OpenMP threading by setting OMP_NUM_THREADS=1
export OMP_NUM_THREADS=1
# ---
# Set MPI related environment variables for your system (see specific system guides).
export MPICH_OFI_STARTUP_CONNECT=1
export MPICH_OFI_VERBOSE=1
# ---
# Define the container to use
export theRepository=<pathToPersonalImages>
export containerImage=$theRepository/openfoam-v1912.sif
# ---
# cd into the case directory
cd $MYSCRATCH/myOpenFOAMCase
# ---
# execute the simpleFoam parallel solver with MPI
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c $OMP_NUM_THREADS \
singularity exec $containerImage simpleFoam -parallel -fileHandler collated | tee log.simpleFoam
Notes:
- srun is the Slurm wrapper for the MPI launcher.
- -n is the flag that indicates the number of processes to run (512 in this case, the value of the Slurm variable SLURM_NTASKS).
- Singularity modules at Pawsey have been prepared to automatically set up the right bind mounts of the host MPI paths via the SINGULARITY_BINDPATH and SINGULARITYENV_LD_LIBRARY_PATH variables, so no further action is needed for bind mounting.
- The -parallel and -fileHandler collated flags are part of OpenFOAM. They are not Singularity or Slurm flags.
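If you need extra bind mounts of your own (for example a shared data directory), note that SINGULARITY_BINDPATH is a comma-separated list, so append to it rather than overwrite what the module has already set. A sketch, where the extra path is a hypothetical example:

```shell
# Append an extra host path to the bind list without clobbering the
# entries the singularity module already set. The ${var:+...} expansion
# adds the comma only when the variable is already non-empty.
extra="/scratch/$PAWSEY_PROJECT/shared_data"
export SINGULARITY_BINDPATH="${SINGULARITY_BINDPATH:+$SINGULARITY_BINDPATH,}$extra"
echo "$SINGULARITY_BINDPATH"
```

The same idiom works whether the module has set the variable or not, so it is safe to use in any job script.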
...