Singularity is a container platform: it lets you create and run containers that package up pieces of software in a way that is portable and reproducible. With a few basic commands, you can set up a workflow to run on Pawsey systems using Singularity. This page introduces how to get or build a Singularity container image and run that image on HPC systems.
...
Singularity on Pawsey Systems
...
**Warning:** Due to a problem with MPI library binding in Singularity containers on Setonix, containerised MPI applications currently cannot run multinode jobs. Single-node jobs still work fine. We are working on resolving this problem.
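Until the multinode issue is resolved, containerised MPI jobs can still be submitted on a single node. A minimal Slurm sketch, assuming a hypothetical image `my_app.sif` and binary `my_mpi_app` (`VV` is this page's module version placeholder; check `module avail singularity` for real versions):

```shell
#!/bin/bash -l
#SBATCH --job-name=mpi-container
#SBATCH --nodes=1              # single node only, per the warning above
#SBATCH --ntasks=16
#SBATCH --time=00:10:00

# Module, image and binary names are illustrative, not verbatim from this page
module load singularity/VV-mpi
srun -n ${SLURM_NTASKS} singularity exec my_app.sif my_mpi_app
```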
Depending on the cluster, up to three distinct Singularity modules may be available:
Cluster | `singularity`, `singularity/VV-mpi` | `singularity/VV-nompi` | `singularity-openmpi` | `singularity-openmpi-gpu`
---|---|---|---|---
Setonix (HPE Cray EX) | X | X | |
Topaz (GPU) | X | | X | X
Garrawarla (GPU) | X | | X | X
These modules differ in the flavour of MPI library they bind mount into the containers at runtime, and in whether they also bind mount the libraries required for CUDA-aware MPI:

`singularity`, `singularity/VV-mpi`
: Cray MPI (Setonix) or Intel MPI (other clusters); all ABI-compatible with MPICH

`singularity/VV-nompi`
: for applications that do not require MPI communication (commonly bioinformatics applications)

`singularity-openmpi`
: OpenMPI

`singularity-openmpi-gpu`
: OpenMPI built with CUDA support, plus any other libraries required by CUDA-aware MPI (for example, `gdrcopy`)
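With the modules above, a typical non-MPI run might look like the following sketch (the versions shown by `module avail` will differ; the image and tool names are hypothetical):

```shell
module avail singularity            # list the Singularity modules on this cluster
module load singularity/VV-nompi    # no MPI bind mounts needed, e.g. bioinformatics
singularity exec bio_tools.sif samtools --version   # image and tool are examples only
```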
...
To ensure that container images remain portable, Pawsey-provided containers keep bind-mounted host libraries to a minimum. The only case currently supported by Pawsey is the mounting of interconnect/MPI libraries, to maximise the performance of inter-node communication for MPI and CUDA-aware MPI enabled applications.
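One way to see what gets bind mounted at runtime is to inspect Singularity's standard bind-path variable and the libraries a containerised binary resolves; whether the Pawsey modules set `SINGULARITY_BINDPATH` specifically is an assumption, as are the image and binary names:

```shell
module load singularity/VV-mpi          # VV = version placeholder from this page
echo "$SINGULARITY_BINDPATH"            # host paths bind mounted into containers

# Confirm the containerised binary resolves the host MPI library
# ("my_app.sif" and the binary path are hypothetical):
singularity exec my_app.sif ldd /usr/local/bin/my_mpi_app | grep -i mpi
```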
...