Using Containers

Workflow


One of the benefits of working with containers is that they can simplify software installation and deployment. On our HPC systems we recommend Singularity because of its HPC-friendly design, and we extend that recommendation to Nimbus. If you are familiar with Docker, you can also use existing Docker images with Singularity.

Singularity and Docker are pre-installed on all Nimbus images with the "- Pawsey" suffix. If you have selected such an image for your instance, you can skip step 1. For more information on Nimbus images, see the Nimbus documentation.

The workflow for Singularity is as follows:

  1. Download and install Singularity 
  2. Build a Singularity container, either by:
    1. Pulling an existing container from Docker Hub or the Sylabs Container Library, or
    2. Building a container image yourself (Docker is recommended)
  3. Run the image using Singularity

Setup


Installation

Install Singularity (v3.5.3) on Nimbus with the following instructions:

Install system dependencies:

>sudo apt-get update && \
  sudo apt-get install -y build-essential \
  libseccomp-dev pkg-config squashfs-tools cryptsetup

Install the latest Go (v1.14.2 at the time of writing):

>export VERSION=1.14.2 OS=linux ARCH=amd64
>wget -O /tmp/go${VERSION}.${OS}-${ARCH}.tar.gz https://dl.google.com/go/go${VERSION}.${OS}-${ARCH}.tar.gz && \
  sudo tar -C /usr/local -xzf /tmp/go${VERSION}.${OS}-${ARCH}.tar.gz

Set up the environment for Go:

>echo 'export GOPATH=${HOME}/go' >> ~/.bashrc && \
  echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' >> ~/.bashrc && \
  source ~/.bashrc
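Note that the commands above append to ~/.bashrc unconditionally, so re-running them duplicates the entries. A minimal sketch of an idempotent variant, using a scratch file in place of ~/.bashrc and a hypothetical add_line helper:

```shell
# Use a scratch file so repeated runs are easy to demonstrate;
# point rcfile at ~/.bashrc in real use.
rcfile=$(mktemp)

add_line() {
    # Append a line only if it is not already present verbatim
    grep -qxF "$1" "$rcfile" || echo "$1" >> "$rcfile"
}

add_line 'export GOPATH=${HOME}/go'
add_line 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin'
# Running it again is a no-op
add_line 'export GOPATH=${HOME}/go'

grep -c 'export GOPATH' "$rcfile"   # prints 1, not 2
```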

Install golangci-lint:

>curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | \
  sh -s -- -b $(go env GOPATH)/bin v1.15.0

Download the Singularity version 3.5.3 release:

>mkdir -p ${GOPATH}/src/github.com/sylabs && \
  cd ${GOPATH}/src/github.com/sylabs && \
  git clone https://github.com/sylabs/singularity.git && \
  cd singularity
>git checkout v3.5.3

Then compile it:

>cd ${GOPATH}/src/github.com/sylabs/singularity && \
  ./mconfig && \
  cd ./builddir && \
  make && \
  sudo make install

Check your installation:

> singularity version
3.5.3

You may like to add the following line to your ~/.bashrc file for bash completions to work with Singularity commands every time you open a new shell:

>. /usr/local/etc/bash_completion.d/singularity

Pull an existing image

The following commands let you pull existing images from various container repositories, as well as build from existing containers or from a definition file. For a more in-depth tutorial on these commands, see our tutorial page at https://pawseysc.github.io/sc19-containers/ or the Singularity user guide at https://sylabs.io/guides/3.5/user-guide/quick_start.html#download-pre-built-images.

The default location for the Singularity image cache is $HOME/.singularity/cache. You can move it to your /data directory, as the /home volume is usually much smaller. Redefine the path to the cache directory by setting the SINGULARITY_CACHEDIR environment variable, e.g.:

>sudo mkdir /data/singularity_cache
>export SINGULARITY_CACHEDIR=/data/singularity_cache
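If you script this, it can help to fall back to the default cache location when the variable is unset. A small sketch (the default path is the one documented above; CACHE_DIR is just an illustrative variable name):

```shell
# Use $SINGULARITY_CACHEDIR if set, otherwise Singularity's default
CACHE_DIR=${SINGULARITY_CACHEDIR:-$HOME/.singularity/cache}

# Make sure the directory exists before pulling images into it
mkdir -p "$CACHE_DIR"
echo "Image cache: $CACHE_DIR"
```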

To import Docker images from e.g. Docker Hub, you can use the singularity pull command. As Docker images are written in layers, Singularity pulls the layers instead of just downloading the image, then combines them into a Singularity SIF format container.

>sudo mkdir sif_lib
>sudo singularity pull --dir sif_lib docker://library/image:tag
  • The --dir flag specifies the directory the image will be downloaded to, e.g. sif_lib in this case
  • docker:// indicates that you're pulling from the Docker Hub library 
  • library is the hub user 
  • image is the image or repository name you're pulling
  • tag is the Docker Hub tag that identifies which image to pull
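The components of the pull URI can be illustrated with plain shell parameter expansion (the values here are just the placeholders from the command above):

```shell
uri="docker://library/image:tag"

ref=${uri#docker://}       # strip the transport -> library/image:tag
user=${ref%%/*}            # hub user            -> library
name=${ref#*/}             # image:tag
name=${name%%:*}           # image name          -> image
tag=${ref##*:}             # tag                 -> tag

echo "$user $name $tag"    # prints: library image tag
```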

Build a container image

If you need to build your own container image, we recommend you do so using Docker. There are some good reasons for this choice:

  • Compatibility, portability and shareability: Docker images can be run by any container runtime, whereas Singularity images can only be run by Singularity
  • Ease of development: layer caching in Docker may significantly speed up the process of performing repeated image builds; in addition, Docker allows writing in containers by default, allowing for tests on-the-fly
  • Community adoption: community experience and know-how in writing good image recipes focuses on Docker and Dockerfiles

Information on the Dockerfile syntax can be found at Dockerfile reference.
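As a starting point, here is a minimal sketch of a Dockerfile; the base image and package are illustrative only:

```dockerfile
# Start from a small, well-known base image
FROM ubuntu:18.04

# Install only what the application needs, then clean up the
# package manager cache to keep the image small (see Build Tips below)
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 \
    && apt-get clean all \
    && rm -rf /var/lib/apt/lists/*

# Default command when the container is run
CMD ["python3", "--version"]
```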

Once you've written a Dockerfile, you can use it to build a container image with Docker via

> sudo docker build -t image:tag .

Then, you can convert the Docker image into the Singularity SIF format using:

> singularity pull image_tag.sif docker-daemon:image:tag

Working with Images


To see all of the images in your local Singularity cache, you can use the singularity cache list command:

>sudo singularity cache list

NAME                     DATE CREATED           SIZE             TYPE
ubuntu_latest.sif        2019-10-21 13:19:50    28.11 MB         library
ubuntu_18.04.sif         2019-10-21 13:19:04    37.10 MB         library
ubuntu_18.04.sif         2019-10-21 13:19:40    25.89 MB         oci

There are 3 containers using: 91.10 MB, 6 oci blob file(s) using 26.73 MB of space.
Total space used: 117.83 MB

If you are running out of disk space, you can clean up the cache; e.g. to wipe everything, use the -a flag (from Singularity version 3.4 onwards, use -f instead):

>sudo singularity cache clean -a

Running Singularity


To run Singularity, there are two main commands that you will use.

>sudo singularity run sif_lib/your-container.sif 
  • This command assumes that your container has a runscript, which will be executed when you run the container as above.

>sudo singularity exec sif_lib/your-container.sif <command>
  • This command allows you to execute a specific command that you want the container to run. Replace <command> accordingly.

Bind mounting host directories


With Singularity you can mount a host directory into the container. For example, you may want to mount a directory where you would like to read and write files from within the container. The syntax required to do so is:

>sudo singularity exec -B /path/to/host/directory:/path/in/container sif_lib/your-container.sif <command>
  • singularity exec runs the container with the specific command placed at the end of the line
  • -B is the flag for bind mounting the directory to the container
  • Note: you can also remove ":/path/in/container" if you just need to bind a path/directory to the main directory of the container.

Containers with Python and R


Note that for Singularity containers that have Python or R built in, the -e flag is required to run the container with an isolated shell environment. If you also need to read and/or write from a local directory, you can use the -e flag in conjunction with the -B flag:

>sudo singularity exec -e sif_lib/rstudio_latest.sif R
>sudo singularity exec -B /path/to/host/directory:/path/in/container -e sif_lib/rstudio_latest.sif R
  • sif_lib is the directory you pulled your container to.
  • rstudio_latest.sif is a container that was pulled as such: sudo singularity pull docker://rocker/rstudio
  • R is the command that you are asking the container to execute, which in this case opens up R from within the container.
  • Note: you can also remove ":/path/in/container" if you just need to bind a path/directory to the main directory of the container.

Using GPUs


Singularity lets users make use of GPUs within their containers by adding the runtime flag --nv. Here's an example of running Gromacs, a popular molecular dynamics package and one of several that have been optimised to run on GPUs through Nvidia containers:

>export SINGULARITY_CACHEDIR=/data/singularity_cache

# Run Gromacs preliminary step with container
>sudo singularity exec --nv $SIFPATH/gromacs_2018.2.sif \
    gmx grompp -f pme.mdp

# Run Gromacs MD with container
>sudo singularity exec --nv $SIFPATH/gromacs_2018.2.sif \
    gmx mdrun -ntmpi 1 -nb gpu -pin on -v \
    -noconfout -nsteps 5000 -s topol.tpr -ntomp 1

Build Tips & Tricks


In general, here are some recommended best practices:

  • Minimize image size
    • Each distinct instruction (such as RUN) in the Dockerfile generates another layer in the container, increasing its size
    • To minimize image size, use multi-line commands, and clean up package manager caches:

      RUN apt-get update \
          && apt-get install -y \
             autoconf \
             automake \
             gcc \
             g++ \
             python \
             python-dev \
          && apt-get clean all \
          && rm -rf /var/lib/apt/lists/*
  • Software bloat

    • Only install the software you need for a given application into a container
  • Modularity
    • Creating giant, monolithic containers with every possible application you could need is bad practice. It increases image size, reduces performance, and increases complexity. Containers should contain only a few applications (ideally just one) that you'll use. You can chain together workflows that use multiple containers, so that if you need to change a particular portion you only need to update a single, small container.

Common issues


Some users have reported issues with loop devices when using Singularity to run multiple containers, e.g.:

FATAL:   container creation failed: mount /proc/self/fd/3->/usr/local/var/singularity/mnt/session/rootfs error: while mounting image /proc/self/fd/3: failed to find loop device: could not attach image file to loop device: no loop devices available

To resolve this, you can run the following to increase the number of loop devices to 256, which is Singularity's default maximum:

# Add max_loop=256 to the GRUB_CMDLINE_LINUX value in /etc/default/grub

$ sudo sed -i 's/GRUB_CMDLINE_LINUX=/GRUB_CMDLINE_LINUX="max_loop=256"/g' /etc/default/grub
$ sudo update-grub2 
$ sudo shutdown -r now

# SSH back into the instance and verify that max_loop=256 is present

$ cat /proc/cmdline 
BOOT_IMAGE=/boot/vmlinuz-5.15.0-78-generic root=UUID=7226c063-0abb-4b72-a414-b7395a2d5e26 ro max_loop=256 console=tty1 console=ttyS0


Additional Documentation