The following topics cover the most common situations in which users may need to explicitly invoke singularity (the container engine installed at Pawsey) when using OpenFOAM. If you are not at all familiar with containers, we recommend having a look at our specific documentation on the topic: Containers.

Explicit use of the singularity command and the .sif image

Previously, the only way of using a containerised OpenFOAM tool was to invoke that tool through the singularity command and the name of the image. This procedure can still be used and is recommended for users who bring their own containerised OpenFOAM installation. So, for example, if a user has a functional image named myopenfoam-8.sif, they can still load a singularity module with MPI capabilities and then use singularity commands to access/use the containerised solvers. For example, a quick test of the classical pimpleFoam solver:

Column
width900px


Code Block
languagebash
themeDJango
titleTerminal 1. Explicit use of the singularity command with user's own container
$ module load singularity/<VERSION>-mpi
$ export SINGULARITY_CONTAINER=/PathToTheSingularityImage/myopenfoam-8.sif
$ singularity exec $SINGULARITY_CONTAINER pimpleFoam -help

Usage: pimpleFoam [OPTIONS]
options:
  -case <dir>       specify alternate case directory, default is the cwd
  -fileHandler <handler>
                    override the fileHandler
  -hostRoots <(((host1 dir1) .. (hostN dirN))>
                    slave root directories (per host) for distributed running
  -libs <(lib1 .. libN)>
                    pre-load libraries
...
  -srcDoc           display source code in browser
  -doc              display application documentation in browser
  -help             print the usage

Using: OpenFOAM-8 (see https://openfoam.org)
Build: 8-30b264cc33cd


(Note that PathToTheSingularityImage is only a placeholder for the real path to the user's image.) Check our documentation about Singularity for further information about the use of containers.


As mentioned before, Pawsey provides OpenFOAM modules that make use of containerised versions of OpenFOAM on Setonix. One advantage of these modules, as explained on the main page of the OpenFOAM documentation, is that the explicit use of the singularity command is no longer needed. Nevertheless, users can still use the singularity command to access the corresponding images if they prefer. The path and name of the corresponding image is defined by default in the variable SINGULARITY_CONTAINER after loading the module. So, a similar example to the one above would be:

...

Column
width900px


Code Block
languagebash
themeDJango
titleTerminal 3. Explicit use of the singularity command and the SINGULARITY_CONTAINER variable to query for an environment variable from the container
$ module load singularity/<VERSION>-mpi
$ export SINGULARITY_CONTAINER=/PathToTheSingularityImage/myopenfoam-8.sif
$ singularity exec $SINGULARITY_CONTAINER printenv | grep FOAM_ETC
FOAM_ETC=/opt/OpenFOAM/OpenFOAM-8/etc
$


(Note that in this example, an image owned by the user is being used.)

Or, if the user wishes to open an interactive session within the container:

Column
width900px


Code Block
languagebash
themeDJango
titleTerminal 4. Explicit use of the singularity command and the SINGULARITY_CONTAINER variable to open an interactive session
$ module load openfoam-container/v2012
$ singularity shell $SINGULARITY_CONTAINER
Singularity> echo $FOAM_ETC
/opt/OpenFOAM/OpenFOAM-v2012/etc
Singularity>


(Note that in this example, the image provided by the containerised module is being used.)

And, of course, the singularity command can be used within Slurm batch scripts. So, the execution command in the example script for solver execution on the OpenFOAM: Example Slurm Batch Scripts page can be modified to explicitly use the singularity command:

Column
width900px


Code Block
languagebash
themeEmacs
titleListing 1. Example Slurm batch script to run a solver with 1152 mpi tasks
#!/bin/bash --login
 
#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1152
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=[neededTime]

#--- Load modules and define images:
# -Using the containerised module:
module load openfoam-org-container/7
# -Using user's own image:
#module load singularity/<VERSION>-mpi  #Adapt <VERSION> to the currently provided version of singularity
#SINGULARITY_CONTAINER="/PathToTheSingularityImage/myopenfoam-8.sif"  #Adapt path and name to the correct ones

#--- Specific settings for the cluster you are on
#(Check the specific guide of the cluster for additional settings)

# ---
# Set MPI related environment variables. Not all need to be set
# main variables for multi-node jobs (uncomment for multinode jobs)
export MPICH_OFI_STARTUP_CONNECT=1
export MPICH_OFI_VERBOSE=1
#Ask MPI to provide useful runtime information (uncomment if debugging)
#export MPICH_ENV_DISPLAY=1
#export MPICH_MEMORY_REPORT=1


#--- Automating the list of IORANKS for collated fileHandler
echo "Setting the grouping ratio for collated fileHandling"
nProcs=$SLURM_NTASKS #Number of total processors in decomposition for this case
mGroup=32            #Size of the groups for collated fileHandling (32 is the initial recommendation for Setonix)
of_ioRanks="0"
iC=$mGroup
while [ $iC -le $nProcs ]; do
   of_ioRanks="$of_ioRanks $iC"
   ((iC += $mGroup))
done
export FOAM_IORANKS="("${of_ioRanks}")"
echo "FOAM_IORANKS=$FOAM_IORANKS"

#-- Execute the solver:
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c $SLURM_CPUS_PER_TASK \
     singularity exec $SINGULARITY_CONTAINER pimpleFoam -parallel


(To use their own image, users should comment out the line that loads the containerised module, and uncomment the lines that load the singularity module and define the SINGULARITY_CONTAINER variable with the real path to their own image. Obviously, the <VERSION> and the real path should be adapted.)
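The loop that builds FOAM_IORANKS in the script above can be checked in isolation. The following sketch reproduces it with a smaller decomposition (128 ranks in groups of 32) so the resulting list is easy to inspect:

```shell
# Reproduce the FOAM_IORANKS construction with a smaller decomposition:
nProcs=128   # total number of MPI ranks in the decomposition
mGroup=32    # size of each group for collated fileHandling
of_ioRanks="0"
iC=$mGroup
while [ $iC -le $nProcs ]; do
   of_ioRanks="$of_ioRanks $iC"
   ((iC += mGroup))
done
FOAM_IORANKS="(${of_ioRanks})"
echo "$FOAM_IORANKS"   # prints "(0 32 64 96 128)"
```

Each listed rank is the first rank of a group of 32, so the collated file handler writes one set of files per group instead of one per rank.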

Wrappers of the shell and exec commands

The installed modules provide two additional wrappers that can be used to avoid the explicit call to the singularity command when the exec or shell sub-commands are needed (as in the last two examples of the section above). These two wrappers are (depending on the flavour of OpenFOAM, -org or not):

...

Column
width900px


Code Block
languagebash
themeDJango
titleTerminal 6. Use of the shell wrapper
$ module load openfoam-container/v2012
$ openfoam-shell
Singularity> echo $FOAM_ETC
/opt/OpenFOAM/OpenFOAM-v2012/etc
Singularity>


Wrappers of the solvers and tools for the installed modules

As explained in the main OpenFOAM documentation page, the tools and solvers within the installed modules are directly accessible without the need to explicitly call the singularity command. So, for example, after loading the module for openfoam-container/v2012, the following three commands are equivalent:

...

So, after loading one of the available containerised modules, the names of the tools/solvers are recognised, but they are in fact wrappers that invoke both the singularity image (with the singularity command) and the real containerised tool that exists within it. The pimpleFoam in the first line is therefore a wrapper for the full command written in the third line of the example above.
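A minimal sketch of how such a wrapper can work (hypothetical; the actual wrappers provided by the modules may differ in detail): a shell function named after the solver forwards all its arguments to the containerised binary via singularity exec:

```shell
# Hypothetical wrapper sketch: a function named after the solver
# that forwards every argument to the containerised binary.
SINGULARITY_CONTAINER=/path/to/openfoam.sif   # set by the module in practice

pimpleFoam() {
    singularity exec "$SINGULARITY_CONTAINER" pimpleFoam "$@"
}
```

With such a function defined, `pimpleFoam -help` behaves exactly like the full `singularity exec $SINGULARITY_CONTAINER pimpleFoam -help` command.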

Working with tutorials

Pawsey containers have been installed preserving the tutorials provided by the OpenFOAM developers. These tutorials are accessible at the path given by the environment variable FOAM_TUTORIALS, but this variable exists only inside the container. Therefore, its evaluation needs to be interpreted by the container and not by the host. For that, the bash -c command is handy. For example, when the channel395 tutorial is the case a user wants to work with, they can find its path inside the container and then make a copy into their working directory on the host:

...

Note
titleAdapt the tutorial to best practices

Before executing a tutorial on Pawsey systems, always adapt the default dictionaries to comply with the OpenFOAM: Best Practices, so you will need to change the writeFormat, purgeWrite and runTimeModifiable variables, among others. Also notice that, by default, all modules provided at Pawsey make use of the collated file handler.

Compiling your own tools

OpenFOAM users often need to compile their own solvers/tools. With the use of containers there are two routes to follow: 1) develop and compile additional solvers/tools outside the existing container, or 2) build a new image with the additional tools compiled inside it.

Both routes have their pros and cons, but we recommend the first route for the development phase of the tools/solvers, in order to avoid rebuilding the image at every step of development. Instead, the additional tools/solvers can be developed on the host and compiled with the OpenFOAM machinery of the container, while keeping the source files and executables on the host file system.

We recommend the second route for additional tools/solvers that are no longer in development and are therefore candidates to live inside an additional container image.

Developing and compiling outside the container

In a typical OpenFOAM installation, the environment variable that defines the path where users' own binaries and libraries are stored is WM_PROJECT_USER_DIR. But in the OpenFOAM containers prepared at Pawsey, that variable has already been defined to a path internal to the container, which cannot be modified, as the container's own directories are non-writable. Nevertheless, users can still compile their own tools or solvers and store them in a directory on the host filesystem. For this to work, we recommend binding the host path where the compiled tools will be saved to the internal path indicated by WM_PROJECT_USER_DIR. In this way, the container will look for the tools in the path indicated by that variable, but in practice it will be accessing the host directory that has been bound to the internal path.
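The mapping can be pictured as follows. This is a sketch only: the internal path shown here is hypothetical (in a real script it is queried from the container with `singularity exec $SINGULARITY_CONTAINER bash -c 'echo $WM_PROJECT_USER_DIR'`), and the host path follows the convention used later in this page:

```shell
# Hypothetical internal path reported by WM_PROJECT_USER_DIR inside the container:
wmpudInside=/root/OpenFOAM/root-8
# Host directory where your compiled solvers/tools actually live:
wmpudOutside=$MYSOFTWARE/OpenFOAM/$USER-8
# Bind argument handed to singularity: host path is mounted over the internal one
bindArg="-B ${wmpudOutside}:${wmpudInside}"
echo "$bindArg"
```

When `singularity exec $bindArg ...` is used, any access the container makes to the internal WM_PROJECT_USER_DIR path transparently reads and writes the host directory instead.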

...

Column
width900px


Code Block
languagebash
themeEmacs
titleListing 2. Example Slurm batch script to run user's own solver with 1152 mpi tasks
#!/bin/bash --login
 
#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1152
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=[neededTime]
 
module load openfoam-org-container/8

#--- Specific settings for the cluster you are on
#(Check the specific guide of the cluster for additional settings)

# ---
# Set MPI related environment variables. Not all need to be set
# main variables for multi-node jobs (uncomment for multinode jobs)
export MPICH_OFI_STARTUP_CONNECT=1
export MPICH_OFI_VERBOSE=1
#Ask MPI to provide useful runtime information (uncomment if debugging)
#export MPICH_ENV_DISPLAY=1
#export MPICH_MEMORY_REPORT=1


#--- Automating the list of IORANKS for collated fileHandler
echo "Setting the grouping ratio for collated fileHandling"
nProcs=$SLURM_NTASKS #Number of total processors in decomposition for this case
mGroup=32            #Size of the groups for collated fileHandling (32 is the initial recommendation for Setonix)
of_ioRanks="0"
iC=$mGroup
while [ $iC -le $nProcs ]; do
   of_ioRanks="$of_ioRanks $iC"
   ((iC += $mGroup))
done
export FOAM_IORANKS="("${of_ioRanks}")"
echo "FOAM_IORANKS=$FOAM_IORANKS"

#-- Defining the binding paths:
wmpudInside=$(singularity exec $SINGULARITY_CONTAINER bash -c 'echo $WM_PROJECT_USER_DIR')
wmpudOutside=$MYSOFTWARE/OpenFOAM/$USER-8

#-- Execute user's own solver:
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c $SLURM_CPUS_PER_TASK \
     singularity exec -B $wmpudOutside:$wmpudInside \
     $SINGULARITY_CONTAINER yourSolverFoam -parallel


Building a new image with compiled additional tools/solvers

Basically, users will need to build a new Docker image using a new Dockerfile with the building recipe. This recipe does not need to start from scratch: it can start from an existing image with OpenFOAM on it, and then it only needs to copy the source files into the new image and compile those additional tools/solvers. Currently we do not offer any OpenFOAM container module with additional tools, so it is up to users to build their own new container images and use them as indicated in the following section. Nevertheless, our Git repository contains examples of Dockerfiles prepared to build OpenFOAM images that start from an existing tested OpenFOAM image and are then equipped with additional tools available from the tool developers' Git repositories: https://github.com/PawseySC/pawsey-containers/tree/master/OpenFOAM/installationsWithAdditionalTools.

Information in the following sections may be useful if you are looking to build a new container with additional tools.
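As a rough illustration of this route, the sketch below shows the general shape of such a recipe. It is a hypothetical example only: the base-image tag, the source directory name (myPimpleFoam) and the bashrc path are assumptions, and the real recipes in the Pawsey repository linked above should be used as the actual starting point.

```dockerfile
# Hypothetical sketch only -- adapt a real recipe from the Pawsey repository.
# Start from an existing, tested OpenFOAM image (tag is an assumption):
FROM quay.io/pawsey/openfoam-org:8

# Copy the source of the additional solver into the image
# (myPimpleFoam is a placeholder name):
COPY ./myPimpleFoam /opt/OpenFOAM/myPimpleFoam

# Compile it with the OpenFOAM build machinery (wmake) inside the image:
RUN /bin/bash -c "source /opt/OpenFOAM/OpenFOAM-8/etc/bashrc \
    && cd /opt/OpenFOAM/myPimpleFoam && wmake"
```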

Use of OpenFOAM versions not provided by Pawsey

Note
titleWe strongly recommend upgrading your workflow to a recent version of OpenFOAM

First of all, it is important to reiterate that one of the most important best practices for OpenFOAM at Pawsey is to minimise the number of result files. For this, we strongly recommend that users upgrade their workflows to the most recent versions of OpenFOAM, which can reduce the number of result files through the use of the collated fileHandler. The versions provided by Pawsey are capable of using the collated fileHandler. Please adhere to the recommended OpenFOAM: Best Practices.

As mentioned in the main documentation for OpenFOAM, several recent versions of OpenFOAM containers are already available on Setonix through modules. One advantage of these modules is that OpenFOAM tools and solvers are accessible through basic commands, without the need to call singularity explicitly on the command line. The call to singularity is performed under the hood by the tool/solver wrappers.

If users need a new version of OpenFOAM that is not yet installed on our systems, they can create their own container for that version, or they can contact support so that we can make it available to users.

If users are still forced to use an old version of OpenFOAM that is not installed on our systems, we recommend using a container equipped with the needed version.

Pawsey has already built several Docker images, with the main intention of serving as examples for users to build their own. Pawsey-provided Docker images are available through our registry in Red Hat Quay.io (https://quay.io/pawsey), in the directories openfoam and openfoam-org. Other directories with the "-legacy" suffix contain images built with operating-system versions that no longer run properly on Setonix. The corresponding building recipes (Dockerfiles) are in the Pawsey GitHub repository (https://github.com/PawseySC/pawsey-containers). As said, users should use these recipes as examples to build their own.

...

  • The image should be equipped with an MPICH-ABI-compatible MPI, and OpenFOAM should be compiled with it.
  • The operating system of the image should be compatible with the Setonix operating system (only Ubuntu 20.04 has proven to be successful).
  • The image needs to be converted from Docker format into Singularity format (this is performed automatically by Singularity when the image is pulled).

...

Warning
titleImages that used to work on Magnus may not work on Setonix after the upgrade to CPE 23.03

Many images provided for use on previous Pawsey supercomputers (such as Magnus) were built FROM the Pawsey-provided base image of MPICH with Ubuntu 16.04 or 18.04. But images based on Ubuntu 16.04 or 18.04 do not run properly on Setonix after the upgrade to CPE 23.03. All our maintained images have therefore been updated and rebuilt FROM base images of MPICH with Ubuntu 20.04. Users that rely on images based on Ubuntu 16.04 or 18.04 will need to pull updated images from the Pawsey registry in quay.io and/or rebuild their own FROM base images of MPICH with Ubuntu 20.04.

For building their own images, we recommend that users use their own institution's Linux installation equipped with Docker. The installation and use of Docker on your own Linux computer, and the syntax for building containers, are out of the scope of this documentation, but instructions can be found elsewhere. As mentioned, we strongly recommend taking one of the recipe examples in the links above and adapting it for your own purposes. Once a user's Docker image has been built, it can be converted into Singularity format and saved on Setonix. In general, the following steps could be followed:

  1. Get access to (or install) Docker within your own institution's Linux environment.
  2. Download the building recipe (Dockerfile) that is closest to the version of OpenFOAM the user wants to build from scratch.
    1. Use Docker to build the image from exactly the same Dockerfile, without any modification, in order to test that the "user's own image" built on the user's own installation is functional. (Go to steps 4-7 for testing.)
  3. Modify the recipe as desired and use Docker to build a new image with the desired version of OpenFOAM.
  4. Test the user's own image with the needed Docker and OpenFOAM commands to check that it is functional at this level.
  5. Convert the recently created Docker image into a Singularity image:
    1. Push the Docker image to the user's own online registry within Docker Hub or Quay.
    2. Pull the Docker image from the registry using Singularity (pulling a Docker image with Singularity performs the conversion automatically). Pulling can be performed on Setonix (as shown in the example below), or on your own Linux installation if you also have Singularity there, which can be very handy.
    3. (See also the information in our general documentation about pulling or building a container image.)
  6. Move the Singularity image to a directory in the user's or project's space in the /software file system on Setonix.
  7. Test the functionality of the image on Setonix using Singularity commands.

...

Column
width900px


Code Block
languagebash
themeDJango
titleTerminal 16. Pulling an external Docker image and creating the local singularity image
$ mkdir -p $MYSOFTWARE/singularity/OpenFOAM
$ cd $MYSOFTWARE/singularity/OpenFOAM
$ salloc -n 1 --mem=29600Mb -p copy
salloc: Granted job allocation 67529

$ module load singularity/<version>
$ srun singularity pull openfoam-org_2.4.x_pawsey.sif docker://quay.io/pawsey/openfoam-org:2.4.x
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob 11323ed2c653 done
Copying blob 0a71b4dba9db done  
Copying blob 874ae6ae3b00 done  
Copying blob 5d27d6ffcb7e done
...
2022/07/08 10:24:32  info unpack layer: sha256:cf6efca24562cd25028fcc99a4442f6062eeb62a60c7709f1454fc8bd8b0d5ea
2022/07/08 10:24:32  info unpack layer: sha256:072f30ec926604acc07e65868c407e1f0d3a3d98dd46340b04a572e55edcfaa6
2022/07/08 10:24:32  info unpack layer: sha256:9357e5bafecbf1d6af595baf24c54e2a10e182a738133a55c889293dcc7b4d10
2022/07/08 10:24:32  info unpack layer: sha256:7f6dad97028f1961f9b463b213119fea99228cda7f684aeeda213d5848ed125b
...
2022/07/08 10:25:12  info unpack layer: sha256:537ce1385428d12c4ba18c09984440f280cce516bcb65ba3531b4a2a95533cfd
INFO:    Creating SIF file...

$ ls
openfoam-org_2.4.x_pawsey.sif


Note that <version> is just a placeholder; users should change it to one of the versions available on the system. Also, this is just an example: in practice, users would be pulling images from their own online registry and not from Pawsey's registry. Also note the use of --mem=29600Mb (the memory equivalent to 8 cores in the copy partition), which is enough for this operation. (In case of an out-of-memory error, the requested memory should be increased.) The resulting Singularity image is a simple file with the .sif extension, so storing and managing the Singularity image is performed by treating it as a file. Of course, if you already have a functional Singularity image on another system, you can simply transfer that file to a useful path within your /software directory.

As these images are not provided as modules, there are no wrappers for the tools/solvers to be called without the explicit use of singularity on the command line. Basically, they should be used like any other Singularity container to access its applications.

Pre-processing/Post-processing with non-installed versions

Many pre-processing and post-processing tools of OpenFOAM are not parallel, so they need to be run as single-task jobs. Nevertheless, it is important to request the right amount of memory needed to manage the size of the simulation:

...

Here, the tools blockMesh, setFields and decomposePar are just classical examples of the many tools available in OpenFOAM.
As indicated in the script, users need to request the right amount of memory, which may be larger than the default for single-task jobs (1790Mb). For example, if the mesh is large and needs 64 gigabytes of memory, then use:

#SBATCH --mem=64G


Solver execution with non-installed versions

Solver execution uses a "classical" MPI slurm job script:

Column
width900px


Code Block
languagebash
themeEmacs
titleListing 4. Example Slurm batch script to run a solver with 1152 mpi tasks
#!/bin/bash --login
 
#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1152
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=[neededTime]

#-- Loading modules
module load singularity/<version>

#-- Defining the singularity image to use
export SINGULARITY_CONTAINER=path/To/The/Image/imageName.sif

#--- Specific settings for the cluster you are on
#(Check the specific guide of the cluster for additional settings)

# ---
# Set MPI related environment variables. Not all need to be set
# main variables for multi-node jobs (uncomment for multinode jobs)
export MPICH_OFI_STARTUP_CONNECT=1
export MPICH_OFI_VERBOSE=1
#Ask MPI to provide useful runtime information (uncomment if debugging)
#export MPICH_ENV_DISPLAY=1
#export MPICH_MEMORY_REPORT=1

#-- Execute the solver:
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c $SLURM_CPUS_PER_TASK \
     singularity exec $SINGULARITY_CONTAINER pimpleFoam -parallel


Here, the solver pimpleFoam is just a classical example of the many solvers available in OpenFOAM.  As indicated in the script, the request is for exclusive access to the node and, therefore, no specific memory request is indicated as all the memory in each node will be available to the job.

Related pages