The following topics cover the most common situations in which users may need to explicitly invoke singularity (the container engine installed at Pawsey) when using OpenFOAM. If you are not familiar with containers at all, we recommend having a look at our specific documentation on that topic: Containers.

Explicit use of the singularity command and the .sif image

Previously, the only way of using a containerised OpenFOAM tool was to invoke it through the singularity command together with the name of the image. This procedure can still be used and is recommended for users who bring their own containerised OpenFOAM installation. So, for example, if a user has a functional image named myopenfoam-8.sif, they can load a singularity module with MPI capabilities and then use singularity commands to access and use the containerised solvers. For example, a quick test of the classical pimpleFoam solver:



Code Block
languagebash
themeDJango
titleTerminal 1. Explicit use of the singularity command with user's own container
$ module load singularity/34.111.40-mpi
$ export SINGULARITY_CONTAINER=/PathToTheSingularityImage/myopenfoam-8.sif
$ singularity exec $SINGULARITY_CONTAINER pimpleFoam -help

Usage: pimpleFoam [OPTIONS]
options:
  -case <dir>       specify alternate case directory, default is the cwd
  -fileHandler <handler>
                    override the fileHandler
  -hostRoots <((host1 dir1) .. (hostN dirN))>
                    slave root directories (per host) for distributed running
  -libs <(lib1 .. libN)>
                    pre-load libraries
...
  -srcDoc           display source code in browser
  -doc              display application documentation in browser
  -help             print the usage

Using: OpenFOAM-8 (see https://openfoam.org)
Build: 8-30b264cc33cd


...



Code Block
languagebash
themeEmacs
titleListing 1. Example Slurm batch script to run a solver with 1152 MPI tasks
#!/bin/bash --login
 
#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1152
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=[neededTime]
 
module load openfoam-org-container/7

#--- Specific settings for the cluster you are on
#(Check the specific guide of the cluster for additional settings)

# ---
# Set MPI related environment variables. Not all need to be set
# main variables for multi-node jobs (uncomment for multinode jobs)
export MPICH_OFI_STARTUP_CONNECT=1
export MPICH_OFI_VERBOSE=1
#Ask MPI to provide useful runtime information (uncomment if debugging)
#export MPICH_ENV_DISPLAY=1
#export MPICH_MEMORY_REPORT=1


#--- Automating the list of IORANKS for collated fileHandler
echo "Setting the grouping ratio for collated fileHandling"
nProcs=$SLURM_NTASKS #Number of total processors in decomposition for this case
mGroup=32            #Size of the groups for collated fileHandling (32 is the initial recommendation for Setonix)
of_ioRanks="0"
iC=$mGroup
while [ $iC -le $nProcs ]; do
   of_ioRanks="$of_ioRanks $iC"
   ((iC += $mGroup))
done
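#(With 1152 tasks and groups of 32, the list becomes "0 32 64 ... 1152";
# the listed ranks perform the I/O for their groups in collated writing)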
export FOAM_IORANKS="(${of_ioRanks})"
echo "FOAM_IORANKS=$FOAM_IORANKS"

#-- Execute the solver:
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c 1 \
     singularity exec $SINGULARITY_CONTAINER pimpleFoam -parallel


Wrappers of the shell and exec commands

The installed modules provide two additional wrappers that avoid the explicit call of the singularity command when the exec or shell sub-commands are needed (as in the last two examples of the section above). These two wrappers are (depending on whether the flavour of openfoam is -org or not):

...



Code Block
languagebash
themeDJango
titleTerminal 6. Use of the shell wrapper
$ module load openfoam-container/v2012
$ openfoam-shell
Singularity> echo $FOAM_ETC
/opt/OpenFOAM/OpenFOAM-v2012/etc
Singularity>
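The exec wrapper works in the same way for one-off commands. A minimal sketch follows; we assume here that the exec wrapper follows the same naming pattern as the shell wrapper above (openfoam-exec, or openfoam-org-exec for the -org flavour), so check the loaded module for the exact name:

Code Block
languagebash
themeDJango
titleUse of the exec wrapper (sketch)
$ module load openfoam-container/v2012
$ openfoam-exec pimpleFoam -help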


Wrappers of the solvers and tools for the installed modules

As explained in the main OpenFOAM documentation page, the tools and solvers within the installed modules are directly accessible without the need to explicitly call the singularity command. So, for example, after loading the module for openfoam-container/v2012, the following three commands are equivalent:

...

So, after loading one of the available containerised modules, the names of the tools/solvers are recognised, but they are in fact wrappers that invoke both the singularity image (via the singularity command) and the real containerised tool within it. For example, the pimpleFoam in the first line is a wrapper for the full command written in the third line of the example above.
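As an illustrative sketch of those equivalent forms (the wrapper name in the second line relies on the naming assumption made in the previous section):

Code Block
languagebash
themeDJango
titleEquivalent invocations of a containerised solver (sketch)
$ pimpleFoam -help
$ openfoam-exec pimpleFoam -help
$ singularity exec $SINGULARITY_CONTAINER pimpleFoam -help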

Working with tutorials

Pawsey containers have been installed preserving the tutorials provided by the OpenFOAM developers. These tutorials are accessible at the path given by the environment variable FOAM_TUTORIALS, but this variable exists only inside the container. Therefore its evaluation needs to be interpreted by the container and not by the host; for that, the bash -c command is handy. For example, if a user wants to work with the channel395 tutorial, they can find its path inside the container and then make a copy into their working directory on the host:

...
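For instance, a minimal sketch of this find-and-copy approach (the tutorial path printed here is indicative and may differ between versions):

Code Block
languagebash
themeDJango
titleLocating and copying the channel395 tutorial (sketch)
$ module load openfoam-container/v2012
$ singularity exec $SINGULARITY_CONTAINER bash -c 'find $FOAM_TUTORIALS -type d -name channel395'
/opt/OpenFOAM/OpenFOAM-v2012/tutorials/incompressible/pimpleFoam/LES/channel395
$ singularity exec $SINGULARITY_CONTAINER bash -c 'cp -r $FOAM_TUTORIALS/incompressible/pimpleFoam/LES/channel395 .'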

Note
titleAdapt the tutorial to best practices

Before executing a tutorial on Pawsey systems, always adapt the default dictionaries to comply with the OpenFOAM: Best Practices, which means changing the writeFormat, purgeWrite and runTimeModifiable variables, among others. Also notice that, by default, all modules provided at Pawsey make use of the collated file handler.
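For example, a sketch of adjusting those entries from the command line with foamDictionary (shipped with recent OpenFOAM versions; the values shown are only indicative):

Code Block
languagebash
themeDJango
titleAdjusting controlDict entries towards best practices (sketch)
$ foamDictionary -entry writeFormat -set binary system/controlDict
$ foamDictionary -entry purgeWrite -set 10 system/controlDict
$ foamDictionary -entry runTimeModifiable -set false system/controlDict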

Compiling your own tools

OpenFOAM users often need to compile their own solvers/tools. With containers there are two routes to follow: 1) develop and compile the additional solvers/tools outside the existing container, or 2) build a new image with the additional tools compiled inside it.

Both routes have their pros and cons, but we recommend the first route during the development phase of the tools/solvers, in order to avoid rebuilding an image at every step of development. Instead, the additional tools/solvers can be developed on the host and compiled with the OpenFOAM machinery of the container, while keeping the source files and executables on the host file system.

We recommend the second route for additional tools/solvers that are no longer in development and are therefore candidates to live inside an additional container image.

Developing and compiling outside the container

In a typical OpenFOAM installation, the environment variable that defines the path where users' own binaries and libraries are stored is WM_PROJECT_USER_DIR. In the OpenFOAM containers prepared at Pawsey, however, that variable has already been defined to a path internal to the container, which cannot be modified because the container's own directories are non-writable. Nevertheless, users can still compile their own tools or solvers and store them in a directory on the host filesystem. For this to work, we recommend binding the host path where the compiled tools will be saved to the internal path indicated by WM_PROJECT_USER_DIR. In this way, the container will look for the tools in the path indicated by that variable, but in practice it will be accessing the host directory bound to the internal path.

...
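Before submitting a job like the one in Listing 2 below, the user's solver needs to be compiled with the container's own OpenFOAM machinery. A minimal sketch of that bind-and-compile step (yourSolverFoam is a hypothetical solver whose source is assumed to live under the bound host directory):

Code Block
languagebash
themeDJango
titleCompiling a user's own solver against the container (sketch)
$ module load openfoam-org-container/8
$ wmpudInside=$(singularity exec $SINGULARITY_CONTAINER bash -c 'echo $WM_PROJECT_USER_DIR')
$ wmpudOutside=$MYSOFTWARE/OpenFOAM/$USER-8
$ mkdir -p $wmpudOutside
$ singularity exec -B $wmpudOutside:$wmpudInside $SINGULARITY_CONTAINER \
      bash -c 'cd $WM_PROJECT_USER_DIR/applications/solvers/yourSolverFoam && wmake'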



Code Block
languagebash
themeEmacs
titleListing 2. Example Slurm batch script to run a user's own solver with 1152 MPI tasks
#!/bin/bash --login
 
#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1152
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=[neededTime]
 
module load openfoam-org-container/8

#--- Specific settings for the cluster you are on
#(Check the specific guide of the cluster for additional settings)

# ---
# Set MPI related environment variables. Not all need to be set
# main variables for multi-node jobs (uncomment for multinode jobs)
export MPICH_OFI_STARTUP_CONNECT=1
export MPICH_OFI_VERBOSE=1
#Ask MPI to provide useful runtime information (uncomment if debugging)
#export MPICH_ENV_DISPLAY=1
#export MPICH_MEMORY_REPORT=1


#--- Automating the list of IORANKS for collated fileHandler
echo "Setting the grouping ratio for collated fileHandling"
nProcs=$SLURM_NTASKS #Number of total processors in decomposition for this case
mGroup=32            #Size of the groups for collated fileHandling (32 is the initial recommendation for Setonix)
of_ioRanks="0"
iC=$mGroup
while [ $iC -le $nProcs ]; do
   of_ioRanks="$of_ioRanks $iC"
   ((iC += $mGroup))
done
export FOAM_IORANKS="(${of_ioRanks})"
echo "FOAM_IORANKS=$FOAM_IORANKS"

#-- Defining the binding paths:
wmpudInside=$(singularity exec $SINGULARITY_CONTAINER bash -c 'echo $WM_PROJECT_USER_DIR')
wmpudOutside=$MYSOFTWARE/OpenFOAM/$USER-8

#-- Execute user's own solver:
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c 1 \
     singularity exec -B $wmpudOutside:$wmpudInside \
     $SINGULARITY_CONTAINER yourSolverFoam -parallel


Building a new image with compiled additional tools/solvers

Basically, users will need to build a new Docker image using a new Dockerfile with the building recipe. This recipe does not need to start from scratch: it can start from an existing image with OpenFOAM on it, and then only needs to copy the source files into the new image and compile the additional tools/solvers. Currently we do not offer any module of OpenFOAM containers with additional tools, so it is up to users to build their own new container images and use them as indicated in the following section. Nevertheless, our Git repository has examples of some Dockerfiles prepared to build OpenFOAM images that start from an existing tested OpenFOAM image and are then equipped with additional tools available through the tool developers' Git repositories: https://github.com/PawseySC/pawsey-containers/tree/master/OpenFOAM/installationsWithAdditionalTools.
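As a minimal sketch of such a recipe (the base image name and internal paths are hypothetical; refer to the repository above for complete, tested examples):

Code Block
themeEmacs
titleDockerfile starting from an existing OpenFOAM image (sketch)
# Hypothetical base image; replace with your own tested OpenFOAM image
FROM myrepository/myopenfoam:8
# Copy the source of the additional solver into the image
COPY yourSolverFoam/ /opt/OpenFOAM/yourSolverFoam/
# Compile it with the OpenFOAM environment of the base image
RUN /bin/bash -c 'source /opt/OpenFOAM/OpenFOAM-8/etc/bashrc && \
    cd /opt/OpenFOAM/yourSolverFoam && wmake'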

Information in the following sections may be useful if you are looking to build a new container with additional tools.

Use of old versions of OpenFOAM not provided by Pawsey

Note
titleWe strongly recommend upgrading your workflow to a recent version of OpenFOAM

First of all, it is important to reiterate that one of the most important best practices for OpenFOAM at Pawsey is to minimise the number of result files. For this, we strongly recommend that users upgrade their workflows to the most recent versions of OpenFOAM, which are capable of reducing the number of result files through the use of the collated fileHandler. All the versions provided by Pawsey can use the collated fileHandler. Please adhere to the recommended OpenFOAM: Best Practices.

...

As these images are not provided as modules, there are no wrappers for the tools/solvers that could be called without explicit use of singularity on the command line. Basically, such images should be used like any other singularity container to access the applications within.
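For example, a sketch of the explicit usage (the image name and path are placeholders for a user's own old-version image):

Code Block
languagebash
themeDJango
titleExplicit use of an old-version image (sketch)
$ module load singularity/34.111.40-mpi
$ export SINGULARITY_CONTAINER=/PathToTheSingularityImage/myopenfoam-5.x.sif
$ singularity exec $SINGULARITY_CONTAINER blockMesh -help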

Pre-processing/Post-processing with non-installed versions

Many pre-processing and post-processing tools of OpenFOAM are not parallel, so they need to be run as single-task jobs. Nevertheless, it is important to request the right amount of memory for the size of the simulation:

...

Here, the tools blockMesh, setFields and decomposePar are just classical examples of the many tools available in OpenFOAM.
As indicated in the script, users need to request the right amount of memory, which may be larger than the default for single-task jobs (1790 MB). For example, if the mesh is large and needs 64 GB of memory, then use:

#SBATCH --mem=64G
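Putting this together, a minimal sketch of such a single-task script (the tools and the image path are placeholders for the user's own):

Code Block
languagebash
themeEmacs
titleExample Slurm batch script for serial pre-processing with a user's image (sketch)
#!/bin/bash --login

#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1
#SBATCH --mem=64G        #Adjust to the needs of your case (default is 1790M)
#SBATCH --time=[neededTime]

module load singularity/34.111.40-mpi
export SINGULARITY_CONTAINER=/PathToTheSingularityImage/myopenfoam-5.x.sif

#--- Execute the serial tools one after the other:
srun -N 1 -n 1 singularity exec $SINGULARITY_CONTAINER blockMesh
srun -N 1 -n 1 singularity exec $SINGULARITY_CONTAINER setFields
srun -N 1 -n 1 singularity exec $SINGULARITY_CONTAINER decomposePar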


Solver execution with non-installed versions

Solver execution uses a classical MPI Slurm batch script:

...

Here, the solver pimpleFoam is just a classical example of the many solvers available in OpenFOAM. As indicated in the script, the request is for exclusive access to the nodes; therefore, no specific memory request is needed, as all the memory on each node will be available to the job.
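A sketch of that script for a user's own image follows; it differs from Listing 1 only in that SINGULARITY_CONTAINER is set by hand instead of by a module (the MPI settings and the FOAM_IORANKS block of Listing 1 apply here as well):

Code Block
languagebash
themeEmacs
titleExample Slurm batch script for solver execution with a user's image (sketch)
#!/bin/bash --login

#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1152
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=[neededTime]

module load singularity/34.111.40-mpi
export SINGULARITY_CONTAINER=/PathToTheSingularityImage/myopenfoam-5.x.sif

#(Set the MPI environment variables and FOAM_IORANKS as in Listing 1)

#-- Execute the solver:
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c 1 \
     singularity exec $SINGULARITY_CONTAINER pimpleFoam -parallel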

Related pages

...