The following topics consider the most common situations in which users may need to explicitly invoke singularity (the container engine installed at Pawsey) when using OpenFOAM. If you are not familiar with containers at all, we recommend you have a look at our specific documentation: Containers.
Explicit use of the singularity command and the .sif image
Previously, the only way of using an OpenFOAM containerised tool was to invoke it through the singularity command together with the name of the image. This procedure can still be used and is the recommended one for users who bring their own containerised OpenFOAM installation. So, for example, if a user has a functional image named myopenfoam-8.sif, they can load a singularity module with MPI capabilities and then use singularity commands to access the containerised solvers. For example, a quick test of the classical pimpleFoam solver:
$ module load singularity/4.1.0-mpi
$ export SINGULARITY_CONTAINER=/PathToTheSingularityImage/myopenfoam-8.sif
$ singularity exec $SINGULARITY_CONTAINER pimpleFoam -help

Usage: pimpleFoam [OPTIONS]
options:
  -case <dir>               specify alternate case directory, default is the cwd
  -fileHandler <handler>    override the fileHandler
  -hostRoots <(((host1 dir1) .. (hostN dirN))>
                            slave root directories (per host) for distributed running
  -libs <(lib1 .. libN)>    pre-load libraries
  ...
  -srcDoc                   display source code in browser
  -doc                      display application documentation in browser
  -help                     print the usage

Using: OpenFOAM-8 (see https://openfoam.org)
Build: 8-30b264cc33cd
(Note that PathToTheSingularityImage is only a placeholder for the real path to the user's image.)
As mentioned before, Pawsey provides OpenFOAM modules that make use of containerised versions of OpenFOAM on Setonix. One advantage of these modules, as explained in the main page of the OpenFOAM documentation, is that the explicit use of the singularity command is no longer needed. Nevertheless, users can still use the singularity command to access the corresponding images if they prefer. The path and name of the corresponding image is defined by default in the variable SINGULARITY_CONTAINER after loading the module. So, a similar example to the one above would be:
$ module load openfoam-org-container/7
$ echo $SINGULARITY_CONTAINER
/software/setonix/2022.05/containers/sif/quay.io/pawsey/openfoam-org/7/quay.io-pawsey-openfoam-org-7-sha256:3d427b3dec890193bb671185acefdc91fb126363b5f368d147603002b4708afe.sif
$ singularity exec $SINGULARITY_CONTAINER pimpleFoam -help

Usage: pimpleFoam [OPTIONS]
options:
  -case <dir>               specify alternate case directory, default is the cwd
  -fileHandler <handler>    override the fileHandler
  -hostRoots <(((host1 dir1) .. (hostN dirN))>
                            slave root directories (per host) for distributed running
  -libs <(lib1 .. libN)>    pre-load libraries
  ...
  -srcDoc                   display source code in browser
  -doc                      display application documentation in browser
  -help                     print the usage

Using: OpenFOAM-7 (see https://openfoam.org)
Build: 7-63349425784a
Again, it is worth remembering that the singularity command syntax of the example above is not necessary when using the containerised-OpenFOAM modules offered at Pawsey. For these modules, the names of the OpenFOAM commands/solvers are indeed wrappers that call the containerised tools as if they were bare-metal installations. Therefore, the simple use of pimpleFoam -help would have been enough in the example above instead of the full singularity syntax, as explained in the main page of the OpenFOAM documentation. Also note that, when using the containerised modules, there is no need to explicitly load singularity, as it is loaded by default together with the OpenFOAM module.
Nevertheless, there are some cases in which users may prefer to use the singularity command explicitly. For example, to query the content of an environment variable defined within the container (like FOAM_ETC), they can use:
$ module load openfoam-org-container/8
$ singularity exec $SINGULARITY_CONTAINER printenv | grep FOAM_ETC
FOAM_ETC=/opt/OpenFOAM/OpenFOAM-8/etc
$
Or, if the user wishes to open an interactive session within the container:
$ module load openfoam-container/v2012
$ singularity shell $SINGULARITY_CONTAINER
Singularity> echo $FOAM_ETC
/opt/OpenFOAM/OpenFOAM-v2012/etc
Singularity>
And, of course, the singularity command can be used within Slurm batch scripts. For example, the execution command in the example script for solver execution in the OpenFOAM: Example Slurm Batch Scripts page can be modified to use the singularity command explicitly:
#!/bin/bash --login
#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1152
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=[neededTime]

module load openfoam-org-container/7

#--- Specific settings for the cluster you are on
#(Check the specific guide of the cluster for additional settings)
# ---
# Set MPI related environment variables. Not all need to be set
# main variables for multi-node jobs (uncomment for multinode jobs)
export MPICH_OFI_STARTUP_CONNECT=1
export MPICH_OFI_VERBOSE=1
#Ask MPI to provide useful runtime information (uncomment if debugging)
#export MPICH_ENV_DISPLAY=1
#export MPICH_MEMORY_REPORT=1

#--- Automating the list of IORANKS for collated fileHandler
echo "Setting the grouping ratio for collated fileHandling"
nProcs=$SLURM_NTASKS #Number of total processors in decomposition for this case
mGroup=32            #Size of the groups for collated fileHandling (32 is the initial recommendation for Setonix)
of_ioRanks="0"
iC=$mGroup
while [ $iC -le $nProcs ]; do
   of_ioRanks="$of_ioRanks $iC"
   ((iC += $mGroup))
done
export FOAM_IORANKS="("${of_ioRanks}")"
echo "FOAM_IORANKS=$FOAM_IORANKS"

#-- Execute the solver:
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c 1 \
     singularity exec $SINGULARITY_CONTAINER pimpleFoam -parallel
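The while loop in the script above builds the FOAM_IORANKS list that tells the collated fileHandler which ranks act as I/O masters, one per group of mGroup processes. The same logic can be run standalone; here it is with a smaller, hypothetical decomposition of 128 ranks in groups of 32:

```shell
# Standalone illustration of the FOAM_IORANKS construction from the script
# above, using a smaller decomposition (128 ranks, groups of 32).
nProcs=128          # total number of processors in the decomposition
mGroup=32           # size of each collated fileHandling group
of_ioRanks="0"      # rank 0 always starts the list
iC=$mGroup
while [ $iC -le $nProcs ]; do
    of_ioRanks="$of_ioRanks $iC"
    ((iC += mGroup))
done
export FOAM_IORANKS="(${of_ioRanks})"
echo "$FOAM_IORANKS"
```

This prints (0 32 64 96 128): each listed rank starts a new group of 32 processes for collated I/O.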
Wrappers of the shell and exec commands
The installed modules provide two additional wrappers that avoid the explicit call of the singularity command when using the exec or shell sub-commands (as in the last two examples of the section above). These two wrappers are (depending on the flavour of OpenFOAM, -org or not):
openfoam-exec or openfoam-org-exec
and
openfoam-shell or openfoam-org-shell
So, the last two examples of the section above can be achieved with these wrappers as follows:
$ module load openfoam-org-container/8
$ openfoam-org-exec printenv | grep FOAM_ETC
FOAM_ETC=/opt/OpenFOAM/OpenFOAM-8/etc
$
and:
$ module load openfoam-container/v2012
$ openfoam-shell
Singularity> echo $FOAM_ETC
/opt/OpenFOAM/OpenFOAM-v2012/etc
Singularity>
Wrappers of the solvers and tools for the installed modules
As explained in the main OpenFOAM documentation page, the tools and solvers within the installed modules are directly accessible without the need to explicitly call the singularity command. So, for example, after loading the module openfoam-container/v2012, the following three commands are equivalent:
pimpleFoam -help
or
openfoam-exec pimpleFoam -help
or
singularity exec $SINGULARITY_CONTAINER pimpleFoam -help
(If the flavour of the loaded module was of the -org type, then the second command would be openfoam-org-exec pimpleFoam -help.)
Indeed, after loading one of the available containerised modules, the names of the tools/solvers are recognised, but they are in fact wrappers that invoke both the singularity image (with the singularity command) and the real containerised tool that exists within it. So the pimpleFoam in the first command is a wrapper for the full command written in the third command of the example above.
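As an illustration only (the actual wrapper scripts shipped with the Pawsey modules may differ in detail), the core idea of such a wrapper can be sketched as a shell function that forwards a tool name and its arguments into the container:

```shell
# Hypothetical sketch of the wrapper mechanism: forward a tool name and its
# arguments to the same tool inside the container image. The real Pawsey
# wrappers are generated scripts, but the principle is the same.
openfoam_wrap() {
    local tool="$1"
    shift
    singularity exec "$SINGULARITY_CONTAINER" "$tool" "$@"
}

# A wrapper named pimpleFoam would then effectively run:
#   openfoam_wrap pimpleFoam "$@"
```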
Working with tutorials
Pawsey containers have been built preserving the tutorials provided by the OpenFOAM developers. These tutorials are accessible at the path given by the environment variable FOAM_TUTORIALS, but this variable exists only inside the container. Therefore, its evaluation needs to be interpreted by the container and not by the host; for that, the bash -c command is handy. For example, when the channel395 tutorial is the case a user wants to work with, they can find its path inside the container and then copy it into their working directory on the host:
$ module load openfoam-org-container/8
$ openfoam-org-exec bash -c 'find $FOAM_TUTORIALS -iname "*channel*"'
/opt/OpenFOAM/OpenFOAM-8/tutorials/compressible/rhoPimpleFoam/laminar/blockedChannel
/opt/OpenFOAM/OpenFOAM-8/tutorials/incompressible/pimpleFoam/LES/channel395
/opt/OpenFOAM/OpenFOAM-8/tutorials/incompressible/pimpleFoam/LES/channel395/system/postChannelDict
/opt/OpenFOAM/OpenFOAM-8/tutorials/incompressible/pimpleFoam/laminar/blockedChannel
/opt/OpenFOAM/OpenFOAM-8/tutorials/lagrangian/MPPICFoam/injectionChannel
/opt/OpenFOAM/OpenFOAM-8/tutorials/lagrangian/reactingParcelFoam/verticalChannel
/opt/OpenFOAM/OpenFOAM-8/tutorials/lagrangian/reactingParcelFoam/verticalChannelLTS
/opt/OpenFOAM/OpenFOAM-8/tutorials/lagrangian/simpleReactingParcelFoam/verticalChannel
/opt/OpenFOAM/OpenFOAM-8/tutorials/multiphase/interFoam/RAS/waterChannel
$ openfoam-org-exec cp -r /opt/OpenFOAM/OpenFOAM-8/tutorials/incompressible/pimpleFoam/LES/channel395 .
$ ls
channel395
Or users can start an interactive session to search and copy the required tutorial:
$ module load openfoam-org-container/8
$ openfoam-org-shell
Singularity> HOSTDIR=$PWD
Singularity> cd $FOAM_TUTORIALS
Singularity> cd incompressible/pimpleFoam/LES/
Singularity> ls
channel395
Singularity> cp -r channel395/ $HOSTDIR
Singularity> ls $HOSTDIR
channel395
Singularity> exit
$ ls
channel395
Adapt the tutorial to best practices
Before executing a tutorial on Pawsey systems, always adapt the default dictionaries to comply with the OpenFOAM: Best Practices, so you will need to change the writeFormat, purgeWrite and runTimeModifiable variables, among others. Also note that, by default, all modules provided at Pawsey make use of the collated file handler.
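For reference, the corresponding entries in the tutorial's system/controlDict might end up looking like the following (the values shown are illustrative only; choose settings appropriate for your own case and check the Best Practices page):

```
writeFormat       binary;    // binary files are smaller and faster to read/write than ascii
purgeWrite        10;        // keep only the last 10 time directories
runTimeModifiable false;     // avoid re-reading dictionaries every time step
```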
Compiling your own tools
OpenFOAM users often need to compile their own solvers/tools. With the use of containers, there are two routes to follow: 1) develop and compile additional solvers/tools outside the existing container, or 2) build a new image with the additional tools compiled inside of it.
Both routes have their pros and cons, but we recommend the first route for the development phase of the tools/solvers, in order to avoid rebuilding an image for every step of the development. Instead, the additional tools/solvers can be developed on the host and compiled with the OpenFOAM machinery of the container, while keeping the source files and executables in the host file system.
We recommend the second route for additional tools/solvers that are no longer in development and are therefore candidates to exist inside an additional container image.
Developing and compiling outside the container
In a typical OpenFOAM installation, the environment variable that defines the path where a user's own binaries and libraries are stored is WM_PROJECT_USER_DIR. But in the OpenFOAM containers prepared at Pawsey, that variable has already been defined to a path internal to the container, which cannot be modified, as a container's own directories are non-writable. Nevertheless, users can still compile their own tools or solvers and store them in a directory in the host filesystem. For this to work, we recommend binding the host path where the compiled tools will be saved to the internal path indicated by WM_PROJECT_USER_DIR. In this way, the container will look for the tools in the path indicated by that variable but, in practice, it will be accessing the host directory that has been bound to the internal path.
1-. The first step of this procedure is to find the value of WM_PROJECT_USER_DIR inside the container. For that, we can do:
$ module load openfoam-org-container/8
$ singularity exec $SINGULARITY_CONTAINER bash -c 'echo $WM_PROJECT_USER_DIR'
/home/ofuser/OpenFOAM/ofuser-8
(Here, a specific flavour/version of the OpenFOAM module is used as an example, but the procedure applies to any other container. Note that for our containerised modules the singularity image is accessible through the variable SINGULARITY_CONTAINER, but for other images you may need to refer to them explicitly.)
2-. Save the path given by the internal WM_PROJECT_USER_DIR variable into an auxiliary variable, which will be used later in the binding step of the procedure:
$ wmpudInside=$(singularity exec $SINGULARITY_CONTAINER bash -c 'echo $WM_PROJECT_USER_DIR')
$ echo $wmpudInside
/home/ofuser/OpenFOAM/ofuser-8
3-. Have a directory in the host where you are going to save/develop your own tools/solvers, and put your source files into that directory:
$ mkdir -p $MYSOFTWARE/OpenFOAM/$USER-8/src/applications/solvers
$ cd $MYSOFTWARE/OpenFOAM/${USER}-8/src/applications/solvers
$ git clone https://github.com/GitUser/GitRepo.git
$ cd GitRepo
Explaining:
- The first line creates a directory in the host making use of the MYSOFTWARE and USER environment variables.
- Then we cd into that directory.
- The next command clones a Git repository where the source files of the user's own tools/solvers reside, to have them accessible in the host. (Note that the names used here are placeholders for the explanation, so users will need to use the correct <GitUser> and <GitRepo> for their own purposes.)
- And finally we cd into the repository directory.
We indeed recommend keeping the development of OpenFOAM tools/solvers in a Git repository (and performing the cloning as explained here), but users can always copy their source files into the host directory by any other means.
4-. Use another auxiliary variable to store the path in the host that will play the role of WM_PROJECT_USER_DIR. Note that the path in this step can be completely unrelated to the one in the previous step, although here we keep them closely related and use the "main" part of the path above as the place where OpenFOAM will store the compiled solvers/tools.
$ wmpudOutside=$MYSOFTWARE/OpenFOAM/$USER-8
$ echo $wmpudOutside
/software/projects/projectName1234/userName/OpenFOAM/userName-8
(Note again that projectName1234 and userName are just placeholders for the expected output of the example. Also note that this example uses openfoam-org-container/8, so the number "8" relates to the version used in the example and should be changed to the version in use in the real case.)
5-. From the directory where your source files reside, compile your tools using the usual "OpenFOAM commands" through the singularity container. Note that it is always required to bind the host path to the internal path to which the variable WM_PROJECT_USER_DIR is defined. The binding is performed with the -B option of the singularity exec command, and the variables defined in steps 2 and 4 above are used to define the paths to bind:
$ singularity exec -B $wmpudOutside:$wmpudInside $SINGULARITY_CONTAINER wclean
$ singularity exec -B $wmpudOutside:$wmpudInside $SINGULARITY_CONTAINER wmake
Thanks to the binding, the containerised OpenFOAM tools will access the intended host directory correctly and will write the compiled tools and solvers under the path indicated.
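Since the -B $wmpudOutside:$wmpudInside binding has to accompany every call, it may be convenient to define a small helper function for the rest of the session. This helper is not provided by the modules; it is only a suggestion:

```shell
# Hypothetical convenience function: run any containerised command with the
# WM_PROJECT_USER_DIR binding always in place. Assumes wmpudInside,
# wmpudOutside and SINGULARITY_CONTAINER are already defined as above.
of_exec() {
    singularity exec -B "$wmpudOutside:$wmpudInside" "$SINGULARITY_CONTAINER" "$@"
}

# Usage:
#   of_exec wclean
#   of_exec wmake
```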
Use the variables "FOAM_USER_APPBIN" and "FOAM_USER_LIBBIN"
Besides the variable WM_PROJECT_USER_DIR and its binding to a directory in the host, OpenFOAM also uses the variables FOAM_USER_LIBBIN and FOAM_USER_APPBIN for solver/tool development. These variables are commonly used within the files "files" and "options" under the "Make" directory of the tools/solvers development structure, and possibly in some other files. If these variables are not used correctly in users' own files, users may receive error messages about reading/writing privileges. Users should read the OpenFOAM documentation to understand their proper use.
6-. After compilation, users can check that the compiled solver/tool exists in the host by exploring under the corresponding host path:
$ ls $wmpudOutside
platforms src
$ ls $wmpudOutside/platforms
linux64GccDPInt32Opt
$ ls $wmpudOutside/platforms/linux64GccDPInt32Opt
bin lib
$ ls $wmpudOutside/platforms/linux64GccDPInt32Opt/bin
yourSolverFoam
or
$ singularity exec -B $wmpudOutside:$wmpudInside $SINGULARITY_CONTAINER bash -c 'ls $FOAM_USER_APPBIN'
yourSolverFoam
(Note that yourSolverFoam
is just an example name.)
7-. To execute their own compiled tool/solver, users should always keep binding the host path (wmpudOutside) to the container path (wmpudInside) by adding "-B $wmpudOutside:$wmpudInside" to the usual Singularity+OpenFOAM commands, whether in interactive sessions or within Slurm batch scripts. Of course, these variables always need to be defined before the singularity command is used to call the tool/solver. For example, a simple test of the user's own solver in an interactive session would be:
$ module load openfoam-org-container/8
$ wmpudInside=$(singularity exec $SINGULARITY_CONTAINER bash -c 'echo $WM_PROJECT_USER_DIR')
$ wmpudOutside=$MYSOFTWARE/OpenFOAM/$USER-8
$ singularity exec -B $wmpudOutside:$wmpudInside $SINGULARITY_CONTAINER yourSolverFoam -help

Usage: yourSolverFoam [OPTIONS]
options:
  -case <dir>               specify alternate case directory, default is the cwd
  -fileHandler <handler>    override the fileHandler
  -hostRoots <(((host1 dir1) .. (hostN dirN))>
                            slave root directories (per host) for distributed running
  -libs <(lib1 .. libN)>    pre-load libraries
  ...
  -srcDoc                   display source code in browser
  -doc                      display application documentation in browser
  -help                     print the usage
(Note that yourSolverFoam
is just an example name.)
8-. Use within a Slurm batch script follows the same principles. For example, an adaptation of the example script for solver execution in the OpenFOAM: Example Slurm Batch Scripts page would be:
#!/bin/bash --login
#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1152
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=[neededTime]

module load openfoam-org-container/8

#--- Specific settings for the cluster you are on
#(Check the specific guide of the cluster for additional settings)
# ---
# Set MPI related environment variables. Not all need to be set
# main variables for multi-node jobs (uncomment for multinode jobs)
export MPICH_OFI_STARTUP_CONNECT=1
export MPICH_OFI_VERBOSE=1
#Ask MPI to provide useful runtime information (uncomment if debugging)
#export MPICH_ENV_DISPLAY=1
#export MPICH_MEMORY_REPORT=1

#--- Automating the list of IORANKS for collated fileHandler
echo "Setting the grouping ratio for collated fileHandling"
nProcs=$SLURM_NTASKS #Number of total processors in decomposition for this case
mGroup=32            #Size of the groups for collated fileHandling (32 is the initial recommendation for Setonix)
of_ioRanks="0"
iC=$mGroup
while [ $iC -le $nProcs ]; do
   of_ioRanks="$of_ioRanks $iC"
   ((iC += $mGroup))
done
export FOAM_IORANKS="("${of_ioRanks}")"
echo "FOAM_IORANKS=$FOAM_IORANKS"

#-- Defining the binding paths:
wmpudInside=$(singularity exec $SINGULARITY_CONTAINER bash -c 'echo $WM_PROJECT_USER_DIR')
wmpudOutside=$MYSOFTWARE/OpenFOAM/$USER-8

#-- Execute user's own solver:
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c 1 \
     singularity exec -B $wmpudOutside:$wmpudInside \
     $SINGULARITY_CONTAINER yourSolverFoam -parallel
Building a new image with compiled additional tools/solvers
Basically, users will need to build a new Docker image using a new Dockerfile with the build recipe. This recipe does not need to start from scratch: it can start from an existing image with OpenFOAM in it, and then it only needs to copy the source files into the new image and compile those additional tools/solvers. Currently we do not offer any OpenFOAM container modules with additional tools, so it is up to users to build their own new container images and use them as indicated in the following section. Nevertheless, our Git repository contains examples of Dockerfiles prepared to build OpenFOAM images that start from an existing, tested OpenFOAM image and are then equipped with additional tools available through the tool developers' Git repositories: https://github.com/PawseySC/pawsey-containers/tree/master/OpenFOAM/installationsWithAdditionalTools.
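A minimal sketch of such a recipe could look like the following (the base image tag, source path, bashrc location and solver name are placeholders/assumptions; see the repository linked above for complete, tested examples):

```dockerfile
# Hypothetical sketch: extend an existing OpenFOAM image with a user solver.
FROM quay.io/pawsey/openfoam-org:8

# Copy the solver sources into the image
COPY yourSolverFoam/ /opt/OpenFOAM/yourSolverFoam/

# Source the OpenFOAM environment and compile the solver inside the image
RUN /bin/bash -c 'source /opt/OpenFOAM/OpenFOAM-8/etc/bashrc \
    && cd /opt/OpenFOAM/yourSolverFoam \
    && wmake'
```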
The information in the following sections may be useful if you are looking to build a new container with additional tools.
Use of old versions of OpenFOAM not provided by Pawsey
We strongly recommend upgrading your workflow to a recent version of OpenFOAM
First of all, it is important to reiterate that one of the most important best practices for OpenFOAM at Pawsey is to minimise the number of result files. For this, we strongly recommend that users upgrade their workflows to the most recent versions of OpenFOAM, which are capable of reducing the number of result files through the collated fileHandler. The versions provided by Pawsey are capable of using the collated fileHandler. Please adhere to the recommended OpenFOAM: Best Practices.
As mentioned in the main documentation for OpenFOAM, several recent versions of OpenFOAM containers are already available on Setonix through modules. One advantage of these modules is that OpenFOAM tools and solvers are accessible through basic commands without the need to call singularity explicitly on the command line. The call to singularity is performed under the hood by the tool/solver wrappers.
If users need a new version of OpenFOAM that is not yet installed on our systems, please contact support so that we can make it available.
If users are still forced to use an old version of OpenFOAM that is not installed on our systems, we recommend using a container equipped with the needed version. Pawsey has already built several Docker images, with the main intention of serving as examples for users to build their own. Pawsey-provided Docker images are available through our registry on Red Hat Quay.io (https://quay.io/pawsey) in the directories openfoam and openfoam-org. Other directories with the "-legacy" postfix contain images built with operating-system versions that no longer run properly on Setonix. The corresponding build recipes (Dockerfiles) are in the Pawsey GitHub repository (https://github.com/PawseySC/pawsey-containers). As said, users should use these recipes as examples to build their own.
There are three requirements for containers to run MPI applications efficiently on Setonix:
- The image should be equipped with an MPICH ABI compatible MPI, and OpenFOAM should be compiled with it.
- The operating system of the image should be compatible with the Setonix operating system (only Ubuntu 20.04 has proven to be successful).
- The image needs to be converted from Docker format into Singularity format (this is performed automatically by Singularity when the image is pulled).
Most native OpenFOAM containers will not run on Setonix because they are usually compiled with OpenMPI, which is not ABI compatible with Cray-MPICH. All the example images and recipe files (Dockerfiles) in the links mentioned above have been properly tested on Setonix and follow the requirements above.
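A quick way to check the first requirement on an image is to inspect which MPI library a solver is linked against (this is a suggested check only; the solver name is just an example, so adjust it to the tools present in your image):

```shell
# Hypothetical check: list the MPI libraries a containerised solver links to.
# An MPICH-ABI build should show libmpi/libmpich rather than Open MPI libraries.
check_mpi() {
    singularity exec "$1" bash -c 'ldd "$(command -v pimpleFoam)" | grep -i mpi'
}

# Usage:
#   check_mpi myopenfoam-8.sif
```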
Images that used to work previously may not work on Setonix after the upgrade to CPE 23.03
Many images previously provided for use on Pawsey supercomputers were built FROM the Pawsey-provided base image of MPICH with Ubuntu 16.04 or 18.04. But images based on Ubuntu 16.04 or 18.04 do not run properly on Setonix after the upgrade to CPE 23.03. All our maintained images have therefore been updated and rebuilt FROM base images of MPICH with Ubuntu 20.04. Users who rely on images based on Ubuntu 16.04 or 18.04 will need to pull updated images from the Pawsey registry on quay.io and/or rebuild their own FROM base images of MPICH with Ubuntu 20.04.
For building their own images, we recommend that users use their own institution's Linux installation equipped with Docker. The installation and use of Docker on your own Linux computer, and the syntax for building containers, are out of the scope of this documentation, but instructions can be found elsewhere. As mentioned, we strongly recommend using one of the recipe examples in the links above and adapting it for your own purposes. Once a user's Docker image has been built, it can be converted into Singularity format and saved on Setonix. In general, the following steps can be followed:
- Get access to (or install) Docker within your own institution's Linux environment.
- Download the build recipe (Dockerfile) that is closest to the version of OpenFOAM that you want to build from scratch.
- Use Docker to build the image from exactly the same Dockerfile, without any modification. This is in order to test that "your own image", built on your own installation, is functional. (Go to steps 4-7 for testing.)
- Modify the recipe as desired and use Docker to build a new image with the desired version of OpenFOAM.
- Test your own image with the needed Docker and OpenFOAM commands to check that it is functional at this level.
- Convert the recently created Docker image into a Singularity image:
  - Push the Docker image to your own online registry within Docker Hub or Quay.
  - Pull the Docker image from the registry using Singularity (pulling a Docker image with singularity performs the conversion automatically). Pulling can be performed on Setonix (as shown in the example below), or on your own Linux installation if you also have Singularity there, which can be very handy.
  - (See also the information in our general documentation about pulling or building a container image.)
- Move the Singularity image to a directory in your own or your project's space in the /software file system on Setonix.
- Test the functionality of the image on Setonix using Singularity commands.
For users' own images, the new Docker image needs to be converted into Singularity format. The usual approach is to use the singularity command on Setonix to pull the corresponding Docker image from an online registry that stores it. In general, that Docker image will have been built by the users themselves and stored in their own online registry. As mentioned above, the conversion of the image into Singularity format is performed automatically during the pull command.
In the following example we show how to pull an existing Docker image, equipped with an older version of OpenFOAM, that resides in Pawsey's online registry. The resulting singularity image is saved in a directory accessible to the rest of the members of the user's research group (conversions may take some time, so be patient):
$ mkdir -p /software/projects/myProjectName/singularity/OpenFOAM
$ cd /software/projects/myProjectName/singularity/OpenFOAM
$ salloc -n 1 --mem=29600Mb -p copy
salloc: Granted job allocation 67529
$ module load singularity/<version>
$ srun singularity pull openfoam-org_2.4.x_pawsey.sif docker://quay.io/pawsey/openfoam-org:2.4.x
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob 11323ed2c653 done
Copying blob 0a71b4dba9db done
Copying blob 874ae6ae3b00 done
Copying blob 5d27d6ffcb7e done
...
2022/07/08 10:24:32  info unpack layer: sha256:cf6efca24562cd25028fcc99a4442f6062eeb62a60c7709f1454fc8bd8b0d5ea
2022/07/08 10:24:32  info unpack layer: sha256:072f30ec926604acc07e65868c407e1f0d3a3d98dd46340b04a572e55edcfaa6
2022/07/08 10:24:32  info unpack layer: sha256:9357e5bafecbf1d6af595baf24c54e2a10e182a738133a55c889293dcc7b4d10
2022/07/08 10:24:32  info unpack layer: sha256:7f6dad97028f1961f9b463b213119fea99228cda7f684aeeda213d5848ed125b
...
2022/07/08 10:25:12  info unpack layer: sha256:537ce1385428d12c4ba18c09984440f280cce516bcb65ba3531b4a2a95533cfd
INFO:    Creating SIF file...
$ ls
openfoam-org_2.4.x_pawsey.sif
Note that <version> is just a placeholder, and users should change it to one of the versions available on the system. Also, this is just an example: in practice, users would pull images from their own online registry and not from Pawsey's registry. Also note the use of --mem=29600Mb (the memory equivalent of 8 cores in the copy partition), which is enough for this operation. (In case of an out-of-memory error, the requested memory should be increased.) The resulting singularity image is a simple file with the .sif extension; storing and managing the singularity image is then performed by treating it as a file. Of course, if you already have a functional singularity image on another system, you can simply transfer that file to a useful path within your /software directory.
As these images are not provided as modules, there are no wrappers for the tools/solvers that can be called without the explicit use of singularity on the command line. Basically, these images should be used like any other singularity container to access the applications within.
Pre-processing/Post-processing with non-installed versions
Many of the pre-processing and post-processing tools of OpenFOAM are not parallel, so they need to be run as single-task jobs. Nevertheless, it is important to request the right amount of memory needed to manage the size of the simulation:
#!/bin/bash --login
#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=[neededMemory] #This setting is important as pre-processing jobs may need more than the default memory assigned to single task jobs
#SBATCH --time=[neededTime]

#-- Loading modules
module load singularity/<version>

#-- Defining the singularity image to use
export SINGULARITY_CONTAINER=path/To/The/Image/imageName.sif

#--- Specific settings for the cluster you are on
#(Check the specific guide of the cluster for additional settings)

#-- Execute tools:
srun -N 1 -n 1 -c 1 singularity exec $SINGULARITY_CONTAINER blockMesh
srun -N 1 -n 1 -c 1 singularity exec $SINGULARITY_CONTAINER setFields
srun -N 1 -n 1 -c 1 singularity exec $SINGULARITY_CONTAINER decomposePar
Here, the tools blockMesh, setFields and decomposePar are just classical examples of the many tools available in OpenFOAM.
As indicated in the script, users need to request the right amount of memory, which may be larger than the default for single-task jobs (1790Mb). For example, if the mesh is large and needs 64 gigabytes of memory, then use:
#SBATCH --mem=64G
Solver execution with non-installed versions
Solver execution uses a "classical" MPI Slurm job script:
#!/bin/bash --login
#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1152
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=[neededTime]

#-- Loading modules
module load singularity/<version>

#-- Defining the singularity image to use
export SINGULARITY_CONTAINER=path/To/The/Image/imageName.sif

#--- Specific settings for the cluster you are on
#(Check the specific guide of the cluster for additional settings)
# ---
# Set MPI related environment variables. Not all need to be set
# main variables for multi-node jobs (uncomment for multinode jobs)
export MPICH_OFI_STARTUP_CONNECT=1
export MPICH_OFI_VERBOSE=1
#Ask MPI to provide useful runtime information (uncomment if debugging)
#export MPICH_ENV_DISPLAY=1
#export MPICH_MEMORY_REPORT=1

#-- Execute the solver:
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c 1 \
     singularity exec $SINGULARITY_CONTAINER pimpleFoam -parallel
Here, the solver pimpleFoam is just a classical example of the many solvers available in OpenFOAM. As indicated in the script, the request is for exclusive access to the nodes and, therefore, no specific memory request is needed, as all the memory on each node will be available to the job.
Related pages
- OpenFOAM
- OpenFOAM: Best Practices
- OpenFOAM: Example Slurm Batch Scripts
- Containers
- Pull or build a container image