OpenFOAM: Example Slurm Batch Scripts

Before executing an OpenFOAM job at Pawsey

Please adhere to the recommended best practices

Complying with the best practices listed in this documentation keeps our file systems performing optimally. Please read OpenFOAM: Best Practices

Example Slurm batch scripts

Pre-processing/Post-processing

Many of the pre-processing and post-processing tools of OpenFOAM are not parallel, so they need to be run as single-task jobs. Nevertheless, it is important to request the right amount of memory needed to manage the size of the simulation:


Listing 1. Example jobscript to run pre-processing with single task tools
#!/bin/bash --login
 
#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=[neededMemory]  #It's important to assign enough memory
#SBATCH --time=[neededTime]

#--- Load necessary modules and list them:
module load openfoam-org-container/7
module list

#--- Specific settings for the cluster you are on
#(Check the specific guide of the cluster for additional settings)


#--- Automating the list of IORANKS for collated fileHandler
echo "Setting the grouping ratio for collated fileHandling"
nProcs=1152 #Number of total processors in decomposition for this case
mGroup=32   #Size of the groups for collated fileHandling (32 is the initial recommendation for Setonix)
of_ioRanks="0"
iC=$mGroup
while [ $iC -le $nProcs ]; do
   of_ioRanks="$of_ioRanks $iC"
   ((iC += $mGroup))
done
export FOAM_IORANKS="(${of_ioRanks})"
echo "FOAM_IORANKS=$FOAM_IORANKS"

#--- Execute tools:
srun -N 1 -n 1 -c 1 blockMesh
srun -N 1 -n 1 -c 1 setFields
srun -N 1 -n 1 -c 1 decomposePar 

The tools blockMesh, setFields and decomposePar are just typical examples of the many tools available in OpenFOAM.
As indicated in the script, users need to request the right amount of memory for their pre-processing/post-processing. For example, if the mesh is of regular size and needs 64 gigabytes of memory, then use:

#SBATCH --mem=64G

or, if the whole memory of the node is needed, you can instead use:

#SBATCH --exclusive

And, if even more memory is needed, you can submit the pre-processing job to the highmem partition:

#SBATCH --partition=highmem
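For instance, a large-memory single-task job could combine the highmem partition with an explicit memory request (the memory value is a placeholder, as in the listing above; check the limits of the highmem partition on your cluster):

#SBATCH --partition=highmem
#SBATCH --ntasks=1
#SBATCH --mem=[neededMemory]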

In the above example, a containerised module is being used. As explained in the main page, when using an OpenFOAM container offered as a module, there is no need to explicitly invoke Singularity commands; the usual OpenFOAM commands work as they are. Containerised modules have been configured so that the names of all OpenFOAM tools (now wrappers) invoke Singularity and the actual containerised tools under the hood. For the use of explicit Singularity commands with these containers, check: OpenFOAM: Advance use of containerised modules and external containers. When using a bare-metal module, the OpenFOAM commands look the same as in the example above.
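For reference only, a minimal sketch of what an equivalent explicit Singularity invocation could look like for an external container is shown below. The image path and variable name are purely illustrative, and the Singularity module must be loaded first; see the linked page for the supported approach with containerised modules:

#--- Hypothetical sketch: running a single-task tool from an external OpenFOAM image
theImage=$MYSCRATCH/openfoam-org_7.sif    #illustrative path; replace with your own .sif image
srun -N 1 -n 1 -c 1 singularity exec $theImage blockMesh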

Solver execution

Solver execution uses a "classical" MPI Slurm batch script:


Listing 2. Example Slurm batch script to run a solver with 1152 mpi tasks
#!/bin/bash --login
 
#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1152
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=[neededTime]

#--- Load necessary modules and list them:
module load openfoam-org-container/7
module list

#--- Specific settings for the cluster you are on
#(Check the specific guide of the cluster for additional settings)

#--- MPI settings:
# Set MPI-related environment variables. Not all need to be set
# Main variables for multi-node jobs (this example runs on multiple nodes, so they are set here)
export MPICH_OFI_STARTUP_CONNECT=1
export MPICH_OFI_VERBOSE=1
#Ask MPI to provide useful runtime information (uncomment if debugging)
#export MPICH_ENV_DISPLAY=1
#export MPICH_MEMORY_REPORT=1

#--- Automating the list of IORANKS for collated fileHandler
echo "Setting the grouping ratio for collated fileHandling"
nProcs=$SLURM_NTASKS #Number of total processors in decomposition for this case
mGroup=32            #Size of the groups for collated fileHandling (32 is the initial recommendation for Setonix)
of_ioRanks="0"
iC=$mGroup
while [ $iC -le $nProcs ]; do
   of_ioRanks="$of_ioRanks $iC"
   ((iC += $mGroup))
done
export FOAM_IORANKS="(${of_ioRanks})"
echo "FOAM_IORANKS=$FOAM_IORANKS"

#--- Execute the solver:
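#(The number of MPI tasks must match the number of subdomains used by decomposePar)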
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c 1 pimpleFoam -parallel

The solver pimpleFoam is just a typical example of the many solvers available in OpenFOAM. As indicated in the script, the request is for exclusive access to the nodes; therefore, no specific memory request is needed, as all the memory in each node will be available to the job.
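With --ntasks=1152 and --ntasks-per-node=128, Slurm will allocate 1152 / 128 = 9 whole nodes for this job; the srun command then inherits these values through SLURM_JOB_NUM_NODES and SLURM_NTASKS.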

As in the pre-processing example, a containerised module is being used here, and the same considerations about containerised wrappers, explicit Singularity commands and bare-metal modules apply.

Related pages