...
Terminal 3. Explicit use of the singularity command and the SINGULARITY_CONTAINER variable to query for an environment variable from the container
$ module load singularity/<VERSION>-mpi
$ export SINGULARITY_CONTAINER=/PathToTheSingularityImage/myopenfoam-8.sif
$ singularity exec $SINGULARITY_CONTAINER printenv | grep FOAM_ETC
FOAM_ETC=/opt/OpenFOAM/OpenFOAM-8/etc
$
(Note that in this example, an image owned by the user is being used.)
Or, if the user wishes to open an interactive session within the container:
Terminal 4. Explicit use of the singularity command and the SINGULARITY_CONTAINER variable to open an interactive session
$ module load openfoam-container/v2012
$ singularity shell $SINGULARITY_CONTAINER
Singularity> echo $FOAM_ETC
/opt/OpenFOAM/OpenFOAM-v2012/etc
Singularity>
(Note that in this example, the image provided by the containerised module is being used.)
And, of course, the singularity command can be used within Slurm batch scripts. So, the solver execution command in the example script on the OpenFOAM: Example Slurm Batch Scripts page can be modified to use the singularity command explicitly:
Listing 1. Example Slurm batch script to run a solver with 1152 MPI tasks
#!/bin/bash --login
#SBATCH --job-name=[name_of_job]
#SBATCH --partition=work
#SBATCH --ntasks=1152
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --exclusive
#SBATCH --time=[neededTime]
# --- Load modules and define images:
# -Using the containerised module:
module load openfoam-org-container/7
# -Using user's own image:
#module load singularity/<VERSION>-mpi #Adapt <VERSION> to the currently provided version of singularity
#SINGULARITY_CONTAINER="/PathToTheSingularityImage/myopenfoam-8.sif" #Adapt path and name to the correct ones
#--- Specific settings for the cluster you are on
#(Check the specific guide of the cluster for additional settings)
# ---
# Set MPI related environment variables. Not all need to be set
# main variables for multi-node jobs (uncomment for multinode jobs)
export MPICH_OFI_STARTUP_CONNECT=1
export MPICH_OFI_VERBOSE=1
#Ask MPI to provide useful runtime information (uncomment if debugging)
#export MPICH_ENV_DISPLAY=1
#export MPICH_MEMORY_REPORT=1
#--- Automating the list of IORANKS for collated fileHandler
echo "Setting the grouping ratio for collated fileHandling"
nProcs=$SLURM_NTASKS #Number of total processors in decomposition for this case
mGroup=32 #Size of the groups for collated fileHandling (32 is the initial recommendation for Setonix)
of_ioRanks="0"
iC=$mGroup
while [ $iC -le $nProcs ]; do
of_ioRanks="$of_ioRanks $iC"
((iC += $mGroup))
done
export FOAM_IORANKS="("${of_ioRanks}")"
echo "FOAM_IORANKS=$FOAM_IORANKS"
#-- Execute the solver:
srun -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c 1 \
singularity exec $SINGULARITY_CONTAINER pimpleFoam -parallel
(To use their own image, users should comment out the line that loads the containerised module and uncomment the lines that load the singularity module and define the SINGULARITY_CONTAINER variable as the real path to their own image. Obviously, the <VERSION> and the real path should be adapted.)
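For instance, when adapting Listing 1 to run with a user-owned image, the module-loading section of the script would read as follows (the singularity module version and the image path are placeholders that need to be replaced with the real ones):
# -Using the containerised module:
#module load openfoam-org-container/7
# -Using user's own image:
module load singularity/<VERSION>-mpi #Adapt <VERSION> to the currently provided version of singularity
SINGULARITY_CONTAINER="/PathToTheSingularityImage/myopenfoam-8.sif" #Adapt path and name to the correct ones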
Wrappers of the shell and exec commands
...