...
FROM quay.io/pawsey/tensorflow:2.12.1.570-rocm5.6.0
To pull the image to your local desktop with Docker, you can use:
$ docker pull quay.io/pawsey/tensorflow:2.12.1.570-rocm5.6.0
To learn more about our recommended workflow of building containers with Docker and then converting them into Singularity format for use on Setonix, please refer to the Containers Documentation.
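As a minimal sketch (the Containers Documentation describes the recommended workflow, and the output filename here is only an example), the same image can be converted into a Singularity image file (SIF) with:
$ singularity pull tensorflow_2.12.1.570-rocm5.6.0.sif docker://quay.io/pawsey/tensorflow:2.12.1.570-rocm5.6.0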
...
Listing 1. runTensorflow.sh : An example batch script to run a TensorFlow distributed training job.
#!/bin/bash --login
#SBATCH --job-name=tensorflow_multiGPU
#SBATCH --partition=gpu
#SBATCH --nodes=2 #2 nodes in this example
#SBATCH --exclusive #All resources of the node are exclusive to this job
# #8 GPUs per node (16 "allocation packs" in total for the job)
#SBATCH --time=00:05:00
#SBATCH --account=pawsey12345-gpu #IMPORTANT: use your own project and the -gpu suffix
#----
#Loading needed modules:
module load tensorflow/<version> #Adapt this line for the correct version
echo -e "\n\n#------------------------#"
module list
#----
#Printing the status of the given allocation
echo -e "\n\n#------------------------#"
echo "Printing from scontrol:"
scontrol show job ${SLURM_JOBID}
#----
#If additional python packages have been installed in the user's own virtual environment
#(see the sketch after this listing for one way such an environment might be created):
VENV_PATH=$MYSOFTWARE/manual/software/pythonEnvironments/tensorflowContainer-environments/myenv
#----
#Definition of the python script containing the tensorflow training case
PYTHON_SCRIPT_DIR=$MYSCRATCH/matilda-machinelearning/models
PYTHON_SCRIPT=$PYTHON_SCRIPT_DIR/00_myTensorflowScript.py
#----
#TensorFlow settings if needed:
# The following two variables control the real number of threads in Tensorflow code:
export TF_NUM_INTEROP_THREADS=1 #Number of threads for independent operations
export TF_NUM_INTRAOP_THREADS=1 #Number of threads within individual operations
#----
#Execution
#Note: srun needs the explicit indication of the full parameters for the use of resources in the job step.
# These are independent from the allocation parameters (which are not inherited by srun)
# Each task needs access to all the 8 available GPUs in the node where it's running.
# So, no optimal binding can be provided by the scheduler.
# Therefore, "--gpus-per-task" and "--gpu-bind" are not used.
# Optimal use of resources is now the responsibility of the code.
# "-c 8" is used to force allocation of 1 task per CPU chiplet. Then, the REAL number of threads
# for the code SHOULD be defined by the environment variables above.
echo -e "\n\n#------------------------#"
echo "Code execution:"
#When using a virtual environment:
srun -N 2 -n 16 -c 8 --gres=gpu:8 bash -c "source $VENV_PATH/bin/activate && python3 $PYTHON_SCRIPT"
#When no virtual environment is needed:
#srun -N 2 -n 16 -c 8 --gres=gpu:8 python3 $PYTHON_SCRIPT
#----
#Printing information of finished job steps:
echo -e "\n\n#------------------------#"
echo "Printing information of finished jobs steps using sacct:"
sacct -j ${SLURM_JOBID} -o jobid%20,Start%20,elapsed%20
#----
#Done
echo -e "\n\n#------------------------#"
echo "Done" |
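Assuming the script is saved as runTensorflow.sh (as in the listing title), the job is then submitted and monitored with the usual Slurm commands:
$ sbatch runTensorflow.sh
$ squeue -u $USER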
...