...
Warning: the TensorFlow module has known issues (see the TensorFlow page).
In this tutorial, you are going to see how to write a Horovod-powered distributed TensorFlow computation. More specifically, the final goal is to train different models in parallel by assigning each of them to a different GPU. The discussion is organised in two sections: the first illustrates Horovod's basic concepts and its use with TensorFlow; the second uses the MNIST classification task as a test case.
...
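A minimal sketch of the GPU-pinning snippet walked through below, using the standard `horovod.tensorflow` and TensorFlow 2 APIs (`hvd.rank`, `hvd.local_rank`, `tf.config.experimental.set_visible_devices`):

```python
import tensorflow as tf
import horovod.tensorflow as hvd

# Initialise Horovod; this must happen before any other hvd call.
hvd.init()

# Retrieve the global rank and the node-local rank of this process.
rank = hvd.rank()
local_rank = hvd.local_rank()
print(f"Global rank: {rank}, local rank: {local_rank}")

# List the GPUs visible on this node; processes on the same node
# see the same list, processes on different nodes see disjoint lists.
gpus = tf.config.experimental.list_physical_devices('GPU')

# Use the local rank as an index so that no two processes on the
# same node select the same GPU.
tf.config.experimental.set_visible_devices(gpus[local_rank], 'GPU')
```

Pinning through `set_visible_devices` makes the selected GPU the only device TensorFlow can see in that process, which is what prevents two processes on the same node from allocating memory on the same card.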
The calls to `hvd.rank()` and `hvd.local_rank()` are convenient functions to retrieve the rank and local rank of the process, which are then logged for demonstration purposes. Next, each process retrieves the list of GPUs available on the node it is running on. Of course, processes on the same node will retrieve the same list, whereas any two processes running on different nodes will see different, non-overlapping sets of GPUs. In the latter case, resource contention is structurally impossible; it is in the former case that the local rank concept comes in handy. Each process uses its local rank as an index to select a GPU from the `gpus` list and will not share it with any other process because:
...
The last function call sets the GPU that TensorFlow will use in each process.
To test what we have written so far, use the batch job script `runTensorflow.sh` provided on the previous page as a template for submitting the job. You will need to adapt it: remove the `--exclusive` option, change the number of GPUs per node to 2 in the resource request, make the corresponding changes in the `srun` command, and use the Python script proposed here (`01_horovod_mnist.py`), which contains the two parts described above. The adapted lines of the batch job script should look like:
```bash
#SBATCH --nodes=2      # 2 nodes in this example
#SBATCH --gres=gpu:2   # 2 GPUs per node
...
PYTHON_SCRIPT=$PYTHON_SCRIPT_DIR/01_horovod_mnist.py
...
srun -N 2 -n 4 -c 8 --gres=gpu:2 python3 $PYTHON_SCRIPT
```
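Here `srun` launches 4 tasks across the 2 nodes, i.e. 2 per node, matching the 2 GPUs requested per node, so every process can pin a distinct GPU through its local rank; `-c 8` reserves 8 CPU cores for each task.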
...
...