...
Each GPU node has 4 MI250X GPU cards, each of which in turn has 2 Graphics Compute Dies (GCDs), which are seen as 2 logical GPUs; so each GPU node has 8 GCDs, equivalent to 8 Slurm GPUs. On the other hand, the single AMD CPU chip has 64 cores organised in 8 groups that share the same L3 cache. Each of these L3 cache groups (or chiplets) has a direct Infinity Fabric connection with just one of the GCDs, providing optimal bandwidth. Each chiplet can communicate with the other GCDs, albeit at a lower bandwidth due to the additional communication hops. (In the examples explained in the rest of this document, we use the numbering of the cores and the bus IDs of the GCDs to identify the allocated chiplets and GCDs, and their binding.)
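If needed, this numbering can be checked interactively on a GPU node. The following is a minimal sketch only; the project name is a placeholder, and the -gpu suffix on the account follows Pawsey's convention for GPU allocations:

salloc --nodes=1 --gres=gpu:1 --partition=gpu --account=yourproject-gpu   #1 "allocation-pack"
srun rocm-smi --showbus        #PCI bus IDs of the GCDs visible to the job
srun lscpu --extended          #core numbering and L3 cache (chiplet) grouping of the CPU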
...
Pawsey's way of requesting resources on GPU nodes (different to standard Slurm)

The request of resources for the GPU nodes has changed dramatically. The main reason for this change is Pawsey's effort to provide a method for optimal binding of the GPUs to the CPU cores in direct physical connection for each task. For this, we decided to completely separate the options used for the resource request (via salloc or the #SBATCH header) from the options used for the use/management of resources with srun.
Furthermore, in the request of resources, users should not indicate any other Slurm allocation option related to memory or CPU cores. Pawsey also has some site-specific recommendations for the use/management of resources with srun.
The following table provides some examples that will serve as a guide for requesting resources on the GPU nodes. Most of the examples in the table are for typical jobs where multiple GPUs are allocated to the job as a whole but each of the tasks spawned by srun accesses only 1 of them (those interested in cases where multiple GPUs are accessible by 1 or more tasks should pay attention to cases 4, 5 & 7).
Notes for the request of resources:
Notes for the use/management of resources with srun:
General notes:
...
Note that the examples above are just for quick reference and that they do not show the use of the 2nd method for optimal binding (which may be the only way to achieve optimal binding for some applications). The rest of this page describes both methods of optimal binding in detail and also shows full job script examples for their use on Setonix GPU nodes.
Methods to achieve optimal binding of GCDs/GPUs
As mentioned above, and as the node diagram at the top of the page suggests, the optimal placement of GCDs and CPU cores for each task is to have direct communication between the CPU chiplet and the GCD in use. So, according to the node diagram, tasks being executed on cores in Chiplet 0 should use GPU 4 (Bus D1), tasks in Chiplet 1 should use GPU 5 (Bus D6), and so on.
...
To use GPU-aware Cray MPICH, users must set the following modules and environment variables:
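As a hedged sketch, on Cray systems with MI250X GPUs this typically amounts to loading the GPU target module and enabling GPU support in Cray MPICH (module names and versions available on Setonix may differ):

module load craype-accel-amd-gfx90a   #Cray compile/offload target for the MI250X (gfx90a) GPUs
export MPICH_GPU_SUPPORT_ENABLED=1    #enable GPU-aware communication in Cray MPICH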
...
...
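For this example (a single "allocation-pack": 1 task using 1 GCD), a minimal job script sketch could look as follows; the account name, time limit and executable path are placeholders, the -gpu suffix follows Pawsey's convention for GPU allocations, and the exact srun flags of the full example may differ:

#!/bin/bash --login
#SBATCH --account=yourproject-gpu   #placeholder project name
#SBATCH --partition=gpu
#SBATCH --nodes=1              #1 node in this example
#SBATCH --gres=gpu:1           #1 GPU per node (1 "allocation-pack" in total for the job)
#SBATCH --time=00:05:00

export OMP_NUM_THREADS=1       #1 CPU-core (thread) per task

srun -N 1 -n 1 -c 8 --gres=gpu:1 --gpus-per-task=1 --gpu-bind=closest ./hello_jobstep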
And the output after executing this example is:
The output of the hello_jobstep code tells us that CPU-core "002" and the GPU with Bus_ID:D1 were utilised by the job. Optimal binding is guaranteed for a single "allocation-pack", as the memory, CPU chiplet and GPU of each pack are optimally connected.
Shared node: 3 MPI tasks each controlling 1 GCD (logical/Slurm GPU)
As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. In this case we ask for 3 allocation-packs with:
#SBATCH --nodes=1 #1 node in this example
#SBATCH --gres=gpu:3 #3 GPUs per node (3 "allocation-packs" in total for the job)
Note that only these two allocation parameters are needed to provide the information for the requested number of allocation-packs, and no other parameter related to memory or CPU cores should be provided in the request header.
The use/management of the allocated resources is controlled by the srun options and some environment variables. As mentioned above, there are two methods for achieving optimal binding. The method that uses only srun parameters is preferred (method 1), but may not always work; in that case, the "manual" method (method 2) may be needed. The scripts for the two methods of optimal binding are in the following tabs:
...
C. Method 1: Optimal binding using srun parameters
For optimal binding using srun parameters, the options "--gpus-per-task" & "--gpu-bind=closest" need to be used:
...
Now, let's take a look at the output after executing the script:
...
And the output after executing this example is:
...
After checking the architecture diagram at the top of this page, it can be clearly seen that each of the assigned CPU-cores for the job is on a different L3 cache group chiplet (slurm-socket). But more importantly, it can be seen that the binding is optimal:
- CPU core "001" is on chiplet:0 and directly connected to GCD (logical GPU) with Bus_ID:D1
- CPU core "008" is on chiplet:1 and directly connected to GCD (logical GPU) with Bus_ID:D6
- CPU core "016" is on chiplet:2 and directly connected to GCD (logical GPU) with Bus_ID:C9
According to the architecture diagram, this binding configuration is optimal.
...
This first method is simpler, but may not work for all codes. "Manual" binding (method 2) may be the only reliable method for codes that rely on OpenMP or OpenACC pragmas to move data between host and GPU and that attempt to use GPU-to-GPU enabled MPI communication.
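As an illustration of what "manual" binding involves (a sketch only, not the full method-2 script from the tabs; the wrapper name is hypothetical), the idea is to combine an explicit CPU mask per task with a small wrapper that selects one GCD per task via ROCR_VISIBLE_DEVICES:

#Hypothetical wrapper: each task sees only the GCD matching its local task ID.
#Note: this simple SLURM_LOCALID mapping is NOT the optimal ordering on Setonix
#(chiplet 0 pairs with GPU 4, per the diagram); the real method-2 scripts reorder
#the devices to match the node architecture.
cat << 'EOF' > select_gpu_wrapper.sh
#!/bin/bash
export ROCR_VISIBLE_DEVICES=$SLURM_LOCALID
exec "$@"
EOF
chmod +x select_gpu_wrapper.sh

#One 8-core chiplet mask per task (cores 0-7, 8-15, 16-23), assuming the job was granted chiplets 0, 1 and 2.
CPU_BIND="mask_cpu:0xff,0xff00,0xff0000"
srun -N 1 -n 3 -c 8 --gres=gpu:3 --cpu-bind=${CPU_BIND} ./select_gpu_wrapper.sh ./hello_jobstep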
...
Example scripts for: Hybrid jobs (multiple threads) on the CPU side
When the code is hybrid on the CPU side (MPI + OpenMP), the logic is similar to the above examples, except that more than 1 CPU core in the chiplet needs to be accessible to each srun task. This is controlled by the OMP_NUM_THREADS environment variable and also implies a change in the settings for the optimal binding of resources when "manual" binding (method 2) is applied.
In the following example, we use 3 GCDs (logical/Slurm GPUs), 1 per MPI task, and 5 CPU threads per task. As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. In this case we ask for 3 allocation-packs with:
#SBATCH --nodes=1 #1 node in this example
#SBATCH --gres=gpu:3 #3 GPUs per node (3 "allocation-packs" in total for the job)
Note that only these two allocation parameters are needed to provide the information for the requested number of allocation-packs, and no other parameter related to memory or CPU cores should be provided in the request header. And the real number of threads per task is controlled with:
export OMP_NUM_THREADS=5 #This controls the real CPU-cores per task for the executable
The use/management of the allocated resources is controlled by the srun options and some environment variables. As mentioned above, there are two methods for achieving optimal binding. The method that uses only srun parameters is preferred (method 1), but may not always work; in that case, the "manual" method (method 2) may be needed. The scripts for the two methods of optimal binding are in the following tabs:
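As a quick reference (the tab scripts give the complete versions), a minimal method-1 sketch for this hybrid case might look like the following. Note that "-c 8" still reserves the whole 8-core chiplet of each allocation-pack, while OMP_NUM_THREADS limits how many of those cores are actually used; the OMP_PLACES/OMP_PROC_BIND settings are optional assumptions for thread pinning:

export OMP_NUM_THREADS=5       #5 CPU threads per task (within the 8-core chiplet of each pack)
export OMP_PLACES=cores        #optional: pin each OpenMP thread to its own core
export OMP_PROC_BIND=close

srun -N 1 -n 3 -c 8 --gres=gpu:3 --gpus-per-task=1 --gpu-bind=closest ./hello_jobstep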
Example scripts for: Jobs where each task needs access to multiple GPUs
Exclusive nodes: all 8 GPUs in each node accessible to all 8 tasks in the node
Some applications, like TensorFlow and other machine learning applications, may require access to all the available GPUs in the node. In this case, optimal binding and communication cannot be granted by the scheduler when assigning resources to the srun launcher, so the full responsibility for the optimal use of the resources relies on the code itself.
As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. This example considers a job that will make use of all 8 GCDs (logical/Slurm GPUs) on each of 2 nodes (16 "allocation-packs" in total). The resource request uses the following two parameters:
#SBATCH --nodes=2 #2 nodes in this example
#SBATCH --exclusive #All resources of each node are exclusive to this job
# #8 GPUs per node (16 "allocation-packs" in total for the job)
Note that only these two allocation parameters are needed to provide the information for the requested number of allocation-packs, and no other parameter related to memory or CPU cores should be provided in the request header.
The use/management of the allocated resources is controlled by the srun options and some environment variables. As mentioned above, optimal binding cannot be achieved by the scheduler in this case, so no settings for optimal binding are given to the launcher. Also, all the GPUs in the node are available to each of the tasks:
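A minimal sketch of the corresponding srun line is shown below; because no per-task GPU binding options are used, every task keeps all 8 GCDs of its node visible:

export OMP_NUM_THREADS=1       #1 CPU-core (thread) per task

srun -N 2 -n 16 -c 8 --gres=gpu:8 ./hello_jobstep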
And the output after executing this example is:
The output of the hello_jobstep code tells us that the job ran 8 MPI tasks on node nid002944 and another 8 MPI tasks on node nid002946. Each of the MPI tasks has only 1 CPU-core assigned to it (with the use of the OMP_NUM_THREADS environment variable in the script) and can be identified with the HWT number. Clearly, each of the CPU tasks runs on a different chiplet.

More importantly for this example, each of the MPI tasks has access to all 8 GCDs (logical/Slurm GPUs) in its node. Proper and optimal GPU management and communication is the responsibility of the code. The hardware identification is done via the Bus_ID (as the other GPU_IDs are not physical but relative to the job).
Shared nodes: many GPUs requested but 2 GPUs bound to each task
Some applications may require each of the spawned tasks to have access to multiple GPUs. In this case, some optimal binding and communication can still be granted by the scheduler when assigning resources with the srun launcher, although final responsibility for the optimal use of the multiple GPUs assigned to each task relies on the code itself.
As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. This example considers a job that will make use of 6 GCDs (logical/Slurm GPUs) on 1 node (6 "allocation-packs" in total). The resource request uses the following two parameters:
#SBATCH --nodes=1 #1 node in this example
#SBATCH --gres=gpu:6 #6 GPUs per node (6 "allocation packs" in total for the job)
Note that only these two allocation parameters are needed to provide the information for the requested number of allocation-packs, and no other parameter related to memory or CPU cores should be provided in the request header.
The use/management of the allocated resources is controlled by the srun options and some environment variables. As mentioned above, some degree of optimal binding can still be achieved by the scheduler by providing 2 GPUs to each of the tasks:
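A sketch of the srun line for this case could be the following; "-c 16" makes two chiplets accessible to each task so that the closest pair of GCDs can be bound to it (the exact flags used in the full tab script may differ):

export OMP_NUM_THREADS=1       #1 CPU-core (thread) per task

srun -N 1 -n 3 -c 16 --gres=gpu:6 --gpus-per-task=2 --gpu-bind=closest ./hello_jobstep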
And the output after executing this example is:
The output of the hello_jobstep code tells us that the job ran 3 MPI tasks on node nid002948. Each of the MPI tasks has only 1 CPU-core assigned to it (with the use of the OMP_NUM_THREADS environment variable in the script) and can be identified with the HWT number. Clearly, each of the CPU tasks runs on a different chiplet. But more important, the chiplets are spaced every 16 cores (two chiplets), thanks to the "-c 16" setting in the srun command, allowing for the best binding of the 2 GPUs assigned to each task.

More importantly for this example, each of the MPI tasks has access to 2 GCDs (logical/Slurm GPUs) in its node. The hardware identification is done via the Bus_ID (as the other GPU_IDs are not physical but relative to the job). After checking the architecture diagram at the top of this page, it can be seen that the assigned GPUs are indeed the 2 closest to the CPU cores of each task, so the binding is optimal. Final proper and optimal GPU management and communication is the responsibility of the code.
Example scripts for: Packing GPU jobs
Packing the execution of 8 independent instances each using 1 GCD (logical/Slurm GPU)
...
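One common pattern for this kind of packing (a sketch only; the instance executable and its inputs are hypothetical placeholders) is to launch several background srun job steps, each restricted to its own allocation-pack with "--exact", and then wait for all of them:

#SBATCH --nodes=1              #1 node in this example
#SBATCH --exclusive            #8 GPUs per node (8 "allocation-packs" in total for the job)

export OMP_NUM_THREADS=1       #1 CPU-core (thread) per instance

for i in $(seq 0 7); do
   srun -N 1 -n 1 -c 8 --gres=gpu:1 --gpus-per-task=1 --gpu-bind=closest --exact \
        ./instance.exe input_${i} &    #hypothetical executable and input for each instance
done
wait    #wait for all background job steps to finish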
...
- Setonix User Guide
- Example Slurm Batch Scripts for Setonix on CPU Compute Nodes
- Setonix General Information: GPU node architecture