...
The GPU node architecture is different from that of the CPU-only nodes. The following diagram shows the connections between the CPU and the GPUs on the node, which will assist with understanding the recommendations for Slurm job scripts later on this page. Note that the numbering of the CPU cores follows a slightly different order to that of the GPUs. Each GCD can access 64 GB of GPU memory. This totals 128 GB per MI250X, and 512 GB per standard GPU node.
Section |
---|
Column |
---|
![](https://pawsey.atlassian.net/wiki/download/thumbnails/51929056/Setonix-GPU-Node.png?version=2&modificationDate=1682054018511&cacheVersion=1&api=v2&height=400)
Figure 1. GPU node architecture. Note that each GPU shown here is equivalent to a GCD (more information about this is in the Setonix General Information page). |
|
Each GPU node has 4 MI250X GPU cards, each of which has 2 Graphics Compute Dies (GCDs), which are seen as 2 logical GPUs; so each GPU node has 8 GCDs, equivalent to 8 Slurm GPUs. On the other hand, the single AMD CPU chip has 64 cores organised in 8 groups that share the same L3 cache. Each of these L3 cache groups (or chiplets) has a direct Infinity Fabric connection with just one of the GCDs, providing optimal bandwidth. Each chiplet can also communicate with the other GCDs, albeit at a lower bandwidth due to the additional communication hops. (In the examples explained in the rest of this document, we use the numbering of the cores and the bus IDs of the GCDs to identify the allocated chiplets and GCDs, and their binding.)
Note |
---|
title | Important: GCD vs GPU and effective meaning when allocating GPU resources at Pawsey |
---|
|
A MI250X GPU card has two GCDs. Previous generations of GPUs only had 1 GCD per GPU card, so these terms could be used interchangeably. The interchangeable usage continues even though GPUs now have more than one GCD. Slurm, for instance, only uses the GPU terminology when referring to accelerator resources, so a request such as --gres=gpu:number is equivalent to a request for a certain number of GCDs per node. On Setonix, the maximum number is 8. (Note that the "equivalent" option --gpus-per-node=number is not recommended as we have found some bugs with its use.) Furthermore, Pawsey DOES NOT use the standard Slurm meaning for the --gres=gpu:number parameter. The meaning of this parameter has been superseded to represent the request for a number of "allocation-packs". This new representation has been implemented to achieve best performance. Therefore, the current allocation method uses the "allocation-pack" as the basic allocation unit and, as explained in the rest of this document, users should only request the number of "allocation-packs" that fulfils the needs of the job. Each allocation-pack provides: - 1 whole CPU chiplet (8 CPU cores)
- ~32 GB memory (1/8 of the total available RAM)
- 1 GCD (slurm GPU) directly connected to that chiplet
|
...
Excerpt |
---|
Pawsey's way of requesting resources on GPU nodes (different from standard Slurm). The request of resources for the GPU nodes has changed dramatically. The main reason for this change is Pawsey's effort to provide a method for optimal binding of the GPUs to the CPU cores in direct physical connection for each task. For this, we decided to completely separate the options used for the resource request via salloc (or #SBATCH pragmas) from the options for the use of resources during execution of the code via srun. Note |
---|
title | Request for the amount of "allocation-packs" required for the job |
---|
| With a new CLI filter that Pawsey staff have put in place for the GPU nodes, the request of resources on the GPU nodes should be thought of as requesting a number of "allocation-packs". Each "allocation-pack" provides: - 1 whole CPU chiplet (8 CPU cores)
- slightly less than 32 GB of memory (29.44 GB of memory, to be exact, leaving some memory for the system to operate the node) = 1/8 of the total available RAM
- 1 GCD directly connected to that chiplet
For that, the request of resources only needs the number of nodes (--nodes, -N) and the number of allocation-packs per node (--gres=gpu:number). The total number of allocation-packs requested results from the multiplication of these two parameters. Note that the standard Slurm meaning of the second parameter IS NOT used at Pawsey. Instead, Pawsey's CLI filter interprets this parameter as: - the number of requested "allocation-packs" per node
Note that the "equivalent" option --gpus-per-node=number (which is also interpreted as the number of "allocation-packs" per node) is not recommended as we have found some bugs with its use. |
Furthermore, in the request of resources, users should not indicate any other Slurm allocation option related to memory or CPU cores. Therefore, users should not use --ntasks, --cpus-per-task, --mem, etc. in the request headers of the script (#SBATCH directives), or in the request options given to salloc for interactive sessions. If, for some reason, the requirements of a job are indeed determined by the number of CPU cores or the amount of memory, then users should estimate the number of "allocation-packs" that covers their needs. The "allocation-pack" is the minimal unit of resources that can be managed, so all allocation requests should indeed be multiples of this basic unit. Pawsey also has some site-specific recommendations for the use/management of resources with the srun command. Users should explicitly provide a list of several parameters for the use of resources by srun. (The list of these parameters is made clear in the examples below.) Users should not assume that srun will inherit any of these parameters from the allocation request. Therefore, the real management of resources at execution time is performed by the command line options provided to srun. Note that, for the case of srun, the options do have the standard Slurm meaning.
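As an illustration of this separation (a sketch only; the executable name is hypothetical, and the exact options for each use case are given in the table and examples below), a job asking for 2 allocation-packs and then using them explicitly could look like:
#SBATCH --nodes=1
#SBATCH --gres=gpu:2      #2 allocation-packs: 2 chiplets (16 CPU cores), ~59 GB RAM, 2 GCDs
#(no --ntasks, --cpus-per-task or --mem options in the request header)

export OMP_NUM_THREADS=1  #real number of CPU threads per task
#All "use" options are given explicitly to srun (standard Slurm meaning):
srun -N 1 -n 2 -c 8 --gres=gpu:2 --gpus-per-task=1 --gpu-bind=closest ./the_executable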
Warning |
---|
title | --gpu-bind=closest may NOT work for all applications |
---|
| Within the full explicit srun options for "managing resources", there are some that help to achieve optimal binding of GPUs to their directly connected chiplet on the CPU. There are two methods to achieve this optimal binding of GPUs. So, together with the full explicit srun options, the following two methods can be used: - Include these two Slurm parameters:
--gpus-per-task=<number> together with --gpu-bind=closest - "Manual" optimal binding with the use of "two auxiliary techniques" (explained later in the main document).
The first method is simpler, but may still launch execution errors for some codes. "Manual" binding may be the only useful method for codes relying OpenMP or OpenACC pragma's for moving data from/to host to/from GPU and attempting to use GPU-to-GPU enabled MPI communication. An example of such a code is Slate. |
The following table provides some examples that will serve as a guide for requesting resources on the GPU nodes. Most of the examples in the table are for typical jobs where multiple GPUs are allocated to the job as a whole, but each of the tasks spawned by srun is bound to, and has direct access to, only 1 GPU. For applications that require multiple GPUs per task, there are 3 examples (*4, *5 & *7) where tasks are bound to multiple GPUs: Required Resources per Job | New "simplified" way of requesting resources | Total Allocated resources | Charge per hour | The use of full explicit srun options is now required (only the 1st method for optimal binding is listed here) |
---|
1 CPU task (single CPU thread) controlling 1 GCD (Slurm GPU) | #SBATCH --nodes=1
#SBATCH --gres=gpu:1 | 1 allocation-pack = 1 GPU, 8 CPU cores (1 chiplet), 29.44 GB CPU RAM | 64 SU | *1
export OMP_NUM_THREADS=1
srun -N 1 -n 1 -c 8 --gres=gpu:1 --gpus-per-task=1 --gpu-bind=closest <executable>
| 1 CPU task (with 14 CPU threads), all threads controlling the same 1 GCD | #SBATCH --nodes=1
#SBATCH --gres=gpu:2
| 2 allocation-packs= 2 GPUs, 16 CPU cores (2 chiplets), 58.88 GB CPU RAM | 128 SU | *2
export OMP_NUM_THREADS=14
srun -N 1 -n 1 -c 16 --gres=gpu:1 --gpus-per-task=1 --gpu-bind=closest <executable>
| 3 CPU tasks (single thread each), each controlling 1 GCD with GPU-aware MPI communication | #SBATCH --nodes=1
#SBATCH --gres=gpu:3 | 3 allocation-packs= 3 GPUs, 24 CPU cores (3 chiplets), 88.32 GB CPU RAM | 192 SU | *3
export MPICH_GPU_SUPPORT_ENABLED=1
export OMP_NUM_THREADS=1
srun -N 1 -n 3 -c 8 --gres=gpu:3 --gpus-per-task=1 --gpu-bind=closest <executable>
| 2 CPU tasks (single thread each), each task controlling 2 GCDs with GPU-aware MPI communication | #SBATCH --nodes=1
#SBATCH --gres=gpu:4
| 4 allocation-packs = 4 GPUs, 32 CPU cores (4 chiplets), 117.76 GB CPU RAM | 256 SU | *4
export MPICH_GPU_SUPPORT_ENABLED=1
export OMP_NUM_THREADS=1
srun -N 1 -n 2 -c 16 --gres=gpu:4 --gpus-per-task=2 --gpu-bind=closest <executable>
| 5 CPU tasks (with 2 CPU threads each), all threads/tasks able to see all 5 GPUs | #SBATCH --nodes=1
#SBATCH --gres=gpu:5
| 5 allocation-packs= 5 GPUs, 40 CPU cores (5 chiplets), 147.2 GB CPU RAM | 320 SU | *5
export MPICH_GPU_SUPPORT_ENABLED=1
export OMP_NUM_THREADS=2
srun -N 1 -n 5 -c 8 --gres=gpu:5 <executable>
| 8 CPU tasks (single thread each), each controlling 1 GCD with GPU-aware MPI communication | #SBATCH --nodes=1
#SBATCH --exclusive | 8 allocation-packs = 8 GPUs, 64 CPU cores (8 chiplets), 235 GB CPU RAM | 512 SU | *6
export MPICH_GPU_SUPPORT_ENABLED=1
export OMP_NUM_THREADS=1
srun -N 1 -n 8 -c 8 --gres=gpu:8 --gpus-per-task=1 --gpu-bind=closest <executable>
| 8 CPU tasks (single thread each), each controlling 4 GCDs with GPU-aware MPI communication | #SBATCH --nodes=4
#SBATCH --exclusive | 32 allocation-packs = 4 nodes, each with: 8 GPUs, 64 CPU cores (8 chiplets), 235 GB CPU RAM | 2048 SU | *7
export MPICH_GPU_SUPPORT_ENABLED=1
export OMP_NUM_THREADS=1
srun -N 4 -n 8 -c 32 --gres=gpu:8 --gpus-per-task=4 --gpu-bind=closest <executable>
| 1 CPU task (single thread), controlling 1 GCD but preventing other jobs from running on the same node, for an ideal performance test | #SBATCH --nodes=1
#SBATCH --exclusive | 8 allocation-packs = 8 GPUs, 64 CPU cores (8 chiplets), 235 GB CPU RAM | 512 SU | *8
export OMP_NUM_THREADS=1
srun -N 1 -n 1 -c 8 --gres=gpu:1 --gpus-per-task=1 --gpu-bind=closest <executable>
| Notes for the request of resources:
- Note that this simplified way of requesting resources is based on requesting a number of "allocation-packs", so the standard use of Slurm parameters for allocation should not be applied to GPU resources.
- The --nodes (-N) option indicates the number of nodes requested to be allocated.
- The --gres=gpu:number option indicates the number of allocation-packs requested to be allocated per node. (The "equivalent" option --gpus-per-node=number is not recommended as we have found some bugs with its use.)
- The --exclusive option requests all the resources from the number of requested nodes. When this option is used, there is no need to use --gres=gpu:number during allocation and, indeed, its use is not recommended in this case.
- Users should not include any other Slurm allocation option that may indicate some "calculation" of required memory or CPU cores. The management of resources should only be performed after allocation, via the srun options.
- The same simplified resource request should be used when requesting interactive sessions with salloc.
- IMPORTANT: In addition to the request parameters shown in the table, users should indeed use other Slurm request parameters related to partition, walltime, job naming, output, email, etc. (Check the examples of the full Slurm batch scripts.)
Notes for the use/management of resources with srun:
- Note that, for the case of srun, the options do have the standard Slurm meaning.
- The following options need to be explicitly provided to srun and should not be assumed to be inherited with some default value from the allocation request:
- The --nodes (-N) option indicates the number of nodes to be used by the srun step.
- The --ntasks (-n) option indicates the total number of tasks to be spawned by the srun step. By default, tasks are spawned evenly across the number of allocated nodes.
- The --cpus-per-task (-c) option should be set to multiples of 8 (whole chiplets) to guarantee that srun will distribute the resources in "allocation-packs", "reserving" whole chiplets per srun task even if the real number of threads per task is smaller. The real number of threads is controlled with the OMP_NUM_THREADS environment variable.
- The --gres=gpu:number option indicates the number of GPUs per node to be used by the srun step. (The "equivalent" option --gpus-per-node=number is not recommended as we have found some bugs with its use.)
- The --gpus-per-task option indicates the number of GPUs to be bound to each of the tasks spawned by the srun step via the -n option. Note that this option prevents the GPUs assigned to a task from being shared with other tasks. (See cases *4, *5 and *7 and their notes for non-intuitive cases.)
- For optimal binding, the following should also be used:
- The --gpu-bind=closest option indicates that the GPUs bound to each task should be the optimal (physically closest) ones with respect to the chiplet assigned to that task.
- IMPORTANT: The use of --gpu-bind=closest will assign optimal binding but may still NOT work and may produce execution errors for codes relying on OpenMP or OpenACC pragmas for moving data from/to host to/from GPU and attempting to use GPU-to-GPU enabled MPI communication. For those cases, the use of the "manual" optimal binding (method 2) is required. (Method 2 is explained later in the main document.)
- (*1) This is the only case where srun may work fine with default inherited option values. Nevertheless, it is good practice to always provide the full explicit srun options to indicate the resources needed by the executable. In this case, the settings explicitly "reserve" a whole chiplet (-c 8) for the srun task and control the real number of threads with the OMP_NUM_THREADS environment variable. Although the use of --gres=gpu, --gpus-per-task and --gpu-bind is redundant in this case, we keep them to encourage their use, which is strictly needed in most cases (except case *5).
- (*2) The required number of CPU threads per task is 14 and is controlled with the OMP_NUM_THREADS environment variable, but two full chiplets (-c 16) are still indicated for the srun task.
- (*3) The settings explicitly "reserve" a whole chiplet (-c 8) for each srun task. This provides a "one-chiplet-long" separation among the CPU cores allocated to the tasks spawned by srun (-n 3). The real number of threads is controlled with the OMP_NUM_THREADS variable. The requirement of optimal binding of each GPU to its corresponding chiplet is indicated with the option --gpu-bind=closest. And, in order to allow GPU-aware MPI communication, the environment variable MPICH_GPU_SUPPORT_ENABLED is set to 1.
- (*4) Each task needs to be in direct communication with 2 GCDs. For that, each CPU task "reserves" two full chiplets. IMPORTANT: The use of -c 16 "reserves" a "two-chiplets-long" separation between the two CPU cores that are to be used (one for each of the srun tasks, -n 2). In this way, each task will be in direct communication with the two logical GPUs in the MI250X card that has optimal connection to the chiplets reserved for that task. The real number of threads is controlled with the OMP_NUM_THREADS variable. The requirement of optimal binding is indicated with the option --gpu-bind=closest. And, in order to allow GPU-aware MPI communication, the environment variable MPICH_GPU_SUPPORT_ENABLED is set to 1.
- (*5) Sometimes the executable (and not the scheduler) performs all the management of the requested GPUs, as in the case of TensorFlow distributed training and other machine learning applications. If all the management logic for the GPUs is performed by the executable, then all the available resources should be exposed to it. IMPORTANT: In this case, the --gpu-bind option for optimal binding should not be provided. Neither should the --gpus-per-task option be provided, as all the allocated GPUs are to be visible to all tasks. The real number of threads is controlled with the OMP_NUM_THREADS variable. And, in order to allow GPU-aware MPI communication, the environment variable MPICH_GPU_SUPPORT_ENABLED is set to 1. (These last two settings may not be necessary for applications like TensorFlow.)
- (*6) All GPUs in the node are requested, which means all the resources available in the node, via the --exclusive allocation option (there is no need to indicate the number of GPUs per node when using exclusive allocation). The use of -c 8 provides a "one-chiplet-long" separation among the CPU cores allocated to the tasks spawned by srun (-n 8). The real number of threads is controlled with the OMP_NUM_THREADS variable. The requirement of optimal binding is indicated with the option --gpu-bind=closest. And, in order to allow GPU-aware MPI communication, the environment variable MPICH_GPU_SUPPORT_ENABLED is set to 1.
- (*7) All GPU resources in each node are requested, which means all the resources available in the nodes, via the --exclusive allocation option (there is no need to indicate the number of GPUs per node when using exclusive allocation). Each task needs to be in direct communication with 4 GCDs. For that, each CPU task "reserves" four full chiplets. IMPORTANT: The use of -c 32 "reserves" a "four-chiplets-long" separation among the CPU cores that are to be used per node (8 srun tasks in total, -n 8). In this way, each task will be in direct communication with the closest four logical GPUs in the node with respect to the chiplets reserved for that task. The real number of threads is controlled with the OMP_NUM_THREADS variable. The requirement of optimal binding is indicated with the option --gpu-bind=closest. And, in order to allow GPU-aware MPI communication, the environment variable MPICH_GPU_SUPPORT_ENABLED is set to 1. The --gres=gpu:8 option assigns 8 GPUs per node to the srun step (32 GPUs in total, as 4 nodes are assigned).
- (*8) All GPUs in the node are requested using the --exclusive option, but only 1 CPU chiplet - 1 GPU "unit" (or allocation-pack) is used in the srun step.
General notes:
- The allocation charge is for the total of allocated resources and not only for the ones explicitly used in the execution, so all idle resources will also be charged.
|
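For instance, to translate a CPU-core requirement into the number of allocation-packs to request (a rough sketch only; the numbers are hypothetical), a simple ceiling division can be used:
CORES_NEEDED=20                       #hypothetical requirement of the job
PACKS=$(( (CORES_NEEDED + 7) / 8 ))   #ceil(cores/8): each allocation-pack provides 1 chiplet (8 cores)
echo "Request --gres=gpu:${PACKS}"    #here: 3 allocation-packs (24 cores, ~88 GB RAM, 3 GCDs)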
...
Note that the examples above are just for quick reference and that they do not show the use of the 2nd method for optimal binding (which may be the only way to achieve optimal binding for some applications). So, the rest of this page will describe in detail both methods of optimal binding and also show full job script examples for their use on Setonix GPU nodes.
Methods to achieve optimal binding of GCDs/GPUs
As mentioned above, and as the node diagram at the top of the page suggests, the optimal placement of GCDs and CPU cores for each task is to have direct communication between the CPU chiplet and the GCD in use. So, according to the node diagram, tasks being executed on cores in Chiplet 0 should be using GPU 4 (Bus D1), tasks on Chiplet 1 should be using GPU 5 (Bus D6), and so on.
...
Note |
---|
title | Use this for GPU-aware MPI codes |
---|
|
To use GPU-aware Cray MPICH, users must load the following modules and set the following environment variable: module load craype-accel-amd-gfx90a
module load rocm/<VERSION>
export MPICH_GPU_SUPPORT_ENABLED=1
|
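For reference, a minimal fragment of a batch script combining these settings with a GPU-aware MPI job step (a sketch only, following the pattern of row *3 of the table above; the executable name is hypothetical) could be:
module load craype-accel-amd-gfx90a
module load rocm/<VERSION>                 #replace <VERSION> with an available ROCm version
export MPICH_GPU_SUPPORT_ENABLED=1         #allow MPI to operate on GPU-resident buffers
export OMP_NUM_THREADS=1
srun -N 1 -n 3 -c 8 --gres=gpu:3 --gpus-per-task=1 --gpu-bind=closest ./my_gpu_aware_mpi_code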
...
Column |
---|
|
Code Block |
---|
language | bash |
---|
theme | DJango |
---|
title | Terminal N. Explaining the use of the "hello_jobstep" code from an salloc session (compiling) |
---|
| $ cd $MYSCRATCH
$ git clone https://github.com/PawseySC/hello_jobstep.git
Cloning into 'hello_jobstep'...
...
Resolving deltas: 100% (41/41), done.
$ cd hello_jobstep
$ module load PrgEnv-cray craype-accel-amd-gfx90a rocm/<VERSION>
$ make hello_jobstep
CC -std=c++11 -fopenmp --rocm-path=/opt/rocm -x hip -D__HIP_ARCH_GFX90A__=1 --offload-arch=gfx90a -I/opt/rocm/include -c hello_jobstep.cpp
CC -fopenmp --rocm-path=/opt/rocm -L/opt/rocm/lib -lamdhip64 hello_jobstep.o -o hello_jobstep
|
|
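The terminals in this section assume an interactive session on a GPU node was obtained first. A request consistent with the rules above (3 allocation-packs here, to match the listings; project name and walltime are placeholders) might look like:
salloc -p gpu --nodes=1 --gres=gpu:3 --time=01:00:00 --account=<yourProject>-gpu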
...
Column |
---|
|
Code Block |
---|
language | bash |
---|
theme | DJango |
---|
title | Terminal N. Explaining the use of the "hello_jobstep" code from an salloc session (list allocated GPUs) |
---|
| $ rocm-smi --showhw
======================= ROCm System Management Interface =======================
============================ Concise Hardware Info =============================
GPU DID GFX RAS SDMA RAS UMC RAS VBIOS BUS
0 7408 DISABLED ENABLED DISABLED 113-D65201-042 0000:C9:00.0
1 7408 DISABLED ENABLED DISABLED 113-D65201-042 0000:D1:00.0
2 7408 DISABLED ENABLED DISABLED 113-D65201-042 0000:D6:00.0
================================================================================
============================= End of ROCm SMI Log ============================== |
|
Using hello_jobstep
code for testing a non-recommended practice
In a first test, we observe what happens when no "management" parameters are given to srun
. So, in this "non-recommended" setting, the output is:
Column |
---|
|
Code Block |
---|
language | bash |
---|
theme | DJango |
---|
title | Terminal N. Explaining the use of the "hello_jobstep" code from an salloc session ("not recommended" use without full srun parameters) |
---|
| $ export OMP_NUM_THREADS=1; srun -N 1 -n 3 ./hello_jobstep | sort -n
MPI 000 - OMP 000 - HWT 000 - Node nid001004 - RunTime_GPU_ID 0,1,2 - ROCR_VISIBLE_GPU_ID 0,1,2 - GPU_Bus_ID c9,d1,d6
MPI 001 - OMP 000 - HWT 001 - Node nid001004 - RunTime_GPU_ID 0,1,2 - ROCR_VISIBLE_GPU_ID 0,1,2 - GPU_Bus_ID c9,d1,d6
MPI 002 - OMP 000 - HWT 002 - Node nid001004 - RunTime_GPU_ID 0,1,2 - ROCR_VISIBLE_GPU_ID 0,1,2 - GPU_Bus_ID c9,d1,d6 |
|
As can be seen, each MPI task can be assigned to a CPU core in the same chiplet by the scheduler, which is not a recommended practice. Also, all three GCDs (logical/Slurm GPUs) that have been allocated are visible to each of the tasks. Although some codes are able to deal with this kind of resource exposure, this is not the recommended best practice. The recommended best practice is to assign CPU tasks to different chiplets, to provide only 1 GCD per task and, moreover, to provide the optimal bandwidth between the CPU and the GCD.
Using hello_jobstep
code for testing optimal
...
binding for a pure MPI job (single threaded), 1 GPU per task
Starting from the same allocation as above (3 "allocation-packs"), now all the parameters needed to define the correct use of resources are provided to srun
. In this case, 3 MPI tasks are to be run (single threaded), each task making use of 1 GCD (logical/Slurm GPU). As described above, there are two methods to achieve optimal binding. The first method only uses Slurm parameters to indicate how resources are to be used by srun
. In this case:
...
Again, there is a difference in the values of ROCR_VISIBLE_GPU_ID in the results of both methods. With the first method, these values are always 0 while, with the second method, these values are the ones given by the wrapper that "manually" selects the GCDs (logical/Slurm GPUs). This difference has proven to be important and may be the reason why the "manual" binding is the only option for codes relying on OpenMP or OpenACC pragmas for moving data from/to host to/from GPU and attempting to use GPU-to-GPU enabled MPI communication.
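A quick way to inspect what each task actually sees, without compiling the hello_jobstep code, is to print the relevant Slurm and ROCm environment variables from each task (a sketch only; it assumes the same 3 allocation-packs as above):
export OMP_NUM_THREADS=1
srun -N 1 -n 3 -c 8 --gres=gpu:3 --gpus-per-task=1 --gpu-bind=closest \
     bash -c 'echo "task ${SLURM_LOCALID}: ROCR_VISIBLE_DEVICES=${ROCR_VISIBLE_DEVICES}"'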
Example scripts for: Exclusive access to the GPU nodes
In this section, a series of example Slurm job scripts is presented so that users can adopt them as a starting point for preparing their own scripts. The examples presented here make use of most of the important concepts, tools and techniques explained in the previous section, so we encourage users to review that section of this page first.
Exclusive Node Multi-GPU job: 8 GCDs (logical/Slurm GPUs), each of them controlled by one MPI task
As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. This example considers a job that will make use of the 8 GCDs (logical/Slurm GPUs) on 1 node (8 "allocation-packs"). The resource request uses the following two parameters:
#SBATCH --nodes=1 #1 node in this example
#SBATCH --exclusive #All resources of the node are exclusive to this job
# #8 GPUs per node (8 "allocation-packs" in total for the job)
Note that only these two allocation parameters are needed to provide the information for the requested number of allocation-packs, and no other parameter related to memory or CPU cores should be provided in the request header.
The use/management of the allocated resources is controlled by the srun
options and some environmental variables. As mentioned above, there are two methods for achieving optimal binding. The method that uses only srun
parameters is preferred (method 1), but may not always work and, in that case, the "manual" method (method 2) may be needed. The two scripts for the different methods for optimal binding are in the following tabs:
...
title | A. Method 1: Optimal binding using srun parameters |
---|
For optimal binding using srun
parameters the options "--gpus-per-task
" & "--gpu-bind=closest
" need to be used:
...
Now, let's take a look at the output after executing the script:
...
The output of the hello_jobstep
code tells us that the job ran on node nid001000
and that 8 MPI tasks were spawned. Each of the MPI tasks has only 1 CPU-core assigned to it (with the use of the OMP_NUM_THREADS
environment variable in the script) and can be identified with the HWT
number. Also, each of the MPI tasks has only 1 visible GCD (logical/Slurm GPU). The hardware identification of the GCD is done via the Bus_ID (as the other GPU_IDs are not physical but relative to the job).
After checking the architecture diagram at the top of this page, it can be clearly seen that each of the assigned CPU-cores for the job is on a different L3 cache group chiplet (slurm-socket). But more importantly, it can be seen that the assigned GCD (logical GPU) to each of the MPI tasks is the GPU that is directly connected to that chiplet, so that binding is optimal:
- CPU core "
001
" is on chiplet:0
and directly connected to GPU with Bus_ID:D1
- CPU core "
008
" is on chiplet:1
and directly connected to GPU with Bus_ID:D6
- CPU core "
016
" is on chiplet:2
and directly connected to GPU with Bus_ID:C9
- CPU core "
024
" is on chiplet:3
and directly connected to GPU with Bus_ID:CE
- CPU core "
032
" is on chiplet:4
and directly connected to GPU with Bus_ID:D9
- CPU core "
040
" is on chiplet:5
and directly connected to GPU with Bus_ID:DE
- CPU core "
048
" is on chiplet:6
and directly connected to GPU with Bus_ID:C1
- CPU core "
056
" is on chiplet:7
and directly connected to GPU with Bus_ID:C6
According to the architecture diagram, this binding configuration is optimal.
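For quick reference, the key execution lines of such a Method 1 script (consistent with row *6 of the table above; the executable name is a placeholder) are roughly:
export MPICH_GPU_SUPPORT_ENABLED=1   #only needed if the code uses GPU-aware MPI
export OMP_NUM_THREADS=1
srun -N 1 -n 8 -c 8 --gres=gpu:8 --gpus-per-task=1 --gpu-bind=closest ./the_executable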
...
This first method is simpler, but may not work for all codes. "Manual" binding (method 2) may be the only reliable method for codes relying on OpenMP or OpenACC pragmas for moving data from/to host to/from GPU and attempting to use GPU-to-GPU enabled MPI communication.
"Click" in the TAB above to read the script and output for the other method of GPU binding.
...
title | A. Method 2: "Manual" optimal binding of GPUs and chiplets |
---|
For "manual" binding, two auxiliary techniques need to be performed: 1) use of a wrapper that selects the correct GCD (logical/Slurm GPU) and 2) generate an ordered list to be used in the --cpu-bind
option of srun
:
...
Note that the wrapper for selecting the GCDs (logical GPUs) is created with a redirection to the cat command. Also note that its name uses the SLURM_JOBID
environment variable to make this wrapper unique to this job, and that the wrapper is deleted when execution is finalised.
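As a rough sketch of these two auxiliary techniques (the full example listings on this page are the reference; the core list below simply picks the first core of each chiplet following the node diagram, and the executable name is a placeholder):
#1) Wrapper: each task only sees the GCD whose index matches its local task ID
cat << EOF > selectGPU_${SLURM_JOBID}.sh
#!/bin/bash
export ROCR_VISIBLE_DEVICES=\$SLURM_LOCALID
exec \$*
EOF
chmod +x ./selectGPU_${SLURM_JOBID}.sh
#2) Ordered core list: task i is placed on the chiplet directly connected to GCD i
#   (GCD order by Bus_ID: C1,C6,C9,CE,D1,D6,D9,DE -> chiplets 6,7,2,3,0,1,4,5)
CPU_BIND="map_cpu:48,56,16,24,0,8,32,40"
srun -N 1 -n 8 --gres=gpu:8 --cpu-bind=${CPU_BIND} ./selectGPU_${SLURM_JOBID}.sh ./the_executable
rm -f ./selectGPU_${SLURM_JOBID}.sh   #the wrapper is unique to this job and removed at the end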
Now, let's take a look at the output after executing the script:
...
The output of the hello_jobstep
code tells us that the job ran on node nid001000
and that 8 MPI tasks were spawned. Each of the MPI tasks has only 1 CPU-core assigned to it (with the use of the OMP_NUM_THREADS
environment variable in the script) and can be identified with the HWT
number. Also, each of the MPI tasks has only 1 visible GCD (logical/Slurm GPU). The hardware identification is done via the Bus_ID (as the other GPU_IDs are not physical but relative to the job).
After checking the architecture diagram at the top of this page, it can be clearly seen that each of the assigned CPU-cores for the job is on a different L3 cache group chiplet (slurm-socket). But more importantly, it can be seen that affinity is optimal:
...
Using hello_jobstep
code for testing visibility of all allocated GPUs to each of the tasks
Some codes, like TensorFlow and other machine learning engines, require visibility of all GPU resources for internal-to-the-code management of resources. In that case, optimal binding cannot be provided by the scheduler, and the responsibility for optimal binding and communication among the resources falls completely on the code. In that case, the recommended settings for the srun
command are:
Column |
---|
|
Code Block |
---|
language | bash |
---|
theme | DJango |
---|
title | Terminal N. Explaining the use of the "hello_jobstep" code from an salloc session (all allocated GPUs visible to each task) |
---|
| $ export OMP_NUM_THREADS=1; srun -N 1 -n 3 -c 8 --gres=gpu:3 ./hello_jobstep | sort -n
MPI 000 - OMP 000 - HWT 000 - Node nid001004 - RunTime_GPU_ID 0,1,2 - ROCR_VISIBLE_GPU_ID 0,1,2 - GPU_Bus_ID c9,d1,d6
MPI 001 - OMP 000 - HWT 008 - Node nid001004 - RunTime_GPU_ID 0,1,2 - ROCR_VISIBLE_GPU_ID 0,1,2 - GPU_Bus_ID c9,d1,d6
MPI 002 - OMP 000 - HWT 016 - Node nid001004 - RunTime_GPU_ID 0,1,2 - ROCR_VISIBLE_GPU_ID 0,1,2 - GPU_Bus_ID c9,d1,d6 |
|
As can be seen, each MPI task is assigned to a different chiplet. Also, all three GCDs (logical/Slurm GPUs) that have been allocated are visible to each of the tasks which, for these codes, is what they need to run properly.
Example scripts for: Exclusive access to the GPU nodes with optimal binding
In this section, a series of example Slurm job scripts is presented so that users can adopt them as a starting point for preparing their own scripts. The examples presented here make use of most of the important concepts, tools and techniques explained in the previous section, so we encourage users to review that section of this page first.
Exclusive Node Multi-GPU job: 8 GCDs (logical/Slurm GPUs), each of them controlled by one MPI task
As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. This example considers a job that will make use of the 8 GCDs (logical/Slurm GPUs) on 1 node (8 "allocation-packs"). The resource request uses the following two parameters:
#SBATCH --nodes=1 #1 node in this example
#SBATCH --exclusive #All resources of the node are exclusive to this job
# #8 GPUs per node (8 "allocation-packs" in total for the job)
Note that only these two allocation parameters are needed to provide the information for the requested number of allocation-packs, and no other parameter related to memory or CPU cores should be provided in the request header.
The use/management of the allocated resources is controlled by the srun
options and some environmental variables. As mentioned above, there are two methods for achieving optimal binding. The method that uses only srun
parameters is preferred (method 1), but may not always work and, in that case, the "manual" method (method 2) may be needed. The two scripts for the different methods for optimal binding are in the following tabs:
Ui tabs |
---|
Ui tab |
---|
title | A. Method 1: Optimal binding using srun parameters |
---|
| For optimal binding using srun parameters, the options "--gpus-per-task" and "--gpu-bind=closest" need to be used:
Listing N. exampleScript_1NodeExclusive_8GPUs_bindMethod1.sh
Now, let's take a look at the output after executing the script:
Terminal N. Output for 8 GPUs job exclusive access
The output of the hello_jobstep code tells us that the job ran on node nid001000 and that 8 MPI tasks were spawned. Each of the MPI tasks has only 1 CPU-core assigned to it (with the use of the OMP_NUM_THREADS environment variable in the script) and can be identified with the HWT number. Also, each of the MPI tasks has only 1 visible GCD (logical/Slurm GPU). The hardware identification of the GCD is done via the Bus_ID (as the other GPU_IDs are not physical but relative to the job). After checking the architecture diagram at the top of this page, it can be clearly seen that each of the assigned CPU-cores for the job is on a different L3 cache group chiplet (slurm-socket). But more importantly, it can be seen that the GCD (logical GPU) assigned to each of the MPI tasks is the GPU directly connected to that chiplet, so that binding is optimal:
- CPU core "001" is on chiplet:0 and directly connected to GPU with Bus_ID:D1
- CPU core "008" is on chiplet:1 and directly connected to GPU with Bus_ID:D6
- CPU core "016" is on chiplet:2 and directly connected to GPU with Bus_ID:C9
- CPU core "024" is on chiplet:3 and directly connected to GPU with Bus_ID:CE
- CPU core "032" is on chiplet:4 and directly connected to GPU with Bus_ID:D9
- CPU core "040" is on chiplet:5 and directly connected to GPU with Bus_ID:DE
- CPU core "048" is on chiplet:6 and directly connected to GPU with Bus_ID:C1
- CPU core "056" is on chiplet:7 and directly connected to GPU with Bus_ID:C6
According to the architecture diagram, this binding configuration is optimal. Method 1 may fail for some applications. This first method is simpler, but may not work for all codes. "Manual" binding (method 2) may be the only reliable method for codes relying on OpenMP or OpenACC pragmas for moving data from/to host to/from GPU and attempting to use GPU-to-GPU enabled MPI communication. Click on the tab above to read the script and output for the other method of GPU binding. |
|
...
Ui tab |
---|
title | A. Method 2: "Manual" optimal binding of GPUs and chiplets |
---|
| For "manual" binding, two auxiliary techniques need to be performed: 1) use of a wrapper that selects the correct GCD (logical/Slurm |
|
...
As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. The same procedure mentioned above for the single exclusive node job should be applied for multi-node exclusive jobs. The only difference when requesting resources is the number of exclusive nodes requested. So, for example, for a job requiring 2 exclusive nodes (16 GCDs (logical/Slurm GPUs) or 16 "allocation-packs") the resource request uses the following two parameters:
#SBATCH --nodes=2 #2 nodes in this example
#SBATCH --exclusive #All resources of the node are exclusive to this job
# #8 GPUs per node (16 "allocation-packs" in total for the job)
Note that only these two allocation parameters are needed to provide the information for the requested number of allocation-packs, and no other parameter related to memory or CPU cores should be provided in the request header.
The use/management of the allocated resources is controlled by the srun
options and some environmental variables. As mentioned above, there are two methods for achieving optimal binding. The method that uses only srun
parameters is preferred (method 1), but may not always work and, in that case, the "manual" method (method 2) may be needed. The two scripts for the different methods for optimal binding are in the following tabs:
Ui tabs |
---|
Ui tab |
---|
title | B. Method 1: Optimal binding using srun parameters |
---|
| For optimal binding using srun parameters, the options "--gpus-per-task" and "--gpu-bind=closest" need to be used:
Listing N. exampleScript_2NodesExclusive_16GPUs_bindMethod1.sh
Now, let's take a look at the output after executing the script:
Terminal N. Output for 16 GPUs job (2 nodes) exclusive access
According to the architecture diagram, this binding configuration is optimal. Method 1 may fail for some applications. This first method is simpler, but may not work for all codes. "Manual" binding (method 2) may be the only reliable method for codes relying on OpenMP or OpenACC pragmas for moving data from/to host to/from GPU and attempting to use GPU-to-GPU enabled MPI communication.
For "manual" binding, two auxiliary techniques need to be performed: 1) use of a wrapper that selects the correct GCD (logical/Slurm GPU) and 2) generation of an ordered list to be used in the --cpu-bind option of srun:
Listing N. exampleScript_1NodeExclusive_8GPUs_bindMethod2.sh
Note that the wrapper for selecting the GCDs (logical GPUs) is created with a redirection to the cat command. Also note that its name uses the SLURM_JOBID environment variable to make the wrapper unique to this job, and that the wrapper is deleted when execution is finalised. Now, let's take a look at the output after executing the script:
Terminal N. Output for 8 GPUs job exclusive access
The output of the hello_jobstep code tells us that the job ran on node nid001000 and that 8 MPI tasks were spawned. Each of the MPI tasks has only 1 CPU-core assigned to it (with the use of the OMP_NUM_THREADS environment variable in the script) and can be identified with the HWT number. Also, each of the MPI tasks has only 1 visible GCD (logical/Slurm GPU). The hardware identification is done via the Bus_ID (as the other GPU_IDs are not physical but relative to the job). After checking the architecture diagram at the top of this page, it can be clearly seen that each of the assigned CPU-cores for the job is on a different L3 cache group chiplet (slurm-socket). But more importantly, it can be seen that affinity is optimal:
- CPU core "054" is on chiplet:6 and directly connected to GPU with Bus_ID:C1
- CPU core "063" is on chiplet:7 and directly connected to GPU with Bus_ID:C6
- CPU core "018" is on chiplet:2 and directly connected to GPU with Bus_ID:C9
- CPU core "026" is on chiplet:3 and directly connected to GPU with Bus_ID:CE
- CPU core "006" is on chiplet:0 and directly connected to GPU with Bus_ID:D1
- CPU core "013" is on chiplet:1 and directly connected to GPU with Bus_ID:D6
- CPU core "033" is on chiplet:4 and directly connected to GPU with Bus_ID:D9
- CPU core "047" is on chiplet:5 and directly connected to GPU with Bus_ID:DE
"Click" in the TAB above to read the script and output for the other method of GPU binding. | Ui tab |
---|
title | B. Method 2: "Manual" optimal binding of GPUs and chiplets |
---|
| For "manual" binding, two auxiliary techniques need to be performed: 1) use of a wrapper and 2) generate an ordered list to be used in the --cpu-bind option of srun : 900pxbashEmacsListing N. exampleScript_2NodesExclusive_16GPUs_bindMethod2.shtrueNote that the wrapper for selecting the GPUs is being created with a redirection to the cat command. Also node that its name uses the SLURM_JOBID environment variable to make this wrapper unique to this job, and that the wrapper is deleted when execution is finalised. Now, let's take a look to the output after executing the script: 900pxbashDJangoTerminal N. Output for 16 GPUs job (2 nodes) exclusive accessAccording to the architecture diagram, this binding configuration is optimal. "Click" in the TAB above to read the script and output for the other method of GPU binding.
|
Example scripts for: Shared access to the GPU nodes
Shared node 1 GPU job
Jobs that need only 1 GCD (logical/Slurm GPU) for their execution are going to be sharing the GPU node with other jobs. That is, they will run in shared access, which is the default so no request for exclusive access is performed.
As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. In this case we ask for 1 allocation-pack with:
#SBATCH --nodes=1 #1 node in this example
#SBATCH --gres=gpu:1 #1 GPU per node (1 "allocation-pack" in total for the job)
...
N Exclusive Nodes Multi-GPU job: 8*N GCDs (logical/Slurm GPUs), each of them controlled by one MPI task
As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. The same procedure mentioned above for the single exclusive node job should be applied for multi-node exclusive jobs. The only difference when requesting resources is the number of exclusive nodes requested. So, for example, for a job requiring 2 exclusive nodes (16 GCDs (logical/Slurm GPUs) or 16 "allocation-packs") the resource request uses the following two parameters:
#SBATCH --nodes=2 #2 nodes in this example
#SBATCH --exclusive #All resources of the node are exclusive to this job
# #8 GPUs per node (16 "allocation-packs" in total for the job)
Note that only these two allocation parameters are needed to provide the information for the requested number of allocation-packs, and no other parameter related to memory or CPU cores should be provided in the request header.
The use/management of the allocated resources is controlled by the srun options and some environmental variables.
...
Code Block |
---|
language | bash |
---|
theme | Emacs |
---|
title | Listing N. exampleScript_1NodeShared_1GPU.sh |
---|
linenumbers | true |
---|
|
#!/bin/bash --login
#SBATCH --job-name=1GPUSharedNode
#SBATCH --partition=gpu
#SBATCH --nodes=1 #1 node in this example
#SBATCH --gres=gpu:1 #1 GPU per node (1 "allocation-pack" in total for the job)
#SBATCH --time=00:05:00
#SBATCH --account=<yourProject>-gpu #IMPORTANT: use your own project and the -gpu suffix
#(Note that there is no request for exclusive access to the node)
#----
#Loading needed modules (adapt this for your own purposes):
module load PrgEnv-cray
module load rocm craype-accel-amd-gfx90a
echo -e "\n\n#------------------------#"
module list
#----
#Printing the status of the given allocation
echo -e "\n\n#------------------------#"
echo "Printing from scontrol:"
scontrol show job ${SLURM_JOBID}
#----
#Definition of the executable (we assume the example code has been compiled and is available in $MYSCRATCH):
exeDir=$MYSCRATCH/hello_jobstep
exeName=hello_jobstep
theExe=$exeDir/$exeName
#----
#MPI & OpenMP settings
#Not needed for 1GPU:export MPICH_GPU_SUPPORT_ENABLED=1 #This allows for GPU-aware MPI communication among GPUs
export OMP_NUM_THREADS=1 #This controls the real CPU-cores per task for the executable
#----
#Execution
#Note: srun needs the explicit indication of the full parameters for the use of resources in the job step.
# These are independent from the allocation parameters (which are not inherited by srun)
# For optimal GPU binding using slurm options,
# "--gpus-per-task=1" and "--gpu-bind=closest" create the optimal binding of GPUs
# (Although in this case this can be avoided as only 1 "allocation-pack" has been requested)
echo -e "\n\n#------------------------#"
echo "Test code execution:"
srun -l -u -N 1 -n 1 -c 8 --gres=gpu:1 ${theExe} | sort -n
#----
#Printing information of finished job steps:
echo -e "\n\n#------------------------#"
echo "Printing information of finished jobs steps using sacct:"
sacct -j ${SLURM_JOBID} -o jobid%20,Start%20,elapsed%20
#----
#Done
echo -e "\n\n#------------------------#"
echo "Done" |
And the output after executing this example is:
...
...
language | bash |
---|
theme | DJango |
---|
title | Terminal N. Output for a 1 GPU job (using only 1 allocation-pack in a shared node) |
---|
...
As mentioned above, there are two methods for achieving optimal binding. The method that uses only srun
parameters is preferred (method 1), but may not always work and, in that case, the "manual" method (method 2) may be needed. The two scripts for the different methods for optimal binding are in the following tabs:
Ui tabs |
---|
Ui tab |
---|
title | B. Method 1: Optimal binding using srun parameters |
---|
| For optimal binding using srun parameters, the options "--gpus-per-task" and "--gpu-bind=closest" need to be used:
Listing N. exampleScript_2NodesExclusive_16GPUs_bindMethod1.sh
Now, let's take a look at the output after executing the script:
Terminal N. Output for 16 GPUs job (2 nodes) exclusive access
According to the architecture diagram, this binding configuration is optimal. Method 1 may fail for some applications. This first method is simpler, but may not work for all codes. "Manual" binding (method 2) may be the only reliable method for codes relying on OpenMP or OpenACC pragmas for moving data from/to host to/from GPU and attempting to use GPU-to-GPU enabled MPI communication. Click on the tab above to read the script and output for the other method of GPU binding. |
Ui tab |
---|
title | B. Method 2: "Manual" optimal binding of GPUs and chiplets |
---|
| For "manual" binding, two auxiliary techniques need to be performed: 1) use of a wrapper and 2) generate an ordered list to be used in the --cpu-bind option of srun : 900px
bashEmacsListing N. exampleScript_2NodesExclusive_16GPUs_bindMethod2.shtrue
Note that the wrapper for selecting the GPUs is being created with a redirection to the cat command. Also node that its name uses the SLURM_JOBID environment variable to make this wrapper unique to this job, and that the wrapper is deleted when execution is finalised. Now, let's take a look to the output after executing the script: 900px
bashDJangoTerminal N. Output for 16 GPUs job (2 nodes) exclusive access
According to the architecture diagram, this binding configuration is optimal. "Click" in the TAB above to read the script and output for the other method of GPU binding. |
|
Example scripts for: Shared access to the GPU nodes with optimal binding
Shared node 1 GPU job
Jobs that need only 1 GCD (logical/Slurm GPU) for their execution are going to be sharing the GPU node with other jobs. That is, they will run in shared access, which is the default so no request for exclusive access is performed.
As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. In this case we ask for 1 allocation-pack with:
#SBATCH --nodes=1 #1 node in this example
#SBATCH --gres=gpu:1 #1 GPU per node (1 "allocation-pack" in total for the job)
Note that only these two allocation parameters are needed to provide the information for the requested number of allocation-packs, and no other parameter related to memory or CPU cores should be provided in the request header.
The use/management of the allocated resources is controlled by the srun
options and some environmental variables. As only 1 allocation-pack is requested, there is no need to take any other action for optimal binding of CPU chiplet and GPU as it is guaranteed:
Column |
---|
|
Code Block |
---|
language | bash |
---|
theme | Emacs |
---|
title | Listing N. exampleScript_1NodeShared_1GPU.sh |
---|
linenumbers | true |
---|
| #!/bin/bash --login
#SBATCH --job-name=1GPUSharedNode
#SBATCH --partition=gpu
#SBATCH --nodes=1 #1 node in this example
#SBATCH --gres=gpu:1 #1 GPU per node (1 "allocation-pack" in total for the job)
#SBATCH --time=00:05:00
#SBATCH --account=<yourProject>-gpu #IMPORTANT: use your own project and the -gpu suffix
#(Note that there is no request for exclusive access to the node)
#----
#Loading needed modules (adapt this for your own purposes):
module load PrgEnv-cray
module load rocm/<VERSION> craype-accel-amd-gfx90a
echo -e "\n\n#------------------------#"
module list
#----
#Printing the status of the given allocation
echo -e "\n\n#------------------------#"
echo "Printing from scontrol:"
scontrol show job ${SLURM_JOBID}
#----
#Definition of the executable (we assume the example code has been compiled and is available in $MYSCRATCH):
exeDir=$MYSCRATCH/hello_jobstep
exeName=hello_jobstep
theExe=$exeDir/$exeName
#----
#MPI & OpenMP settings
#Not needed for 1GPU:export MPICH_GPU_SUPPORT_ENABLED=1 #This allows for GPU-aware MPI communication among GPUs
export OMP_NUM_THREADS=1 #This controls the real CPU-cores per task for the executable
#----
#Execution
#Note: srun needs the explicit indication of the full parameters for the use of resources in the job step.
# These are independent from the allocation parameters (which are not inherited by srun)
# For optimal GPU binding using slurm options,
# "--gpus-per-task=1" and "--gpu-bind=closest" create the optimal binding of GPUs
# (Although in this case this can be avoided as only 1 "allocation-pack" has been requested)
# "-c 8" is used to force allocation of 1 task per CPU chiplet. Then, the REAL number of threads
# for the code SHOULD be defined by the environment variables above.
# (The "-l" option is for displaying, at the beginning of each line, the taskID that generates the output.)
# (The "-u" option is for unbuffered output, so that output is displayed as soon as it's generated.)
# (If the output needs to be sorted for clarity, then add "| sort -n" at the end of the command.)
echo -e "\n\n#------------------------#"
echo "Test code execution:"
srun -l -u -N 1 -n 1 -c 8 --gres=gpu:1 --gpus-per-task=1 --gpu-bind=closest ${theExe}
#----
#Printing information of finished job steps:
echo -e "\n\n#------------------------#"
echo "Printing information of finished jobs steps using sacct:"
sacct -j ${SLURM_JOBID} -o jobid%20,Start%20,elapsed%20
#----
#Done
echo -e "\n\n#------------------------#"
echo "Done" |
|
And the output after executing this example is:
Column |
---|
|
Code Block |
---|
language | bash |
---|
theme | DJango |
---|
title | Terminal N. Output for a 1 GPU job (using only 1 allocation-pack in a shared node) |
---|
| $ sbatch exampleScript_1NodeShared_1GPU.sh
Submitted batch job 323098
$ cat slurm-323098.out
...
#------------------------#
Test code execution:
0: MPI 000 - OMP 000 - HWT 002 - Node nid001004 - RunTime_GPU_ID 0 - ROCR_VISIBLE_GPU_ID 0 - GPU_Bus_ID d1
...
#------------------------#
Done |
|
The output of the hello_jobstep
code tells us that the CPU-core "002
" and GPU with Bus_ID:D1
were utilised by the job. Optimal binding is guaranteed for a single "allocation-pack", as the memory, CPU chiplet and GPU within each pack are directly connected (optimal).
Shared node 3 MPI tasks each controlling 1 GCD (logical/Slurm GPU)
As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. In this case we ask for 3 allocation-packs with:
#SBATCH --nodes=1 #1 node in this example
#SBATCH --gres=gpu:3 #3 GPUs per node (3 "allocation-packs" in total for the job)
Note that only these two allocation parameters are needed to provide the information for the requested number of allocation-packs, and no other parameter related to memory or CPU cores should be provided in the request header.
The use/management of the allocated resources is controlled by the srun
options and some environmental variables. As mentioned above, there are two methods for achieving optimal binding. The method that uses only srun
parameters is preferred (method 1), but may not always work and, in that case, the "manual" method (method 2) may be needed. The two scripts for the different methods for optimal binding are in the following tabs:
Ui tabs |
---|
Ui tab |
---|
title | C. Method 1: Optimal binding using srun parameters |
---|
| For optimal binding using srun parameters, the options "--gpus-per-task" and "--gpu-bind=closest" need to be used:
Listing N. exampleScript_1NodeShared_3GPUs_bindMethod1.sh
Now, let's take a look at the output after executing the script:
Terminal N. Output for 3 GPUs job shared access. Method 1 for optimal binding.
The output of the hello_jobstep code tells us that the job ran on node nid001004 and that 3 MPI tasks were spawned. Each of the MPI tasks has only 1 CPU-core assigned to it (with the use of the OMP_NUM_THREADS environment variable in the script) and can be identified with the HWT number. Also, each of the MPI tasks has only 1 visible GCD (logical/Slurm GPU). The hardware identification of the GPU is done via the Bus_ID (as the other GPU_IDs are not physical but relative to the job). After checking the architecture diagram at the top of this page, it can be clearly seen that each of the assigned CPU-cores for the job is on a different L3 cache group chiplet (slurm-socket). But more importantly, it can be seen that the binding is optimal:
- CPU core "001" is on chiplet:0 and directly connected to GCD (logical GPU) with Bus_ID:D1
- CPU core "008" is on chiplet:1 and directly connected to GCD (logical GPU) with Bus_ID:D6
- CPU core "016" is on chiplet:2 and directly connected to GCD (logical GPU) with Bus_ID:C9
According to the architecture diagram, this binding configuration is optimal. Method 1 may fail for some applications. This first method is simpler, but may not work for all codes. "Manual" binding (method 2) may be the only reliable method for codes relying on OpenMP or OpenACC pragmas for moving data from/to host to/from GPU and attempting to use GPU-to-GPU enabled MPI communication. Click on the tab above to read the script and output for the other method of GPU binding. |
Ui tab |
---|
title | C. Method 2: "Manual" optimal binding of GPUs and chiplets |
---|
| For "manual" binding, two auxiliary techniques need to be performed: 1) use of a wrapper t and 2) generate an ordered list to be used in the --cpu-bind option of srun : 900px
bashEmacsListing N. exampleScript_1NodeShared_3GPUs_bindMethod2.shtrue
Note that the wrapper for selecting the GCDs (logical/Slurm GPUs) is being created with a redirection to the cat command. Also node that its name uses the SLURM_JOBID environment variable to make this wrapper unique to this job, and that the wrapper is deleted when execution is finalised. Now, let's take a look to the output after executing the script: 900px
bashDJangoTerminal N. Output for 3 GPUs job shared access. "Manual" method (method 2) for optimal binding.
The output of the hello_jobstep code tells us that the job ran on node nid001004 and that 3 MPI tasks were spawned. Each of the MPI tasks has only 1 CPU-core assigned to it (with the use of the OMP_NUM_THREADS environment variable in the script) and can be identified with the HWT number. Also, each of the MPI tasks has only 1 visible GCD (logical/Slurm GPU). The hardware identification of the GPU is done via the Bus_ID (as the other GPU_IDs are not physical but relative to the job). After checking the architecture diagram at the top of this page, it can be clearly seen that each of the assigned CPU-cores for the job is on a different L3 cache group chiplet (slurm-socket). But more importantly, it can be seen that the binding is optimal:
- CPU core "019" is on chiplet:2 and directly connected to GCD (logical GPU) with Bus_ID:C9
- CPU core "002" is on chiplet:0 and directly connected to GCD (logical GPU) with Bus_ID:D1
- CPU core "009" is on chiplet:1 and directly connected to GCD (logical GPU) with Bus_ID:D6
According to the architecture diagram, this binding configuration is optimal. Click on the tab above to read the script and output for the other method of GPU binding. |
|
Example scripts for: Hybrid jobs (multiple threads) on the CPU side
When the code is hybrid on the CPU side (MPI + OpenMP), the logic is similar to the above examples, except that more than 1 CPU core needs to be accessible per srun
task. This is controlled by the OMP_NUM_THREADS
environment variable and will also imply a change in the settings for the optimal binding of resources when the "manual" binding (method 2) is applied.
In the following example, we use 3 GCDs (logical/slurm GPUs) (1 per MPI task) and the number of CPU threads per task is 5. As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. In this case we ask for 3 allocation-packs with:
#SBATCH --nodes=1 #1 node in this example
#SBATCH --gres=gpu:3 #3 GPUs per node (3 "allocation-packs" in total for the job)
Note that only these two allocation parameters are needed to provide the information for the requested number of allocation-packs, and no other parameter related to memory or CPU cores should be provided in the request header. And the real number of threads per task is controlled with:
export OMP_NUM_THREADS=5 #This controls the real CPU-cores per task for the executable
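Putting the request together, the header of such a batch script looks as follows (a sketch assembled from the conventions of the other listings on this page; the job name is illustrative, and the walltime and project must be adapted):
Code Block |
---|
language | bash |
---|
theme | Emacs |
---|
title | Sketch only: request header for 3 "allocation-packs" (hybrid example) |
---|
| #!/bin/bash --login
#SBATCH --job-name=hybrid5CPU-3GPUs   #illustrative job name
#SBATCH --partition=gpu
#SBATCH --nodes=1              #1 node in this example
#SBATCH --gres=gpu:3           #3 GPUs per node (3 "allocation-packs" in total for the job)
#SBATCH --time=00:05:00
#SBATCH --account=<yourProject>-gpu   #IMPORTANT: use your own project and the -gpu suffix |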
The use/management of the allocated resources is controlled by the srun
options and some environmental variables. As mentioned above, there are two methods for achieving optimal binding. The method that uses only srun
parameters is preferred (method 1), but may not always work and, in that case, the "manual" method (method 2) may be needed. The two scripts for the different methods for optimal binding are in the following tabs:
Ui tabs |
---|
Ui tab |
---|
title | D. Method 1: Optimal binding using srun parameters |
---|
| For optimal binding using srun parameters, the options "--gpus-per-task " & "--gpu-bind=closest " need to be used:
Listing N. exampleScript_1NodeShared_Hybrid5CPU_3GPUs_bindMethod1.sh
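The listing itself is collapsed above. The key lines it relies on have the following shape (a sketch based on the srun conventions used elsewhere on this page, not the verbatim listing):
Code Block |
---|
language | bash |
---|
theme | Emacs |
---|
title | Sketch only: key srun settings for method 1 with 5 OpenMP threads per task |
---|
| #MPI & OpenMP settings:
export MPICH_GPU_SUPPORT_ENABLED=1   #This allows for GPU-aware MPI communication among GPUs
export OMP_NUM_THREADS=5             #This controls the real CPU-cores per task for the executable
#----
#"-c 8" reserves a whole chiplet per task, so the 5 OpenMP threads stay within one chiplet.
#"--gpus-per-task=1" & "--gpu-bind=closest" bind each task to the GCD directly connected to that chiplet.
srun -l -u -N 1 -n 3 -c 8 --gres=gpu:3 --gpus-per-task=1 --gpu-bind=closest ${theExe} |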
Now, let's take a look at the output after executing the script:
Terminal N. Output for hybrid job with 3 tasks each with 5 CPU threads and 1 GPU shared access. Method 1 for optimal binding.
The output of the hello_jobstep code tells us that the job ran on node nid001004 and that 3 MPI tasks were spawned. Each of the MPI tasks has 5 CPU cores assigned to it (as set by the OMP_NUM_THREADS environment variable in the script) and can be identified with the HWT number. Also, each of the threads has only 1 visible GCD (logical/Slurm GPU). The hardware identification of the GPU is done via the Bus_ID (as the other GPU_IDs are not physical but relative to the job). After checking the architecture diagram at the top of this page, it can be clearly seen that each of the assigned CPU-cores for the job is on a different L3 cache group chiplet (slurm-socket). But more importantly, it can be seen that the binding is optimal.
Method 1 may fail for some applications. This first method is simpler, but may not work for all codes. "Manual" binding (method 2) may be the only reliable method for codes that rely on OpenMP or OpenACC pragmas to move data between host and GPU and that attempt to use GPU-to-GPU enabled MPI communication.
Click on the tab above to read the script and output for the other method of GPU binding. |
Ui tab |
---|
title | D. Method 2: "Manual" optimal binding of GPUs and chiplets |
---|
|
Use mask_cpu for hybrid jobs on the CPU side instead of map_cpu
For hybrid jobs on the CPU side, use mask_cpu for the cpu-bind option and NOT map_cpu . Also, control the number of CPU threads per task with OMP_NUM_THREADS .
For "manual" binding, two auxiliary techniques need to be performed: 1) use of a wrapper and 2) generation of an ordered list to be used in the --cpu-bind option of srun . In this case, the list needs to be created using the mask_cpu parameter:
Listing N. exampleScript_1NodeShared_Hybrid5CPU_3GPUs_bindMethod2.sh
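Again, the listing is collapsed above. As an illustration only, the mask_cpu list could be built along the following lines; the hexadecimal masks below each cover one full chiplet, but the actual masks and their order in the listing must follow the chiplet<->GCD connectivity of the architecture diagram:
Code Block |
---|
language | bash |
---|
theme | Emacs |
---|
title | Sketch only: mask_cpu list for the hybrid "manual" binding (illustrative masks) |
---|
| export OMP_NUM_THREADS=5    #This controls the real CPU-cores per task for the executable
#----
#One hexadecimal mask per task; each mask covers the 8 cores of one chiplet:
#  0x000000ff -> cores  0-7  (chiplet 0)
#  0x0000ff00 -> cores  8-15 (chiplet 1)
#  0x00ff0000 -> cores 16-23 (chiplet 2)
CPU_BIND="mask_cpu:0x000000ff,0x0000ff00,0x00ff0000"
#----
#Launch through the same SLURM_JOBID-named wrapper used for the pure-MPI case:
srun -l -u -N 1 -n 3 -c 8 --gres=gpu:3 --cpu-bind=${CPU_BIND} ./select_gpu_${SLURM_JOBID}.sh ${theExe} |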
Note that the wrapper for selecting the GCDs (logical/Slurm GPUs) is created with a redirection to the cat command. Also note that its name uses the SLURM_JOBID environment variable to make the wrapper unique to this job, and that the wrapper is deleted when execution is finalised. Now, let's take a look at the output after executing the script:
Terminal N. Output for hybrid job with 3 tasks each with 5 CPU threads and 1 GPU shared access. "Manual" method (method 2) for optimal binding.
The output of the hello_jobstep code tells us that the job ran on node nid001004 and that 3 MPI tasks were spawned. Each of the MPI tasks has 5 CPU cores assigned to it (as set by the OMP_NUM_THREADS environment variable in the script) and can be identified with the HWT number. Also, each thread has only 1 visible GCD (logical/Slurm GPU). The hardware identification of the GPU is done via the Bus_ID (as the other GPU_IDs are not physical but relative to the job). After checking the architecture diagram at the top of this page, it can be clearly seen that each of the assigned CPU-cores for the job is on a different L3 cache group chiplet (slurm-socket). But more importantly, it can be seen that the binding is optimal. Click on the tab above to read the script and output for the other method of GPU binding. |
|
Example scripts for: Jobs where each task needs access to multiple GPUs
Exclusive nodes: all 8 GPUs in each node accessible to all 8 tasks in the node
Some applications, like TensorFlow and other machine learning applications, may require access to all the available GPUs in the node. In this case, optimal binding and communication cannot be granted by the scheduler when assigning resources to the srun
launcher. The full responsibility for the optimal use of the resources then lies with the code itself.
As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. This example considers a job that will make use of the 8 GCDs (logical/Slurm GPUs) on each of 2 nodes (16 "allocation-packs" in total). The resource request uses the following two parameters:
#SBATCH --nodes=2 #2 nodes in this example
#SBATCH --exclusive #All resources of each node are exclusive to this job
# #8 GPUs per node (16 "allocation-packs" in total for the job)
Note that only these two allocation parameters are needed to provide the information for the requested number of allocation-packs, and no other parameter related to memory or CPU cores should be provided in the request header.
The use/management of the allocated resources is controlled by the srun
options and some environmental variables. As mentioned above, optimal binding cannot be achieved by the scheduler, so no settings for optimal binding are given to the launcher. Also, all the GPUs in the node are available to each of the tasks:
Column |
---|
|
Code Block |
---|
language | bash |
---|
theme | Emacs |
---|
title | Listing N. exampleScript_2NodesExclusive_16GPUs_8VisiblePerTask.sh |
---|
linenumbers | true |
---|
| #!/bin/bash --login
#SBATCH --job-name=16GPUExclusiveNode-8GPUsVisiblePerTask
#SBATCH --partition=gpu
#SBATCH --nodes=2 #2 nodes in this example
#SBATCH --exclusive            #All resources of each node are exclusive to this job
# #8 GPUs per node (16 "allocation packs" in total for the job)
#SBATCH --time=00:05:00
#SBATCH --account=<yourProject>-gpu #IMPORTANT: use your own project and the -gpu suffix
#----
#Loading needed modules (adapt this for your own purposes):
#For the hello_jobstep example:
module load PrgEnv-cray
module load rocm/<VERSION> craype-accel-amd-gfx90a
#OR for a tensorflow example:
#module load tensorflow/<version>
echo -e "\n\n#------------------------#"
module list
#----
#Printing the status of the given allocation
echo -e "\n\n#------------------------#"
echo "Printing from scontrol:"
scontrol show job ${SLURM_JOBID}
#----
#Definition of the executable (we assume the example code has been compiled and is available in $MYSCRATCH):
exeDir=$MYSCRATCH/hello_jobstep
exeName=hello_jobstep
theExe=$exeDir/$exeName
#----
#MPI & OpenMP settings if needed (these won't work for Tensorflow):
export MPICH_GPU_SUPPORT_ENABLED=1 #This allows for GPU-aware MPI communication among GPUs
export OMP_NUM_THREADS=1 #This controls the real CPU-cores per task for the executable
#----
#TensorFlow settings if needed:
# The following two variables control the real number of threads in Tensorflow code:
#export TF_NUM_INTEROP_THREADS=1 #Number of threads for independent operations
#export TF_NUM_INTRAOP_THREADS=1 #Number of threads within individual operations
#----
#Execution
#Note: srun needs the explicit indication of the full parameters for the use of resources in the job step.
#      These are independent from the allocation parameters (which are not inherited by srun)
#      Each task needs access to all the 8 available GPUs in the node where it's running.
#      So, no optimal binding can be provided by the scheduler.
#      Therefore, "--gpus-per-task" and "--gpu-bind" are not used.
#      Optimal use of resources is now the responsibility of the code.
# "-c 8" is used to force allocation of 1 task per CPU chiplet. Then, the REAL number of threads
# for the code SHOULD be defined by the environment variables above.
# (The "-l" option is for displaying, at the beginning of each line, the taskID that generates the output.)
# (The "-u" option is for unbuffered output, so that output is displayed as soon as it's generated.)
# (If the output needs to be sorted for clarity, then add "| sort -n" at the end of the command.)
echo -e "\n\n#------------------------#"
echo "Test code execution:"
srun -l -u -N 2 -n 16 -c 8 --gres=gpu:8 ${theExe}
#srun -l -u -N 2 -n 16 -c 8 --gres=gpu:8 python3 ${tensorFlowScript}
#----
#Printing information of finished job steps:
echo -e "\n\n#------------------------#"
echo "Printing information of finished jobs steps using sacct:"
sacct -j ${SLURM_JOBID} -o jobid%20,Start%20,elapsed%20
#----
#Done
echo -e "\n\n#------------------------#"
echo "Done" |
|
And the output after executing this example is:
Column |
---|
|
Code Block |
---|
language | bash |
---|
theme | DJango |
---|
title | Terminal N. Output for a 16 GPU job with 16 tasks, each task accessing the 8 GPUs in its running node |
---|
| $ sbatch exampleScript_2NodesExclusive_16GPUs_8VisiblePerTask.sh
Submitted batch job 7798215
$ cat slurm-7798215.out
...
#------------------------#
Test code execution:
0: MPI 000 - OMP 000 - HWT 001 - Node nid002944 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
1: MPI 001 - OMP 000 - HWT 008 - Node nid002944 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
2: MPI 002 - OMP 000 - HWT 016 - Node nid002944 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
3: MPI 003 - OMP 000 - HWT 024 - Node nid002944 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
4: MPI 004 - OMP 000 - HWT 032 - Node nid002944 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
5: MPI 005 - OMP 000 - HWT 040 - Node nid002944 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
6: MPI 006 - OMP 000 - HWT 049 - Node nid002944 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
7: MPI 007 - OMP 000 - HWT 056 - Node nid002944 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
8: MPI 008 - OMP 000 - HWT 000 - Node nid002946 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
9: MPI 009 - OMP 000 - HWT 008 - Node nid002946 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
10: MPI 010 - OMP 000 - HWT 016 - Node nid002946 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
11: MPI 011 - OMP 000 - HWT 025 - Node nid002946 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
12: MPI 012 - OMP 000 - HWT 032 - Node nid002946 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
13: MPI 013 - OMP 000 - HWT 040 - Node nid002946 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
14: MPI 014 - OMP 000 - HWT 048 - Node nid002946 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
15: MPI 015 - OMP 000 - HWT 056 - Node nid002946 - RunTime_GPU_ID 0,1,2,3,4,5,6,7 - ROCR_VISIBLE_GPU_ID 0,1,2,3,4,5,6,7 - GPU_Bus_ID c1,c6,c9,ce,d1,d6,d9,de
...
#------------------------#
Done |
|
The output of the hello_jobstep code tells us that the job ran 8 MPI tasks on node nid002944 and another 8 MPI tasks on node nid002946. Each of the MPI tasks has only 1 CPU-core assigned to it (with the use of the OMP_NUM_THREADS environment variable in the script) and can be identified with the HWT number. Clearly, each of the CPU tasks runs on a different chiplet.
More importantly for this example, each of the MPI tasks has access to the 8 GCDs (logical/Slurm GPUs) in its node. Proper and optimal GPU management and communication is the responsibility of the code. The hardware identification is done via the Bus_ID (as the other GPU_IDs are not physical but relative to the job).
Shared nodes: Many GPUs requested but 2 GPUs bound to each task
Some applications may require each of the spawned tasks to have access to multiple GPUs. In this case, some optimal binding and communication can still be granted by the scheduler when assigning resources with the srun
launcher, although the final responsibility for the optimal use of the multiple GPUs assigned to each task lies with the code itself.
As for all scripts, we provide the parameters for requesting the necessary "allocation-packs" for the job. This example considers a job that will make use of 6 GCDs (logical/Slurm GPUs) on 1 node (6 "allocation-packs" in total). The resource request uses the following two parameters:
#SBATCH --nodes=1 #1 node in this example
#SBATCH --gres=gpu:6 #6 GPUs per node (6 "allocation packs" in total for the job)
Note that only these two allocation parameters are needed to provide the information for the requested number of allocation-packs, and no other parameter related to memory or CPU cores should be provided in the request header.
The use/management of the allocated resources is controlled by the srun
options and some environmental variables. As mentioned above, the scheduler can still achieve a good (though not fully optimal) binding by providing 2 GPUs to each of the tasks:
Column |
---|
|
Code Block |
---|
language | bash |
---|
theme | Emacs |
---|
title | Listing N. exampleScript_1NodeShared_6GPUs_2VisiblePerTask.sh |
---|
linenumbers | true |
---|
| #!/bin/bash --login
#SBATCH --job-name=6GPUSharedNode-2GPUsVisiblePerTask
#SBATCH --partition=gpu
#SBATCH --nodes=1              #1 node in this example
#SBATCH --gres=gpu:6 #6 GPUs per node (6 "allocation packs" in total for the job)
#SBATCH --time=00:05:00
#SBATCH --account=<yourProject>-gpu #IMPORTANT: use your own project and the -gpu suffix
#----
#Loading needed modules (adapt this for your own purposes):
module load PrgEnv-cray
module load rocm/<VERSION> craype-accel-amd-gfx90a
echo -e "\n\n#------------------------#"
module list
#----
#Printing the status of the given allocation
echo -e "\n\n#------------------------#"
echo "Printing from scontrol:"
scontrol show job ${SLURM_JOBID}
#----
#Definition of the executable (we assume the example code has been compiled and is available in $MYSCRATCH):
exeDir=$MYSCRATCH/hello_jobstep
exeName=hello_jobstep
theExe=$exeDir/$exeName
#----
#MPI & OpenMP settings if needed (these won't work for Tensorflow):
export MPICH_GPU_SUPPORT_ENABLED=1 #This allows for GPU-aware MPI communication among GPUs
export OMP_NUM_THREADS=1 #This controls the real CPU-cores per task for the executable
#----
#Execution
#Note: srun needs the explicit indication of the full parameters for the use of resources in the job step.
#      These are independent from the allocation parameters (which are not inherited by srun)
#      For best possible GPU binding using slurm options,
#      "--gpus-per-task=2" and "--gpu-bind=closest" will provide the best GPUs to the tasks.
#      But best is still not optimal.
#      Each task has access to 2 of the available GPUs in the node where it's running.
#      Optimal use of the resources of each of the 2 GPUs accessible per task is now the responsibility of the code.
#      IMPORTANT: Note the use of "-c 16" to "reserve" 2 chiplets per task, which is consistent with
# the use of "--gpus-per-task=2" to "reserve" 2 GPUs per task. Then, the REAL number of
# threads for the code SHOULD be defined by the environment variables above.
# (The "-l" option is for displaying, at the beginning of each line, the taskID that generates the output.)
# (The "-u" option is for unbuffered output, so that output is displayed as soon as it's generated.)
# (If the output needs to be sorted for clarity, then add "| sort -n" at the end of the command.)
echo -e "\n\n#------------------------#"
echo "Test code execution:"
srun -l -u -N 1 -n 3 -c 16 --gres=gpu:6 --gpus-per-task=2 --gpu-bind=closest ${theExe}
#----
#Printing information of finished job steps:
echo -e "\n\n#------------------------#"
echo "Printing information of finished jobs steps using sacct:"
sacct -j ${SLURM_JOBID} -o jobid%20,Start%20,elapsed%20
#----
#Done
echo -e "\n\n#------------------------#"
echo "Done" |
|
And the output after executing this example is:
Column |
---|
|
Code Block |
---|
language | bash |
---|
theme | DJango |
---|
title | Terminal N. Output for a 6 GPU job with 3 tasks and 2 GPUs per task |
---|
| $ sbatch exampleScript_1NodeShared_6GPUs_2VisiblePerTask.sh
Submitted batch job 7842635
$ cat slurm-7842635.out
...
#------------------------#
Test code execution:
0: MPI 000 - OMP 000 - HWT 000 - Node nid002948 - RunTime_GPU_ID 0,1 - ROCR_VISIBLE_GPU_ID 0,1 - GPU_Bus_ID d1,d6
1: MPI 001 - OMP 000 - HWT 016 - Node nid002948 - RunTime_GPU_ID 0,1 - ROCR_VISIBLE_GPU_ID 0,1 - GPU_Bus_ID c9,ce
2: MPI 002 - OMP 000 - HWT 032 - Node nid002948 - RunTime_GPU_ID 0,1 - ROCR_VISIBLE_GPU_ID 0,1 - GPU_Bus_ID d9,de
...
#------------------------#
Done |
|
The output of the hello_jobstep code tells us that the job ran 3 MPI tasks on node nid002948. Each of the MPI tasks has only 1 CPU-core assigned to it (with the use of the OMP_NUM_THREADS environment variable in the script) and can be identified with the HWT number. Clearly, each of the CPU tasks runs on a different chiplet. More importantly, the chiplets are spaced every 16 cores (two chiplets), thanks to the "-c 16" setting in the srun command, allowing for the best binding of the 2 GPUs assigned to each task.
Each of the MPI tasks has access to 2 GCDs (logical/Slurm GPUs) in its node. The hardware identification is done via the Bus_ID (as the other GPU_IDs are not physical but relative to the job).
The assigned GPUs are indeed the 2 closest to the CPU cores of each task, as can be verified with the architecture diagram provided at the top of this page. Final proper and optimal GPU management and communication is the responsibility of the code.
Example scripts for: Packing GPU jobs
Packing the execution of 8 independent instances each using 1 GCD (logical/Slurm GPU)
...
Column |
---|
|
Code Block |
---|
language | bash |
---|
theme | Emacs |
---|
title | Listing N. exampleScript_1NodeExclusive_8GPUs_jobPacking.sh |
---|
linenumbers | true |
---|
| #!/bin/bash --login
#SBATCH --job-name=JobPacking8GPUsExclusive-bindMethod1
#SBATCH --partition=gpu
#SBATCH --nodes=1              #1 node in this example
#SBATCH --exclusive #All resources of the node are exclusive to this job
# #8 GPUs per node (8 "allocation-packs" in total for the job)
#SBATCH --time=00:05:00
#SBATCH --account=<yourProject>-gpu #IMPORTANT: use your own project and the -gpu suffix
#----
#Loading needed modules (adapt this for your own purposes):
module load PrgEnv-cray
module load rocm/<VERSION> craype-accel-amd-gfx90a
echo -e "\n\n#------------------------#"
module list
#----
#Printing the status of the given allocation
echo -e "\n\n#------------------------#"
echo "Printing from scontrol:"
scontrol show job ${SLURM_JOBID}
#----
#Job Packing Wrapper: Each srun-task will use a different instance of the executable.
jobPackingWrapper="jobPackingWrapper.sh"
#----
#MPI & OpenMP settings
#No need for 1GPU steps:export MPICH_GPU_SUPPORT_ENABLED=1 #This allows for GPU-aware MPI communication among GPUs
export OMP_NUM_THREADS=1 #This controls the real CPU-cores per task for the executable
#----
#Execution
#Note: srun needs the explicit indication of the full parameters for the use of resources in the job step.
#      These are independent from the allocation parameters (which are not inherited by srun)
#      "-c 8" is used to force allocation of 1 task per CPU chiplet. Then, the REAL number of threads
#      for the code SHOULD be defined by the environment variables above.
#      (The "-l" option is for displaying, at the beginning of each line, the taskID that generates the output.)
#      (The "-u" option is for unbuffered output, so that output is displayed as soon as it's generated.)
echo -e "\n\n#------------------------#"
echo "Test code execution:"
srun -l -u -N 1 -n 8 -c 8 --gres=gpu:8 --gpus-per-task=1 --gpu-bind=closest ./${jobPackingWrapper}
#----
#Printing information of finished job steps:
echo -e "\n\n#------------------------#"
echo "Printing information of finished jobs steps using sacct:"
sacct -j ${SLURM_JOBID} -o jobid%20,Start%20,elapsed%20
#----
#Done
echo -e "\n\n#------------------------#"
echo "Done" |
|
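The content of jobPackingWrapper.sh is not shown in the listing above. As an illustration only, such a wrapper typically uses SLURM_PROCID to point each srun task to its own independent instance; the directory and file names below are assumptions for this sketch, not part of the original example.
Code Block |
---|
language | bash |
---|
theme | Emacs |
---|
title | Sketch only: a possible jobPackingWrapper.sh (illustrative names) |
---|
| #!/bin/bash
#Each of the 8 srun tasks runs an independent instance, selected via SLURM_PROCID (0..7 here).
instanceDir="instance_${SLURM_PROCID}"    #assumed per-instance working directory
cd "${instanceDir}" || exit 1
#Each instance already sees a single GCD thanks to "--gpus-per-task=1 --gpu-bind=closest" in the srun line.
exec ./my_gpu_executable input_${SLURM_PROCID}.dat    #assumed executable and input file names |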
...