...

The GPU partition of Setonix is made up of 192 nodes, 38 of which are high-memory nodes (512 GB of RAM instead of 256 GB). Each GPU node features 4 AMD MI250X GPUs, as depicted in Figure 1. Each MI250X comprises 2 Graphics Compute Dies (GCDs), each of which is effectively seen as a standalone GPU by the system. A 64-core AMD Trento CPU is connected to the four MI250X GPUs with the AMD Infinity Fabric interconnect, the same interconnect used between the GPU cards, with a peak bandwidth of 200 Gb/s. For more information refer to the Setonix General Information page. Each GCD can access 64 GB of GPU memory, which totals 128 GB per MI250X and 256 GB per standard GPU node.

Figure 1. A GPU node of Setonix
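Because each GCD is seen by the system as a standalone GPU, a quick way to inspect the eight GPU devices of a node is to run the rocm-smi utility from within a GPU job. The following is a minimal sketch; the project name is a placeholder and the request options follow the job examples later on this page.

Code Block
languagebash
themeEmacs
titleListing the GCDs of a GPU node (illustrative)
# Request an interactive allocation on one GPU node (project name is a placeholder)
salloc --partition=gpu --account=project-gpu --nodes=1 --gres=gpu:8 --exclusive --time=00:10:00
# Load ROCm to make rocm-smi available, then run it on the allocated node
module load rocm
srun -n 1 rocm-smi   # each of the 8 GCDs is reported as a separate GPU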

...

Several scientific applications are already able to offload computations to the MI250X GPUs; many others are in the process of being ported to AMD GPUs. Here is a list of the main ones and their current status.

Name        AMD GPU Acceleration    Module on Setonix
Amber       Yes                     Yes
Gromacs     Yes                     Yes
LAMMPS      Yes                     Yes
NAMD        Yes
NekRS       Yes
PyTorch     Yes                     Yes*
ROMS        No
Tensorflow  Yes                     Yes*

Table 1. List of popular applications. * indicates that the module is provided as a container.

Module names of AMD GPU applications end with the suffix amd-gfx90a. The most accurate list is given by the module command:

...

Popular numerical routines and functions have been implemented by AMD to run on their GPU hardware. All of the following are available when loading the rocm module.

Name        Description
rocFFT      Fast Fourier Transform library for the ROCm platform. Documentation pages (external site).
rocBLAS     AMD library for Basic Linear Algebra Subprograms (BLAS) on the ROCm platform. Documentation pages (external site).
rocSOLVER   Work-in-progress implementation of a subset of LAPACK functionality on the ROCm platform. Documentation pages (external site).
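As a minimal sketch of how these libraries are used in practice (the source file name is a placeholder and rocBLAS is chosen only as an example), a HIP code calling one of them can be compiled and linked once the rocm module is loaded:

Code Block
languagebash
themeEmacs
titleLinking a ROCm library with hipcc (illustrative)
# Load ROCm to expose the hipcc compiler, headers and libraries
module load rocm
# Compile a HIP source file that calls rocBLAS and link against the library
hipcc my_code.cpp -o my_code -lrocblas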

...

The default ROCm installation is rocm/5.2.3, provided by HPE Cray. In addition, Pawsey staff have installed more recent versions, up to rocm/5.7.3, from source using ROCm-from-source. These are experimental installations and users might encounter compilation or linking errors. You are encouraged to explore them during development and to report any issues. We recommend using the latest available version unless it creates problems for your code. Available versions can be checked with the command:

module avail rocm

Submitting Jobs

You can submit GPU jobs to the gpu, gpu-dev and gpu-highmem Slurm partitions using your GPU allocation.
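For example, a job script can be submitted to one of these partitions as follows (the project and script names are placeholders); note the -gpu suffix of the GPU allocation, discussed in the Accounting section below.

Code Block
languagebash
themeEmacs
titleSubmitting to a GPU partition (illustrative)
# Submit a batch script to the gpu partition, charging the project's GPU allocation
sbatch --partition=gpu --account=project-gpu jobscript.sh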

...

If you are using ROCm libraries, such as rocFFT, to offload computations to GPUs, you should be able to use any compiler to link them into your code.
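For instance, the following sketch links rocFFT using the Cray compiler wrapper CC instead of hipcc. It assumes the rocm module sets the ROCM_PATH environment variable; the source file name is a placeholder.

Code Block
languagebash
themeEmacs
titleLinking rocFFT with the Cray compiler wrapper (illustrative)
module load rocm
# Point the Cray wrapper at the ROCm headers and libraries, link rocFFT and the HIP runtime
CC -I${ROCM_PATH}/include -L${ROCM_PATH}/lib my_fft_code.cpp -o my_fft_code -lrocfft -lamdhip64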

For HIP code, as well as for code making use of OpenMP offloading, you must use (indicative compile lines are given after the list below):

  • hipcc for C/C++
  • ftn (the wrapper for the Cray Fortran compiler from PrgEnv-cray) for Fortran.

...

  • This compiler also allows GPU offloading with OpenACC.
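The following is an indicative sketch of such compile lines; the source file names are placeholders and the exact flags may vary with compiler versions.

Code Block
languagebash
themeEmacs
titleIndicative compile lines for GPU offloading (illustrative)
# Modules assumed by the examples below
module load PrgEnv-cray craype-accel-amd-gfx90a rocm

# HIP C/C++ code
hipcc --offload-arch=gfx90a hip_code.cpp -o hip_code

# OpenMP offloading in C/C++ with hipcc
hipcc -fopenmp --offload-arch=gfx90a omp_code.cpp -o omp_code

# OpenMP offloading in Fortran with the Cray Fortran wrapper
ftn -homp omp_code.f90 -o omp_code_f

# OpenACC in Fortran with the Cray Fortran wrapper
ftn -hacc acc_code.f90 -o acc_code_f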

When using hipcc, note that the MPI headers and libraries are not automatically included (contrary to the automatic inclusion provided by the Cray compiler wrappers). Therefore, if your code also requires MPI, the location of the MPI headers and libraries must be provided to hipcc, together with the GPU Transport Layer (GTL) libraries:

...



Code Block
languagebash
themeEmacs
titleMPI environment variable for GPU-GPU communication
export MPICH_GPU_SUPPORT_ENABLED=1

OpenACC for Fortran codes is implemented in the Cray Fortran compiler.



Accounting

Each MI250X GCD, which corresponds to a Slurm GPU, is charged 64 SU per hour. This means the use of an entire GPU node is charged 512 SU per hour. In general, a job is charged for the largest proportion of cores, memory, or GPUs used, rounded up to the nearest 1/8 of a node (corresponding to an individual MI250X GCD). Note that GPU node usage is accounted against GPU allocations, which have the -gpu suffix and are separate from CPU allocations.
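For example, a job that uses two GCDs (and no more than the corresponding share of cores and memory) for 10 hours is charged 2 × 64 SU/hour × 10 hours = 1280 SU.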

...



Code Block
languagebash
themeEmacs
titleExample 1 : One process with a single GPU using shared node access
linenumberstrue
#!/bin/bash --login

#SBATCH --account=project-gpu
#SBATCH --partition=gpu
#SBATCH --nodes=1              #1 node in this example
#SBATCH --gres=gpu:1           #1 GPU per node (1 "allocation pack" in total for the job)
#SBATCH --time=00:05:00

#----
#Loading needed modules (adapt this for your own purposes):
module load PrgEnv-cray
module load rocm craype-accel-amd-gfx90a
module list

#----
#MPI & OpenMP settings
export OMP_NUM_THREADS=1 #This controls the real number of threads per task

#----
#Execution
srun -N 1 -n 1 -c 8 --gres=gpu:1 ./program


Code Block
languagebash
themeEmacs
titleExample 2 : Single CPU process that uses the eight GPUs of the node
linenumberstrue
#!/bin/bash --login

#SBATCH --account=project-gpu
#SBATCH --partition=gpu
#SBATCH --nodes=1              #1 node in this example
#SBATCH --exclusive            #All resources of the node are exclusive to this job
#                              #8 GPUs per node (8 "allocation packs" in total for the job)
#SBATCH --time=00:05:00

#----
#Loading needed modules (adapt this for your own purposes):
module load PrgEnv-cray
module load rocm craype-accel-amd-gfx90a
module list

#----
#MPI & OpenMP settings
export OMP_NUM_THREADS=1           #This controls the real number of threads per task

#----
#Execution
srun -N 1 -n 1 -c 64 --gres=gpu:8 ./program


Code Block
languagebash
themeEmacs
titleExample 3 : Eight MPI processes, each with a single GPU (using exclusive node access)
linenumberstrue
#!/bin/bash --login

#SBATCH --account=project-gpu
#SBATCH --partition=gpu
#SBATCH --nodes=1              #1 node in this example
#SBATCH --exclusive            #All resources of the node are exclusive to this job
#                              #8 GPUs per node (8 "allocation packs" in total for the job)
#SBATCH --time=00:05:00

#----
#Loading needed modules (adapt this for your own purposes):
module load PrgEnv-cray
module load rocm craype-accel-amd-gfx90a
module list

#----
#MPI & OpenMP settings
export MPICH_GPU_SUPPORT_ENABLED=1 #This allows for GPU-aware MPI communication among GPUs
export OMP_NUM_THREADS=1           #This controls the real number of threads per task

#----
#Execution
srun -N 1 -n 8 -c 8 --gres=gpu:8 --gpus-per-task=1 --gpu-bind=closest ./program


Note
titleMethod 1 may fail for some applications.

The use of --gpu-bind=closest may not work for all codes. For those codes, "manual" binding may be the only reliable method if they rely on OpenMP or OpenACC pragmas to move data between host and GPU and attempt to use GPU-to-GPU enabled MPI communication.
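A minimal sketch of such manual binding is shown below. It assumes one MPI rank per GCD and uses a small wrapper script (the file name is a placeholder) to set the visible device from the Slurm local rank before launching the program; note that this simple mapping does not attempt to optimise CPU-GCD affinity.

Code Block
languagebash
themeEmacs
titleManual GPU binding via a wrapper script (illustrative)
# Create a wrapper that restricts each rank to the GCD matching its local rank on the node
cat > bind_gpu.sh << 'EOF'
#!/bin/bash
export ROCR_VISIBLE_DEVICES=$SLURM_LOCALID
exec "$@"
EOF
chmod +x bind_gpu.sh

# Launch 8 ranks, each seeing a single GCD, without relying on --gpu-bind=closest
srun -N 1 -n 8 -c 8 --gres=gpu:8 ./bind_gpu.sh ./program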

Some codes, like OpenMM, also make use of runtime environment variables and require ROCR_VISIBLE_DEVICES to be set explicitly:

Code Block
languagebash
themeEmacs
titleSetting visible devices manually
export ROCR_VISIBLE_DEVICES=0,1 # selects the first two GCDs (those of the first MI250X)



Full guides

...