...

The default ROCm installation is rocm/5.02.23, provided by HPE Cray. In addition, Pawsey staff have installed the more recent rocm/5.4.3 from source using ROCm-from-source. This installation is experimental, so users might encounter compilation or linking errors; you are encouraged to explore it during development and to report any issues. For production jobs, however, we currently recommend rocm/5.02.23.
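
As a quick sketch (assuming both installations are exposed as environment modules under the names given above), you can inspect and switch between them with the module command:

Code Block
languagebash
themeEmacs
titleSelecting a ROCm installation (illustrative)
# list the ROCm installations available as modules
module avail rocm

# load the experimental Pawsey build for development and testing
module load rocm/5.4.3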

Submitting Jobs

You can submit GPU jobs to the gpu, gpu-dev and gpu-highmem Slurm partitions using your GPU allocation.
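
A minimal batch script sketch is shown below; yourproject is a placeholder for your project code (note the -gpu suffix on the account, see Accounting below), and the exact resource-request flags recommended for GPU jobs may differ, so check the scheduler documentation for your system.

Code Block
languagebash
themeEmacs
titleIllustrative GPU job script (account name and resource flags are placeholders)
#!/bin/bash --login
#SBATCH --account=yourproject-gpu     # GPU allocations carry the -gpu suffix
#SBATCH --partition=gpu               # or gpu-dev / gpu-highmem
#SBATCH --nodes=1
#SBATCH --gres=gpu:1                  # request one Slurm GPU (one MI250X GCD)
#SBATCH --time=01:00:00

srun ./my_gpu_program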

...

If you are using ROCm libraries, such as rocFFT, to offload computations to GPUs, you should be able to use any compiler to link them into your code.
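
For instance, a program calling rocFFT could be linked with the Cray cc wrapper roughly as follows. This is a hedged sketch: it assumes the loaded rocm module exports ROCM_PATH and that the HIP runtime library (amdhip64) is also needed at link time.

Code Block
languagebash
themeEmacs
titleLinking against rocFFT (illustrative)
# ROCM_PATH is assumed to point at the loaded ROCm installation
cc -o fft_demo fft_demo.c \
   -I${ROCM_PATH}/include \
   -L${ROCM_PATH}/lib -lrocfft -lamdhip64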

For HIP code, as well as for code making use of OpenMP offloading, you must use one of the following compilers (see the compile sketch after this list):

  • hipcc

...

  • for C/C++
  • ftn (wrapper for cray-fortran from PrgEnv-cray) for Fortran. This compiler also allows GPU offloading with OpenACC.
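
As a minimal compile sketch (the source file name is hypothetical, and gfx90a is the architecture of the MI250X GPUs discussed under Accounting; adjust it for other hardware):

Code Block
languagebash
themeEmacs
titleCompiling a HIP source with hipcc (illustrative)
# build a single HIP source file for the MI250X (gfx90a) architecture
hipcc --offload-arch=gfx90a -o saxpy saxpy.hip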

When using hipcc, note that MPI headers and libraries are not included automatically (contrary to the automatic inclusion provided by the Cray wrapper scripts). Therefore, if your code also requires MPI, the location of the MPI headers and libraries must be provided to hipcc, as well as the GPU Transport Layer libraries:

...

Code Block
languagebash
themeEmacs
titleMPI environment variable for GPU-GPU communication
export MPICH_GPU_SUPPORT_ENABLED=1

...



Accounting

Each MI250X GCD, which corresponds to a Slurm GPU, is charged 64 SU per hour, so the use of an entire GPU node is charged 512 SU per hour. In general, a job is charged for the largest proportion of core, memory, or GPU usage, rounded up to the nearest 1/8 of a node (corresponding to an individual MI250X GCD). Note that GPU node usage is accounted against GPU allocations with the -gpu suffix, which are separate from CPU allocations.
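
For example, a job running on two GCDs for five hours is charged 2 × 64 × 5 = 640 SU, while a job occupying a full GPU node (8 GCDs) for the same period is charged 8 × 64 × 5 = 2,560 SU.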

...