PyTorch is an optimised tensor library for deep learning using GPUs and CPUs.
Introduction
PyTorch is one of the most popular frameworks for developing Machine Learning and Deep Learning applications. It provides users with building blocks to define neural networks, including a variety of predefined layers, activation functions and optimisation algorithms, as well as utilities to load and store data. It supports GPU acceleration for training and inference on a range of hardware, including NVIDIA, AMD and Intel GPUs.
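As a brief, self-contained illustration of these building blocks (the model, data and hyperparameters below are purely illustrative and not part of any Pawsey setup), a small network can be defined and trained as follows:

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Illustrative toy model: two linear layers with a ReLU activation.
class ToyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x):
        return self.layers(x)

# Synthetic data, a predefined loss function and an optimisation algorithm.
dataset = TensorDataset(torch.randn(256, 8), torch.randn(256, 1))
loader = DataLoader(dataset, batch_size=32, shuffle=True)
model = ToyNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for x, y in loader:            # one pass over the data
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()            # autograd computes the gradients
    optimiser.step()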
PyTorch on Setonix
Setonix can support Deep Learning workloads thanks to the large number of AMD GPUs installed on the system. PyTorch must be compiled from source to make use of the Cray MPI library for distributed training and of a suitable ROCm version to use the GPUs. To make this easier for users, Pawsey has developed a Docker container for PyTorch, built with all the necessary dependencies and configuration options to run efficiently on Setonix.
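As a rough sketch of what distributed training looks like from the user's side, the snippet below initialises torch.distributed and wraps a model in DistributedDataParallel. It is illustrative only, and assumes the PyTorch build includes MPI support as described above and that one process is launched per GPU by the workload manager:

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Sketch only: assumes a PyTorch build with MPI support and one process per GPU.
dist.init_process_group(backend="mpi")          # the MPI library handles rank discovery
local_gpu = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(local_gpu)                # ROCm devices are exposed through the CUDA API

model = torch.nn.Linear(8, 1).to(local_gpu)
ddp_model = DDP(model, device_ids=[local_gpu])  # gradients are averaged across ranks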
The Docker container is publicly available in the Pawsey repository on Quay.io. Users can build on top of the container to install additional Python packages (see the sketch after the pull command below). It can be pulled with Docker using the following command:
$ docker pull quay.io/pawsey/pytorch:2.1.2-rocm5.6.0
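To build on top of the container, a Dockerfile can start from the Pawsey image and install extra packages with pip. The example below is a sketch only: the package names are placeholders, and it assumes pip is available in the image.

# Hypothetical Dockerfile extending the Pawsey PyTorch image.
FROM quay.io/pawsey/pytorch:2.1.2-rocm5.6.0
RUN pip install --no-cache-dir scikit-learn torchmetrics   # placeholder packages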
The container can also be pulled using Singularity:
$ singularity pull pytorch.sif docker://quay.io/pawsey/pytorch:2.1.2-rocm5.6.0
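Once pulled, the container can be used to check that PyTorch sees the GPUs. The command below is illustrative; the --rocm flag binds the host ROCm stack into the container, and the exact invocation may vary with the Singularity/Apptainer version installed on the system.

$ singularity exec --rocm pytorch.sif python3 -c "import torch; print(torch.cuda.is_available())"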
An official AMD container is also available, but it lacks both Cray MPI support and some core Python packages, making it unusable on Setonix.
The PyTorch container developed by Pawsey is also available on Setonix as a module, installed using SHPC (Singularity Registry HPC).
$ module avail pytorch

-------------- /software/setonix/2023.08/containers/views/modules --------------
   pytorch/2.1.0-rocm5.6.0
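After loading the module, the containerised python3 can be used in the same way as above to confirm that PyTorch sees the GPUs. The commands below are a sketch and assume the SHPC module exposes a python3 wrapper on the PATH.

$ module load pytorch/2.1.0-rocm5.6.0
$ python3 -c "import torch; print(torch.cuda.is_available())"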