PyTorch is an optimised tensor library for deep learning using GPUs and CPUs.
...
PyTorch is one of the most popular frameworks for developing machine learning and deep learning applications. It provides users with building blocks to define neural networks, including a variety of predefined layers, activation functions, optimisation algorithms, and utilities to load and store data. It supports GPU acceleration for training and inference on a variety of hardware, such as NVIDIA, AMD and Intel GPUs.
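As a minimal, generic illustration of these building blocks (not Setonix-specific code; the layer sizes and the random data are placeholders), the following sketch defines a small network with predefined layers, an optimiser, and a data loader:

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    # Small network assembled from predefined layers and activation functions
    model = nn.Sequential(
        nn.Linear(16, 32),
        nn.ReLU(),
        nn.Linear(32, 2),
    )

    # Optimisation algorithm and loss function
    optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Utility to load data in batches (random tensors stand in for a real dataset)
    dataset = TensorDataset(torch.randn(128, 16), torch.randint(0, 2, (128,)))
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    # One training pass over the data
    for inputs, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        optimiser.step()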
PyTorch installation on Setonix
Setonix can support deep learning workloads thanks to the large number of AMD GPUs installed on the system. PyTorch must be compiled from source to make use of the Cray MPI library for distributed training and of a suitable ROCm version to use the GPUs. To make this easier for users, Pawsey has developed a Docker container for PyTorch in which the library has been built with all the necessary dependencies and configuration options to run efficiently on Setonix.
...
$ docker pull quay.io/pawsey/pytorch:2.2.0-rocm5.7.3
The container can also be pulled using Singularity:
...
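As an illustration, a pull of the same image using the standard singularity pull syntax would look like the following (the exact command recommended for Setonix may differ):

$ singularity pull docker://quay.io/pawsey/pytorch:2.2.0-rocm5.7.3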
Writing PyTorch code for AMD GPUs
To increase portability and to minimise code changes, PyTorch implements support for AMD GPUs within the interface initially dedicated only to CUDA. More information is available at HIP (ROCm) semantics (external site).
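In practice this means that code written against the CUDA device API runs unchanged on AMD GPUs. The following sketch assumes a ROCm build of PyTorch (such as the container above) with a visible AMD GPU:

    import torch

    # On ROCm builds the CUDA interface is backed by HIP, so the usual
    # "cuda" device name targets the AMD GPU; no code changes are needed.
    print(torch.cuda.is_available())   # True when an AMD GPU is visible
    print(torch.version.hip)           # HIP/ROCm version string on ROCm builds, None otherwise

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    x = torch.randn(1024, 1024, device=device)
    y = x @ x                          # matrix multiply executed on the GPU via HIP
    print(y.device)                    # reports a "cuda" device even though the hardware is AMD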