Using ProteinMPNN on AMD GPUs at Pawsey

Installation

  1. Clone the ProteinMPNN repository into your software directory:

cd $MYSOFTWARE
git clone https://github.com/dauparas/ProteinMPNN

Dependencies

ProteinMPNN requires PyTorch, which is available as an optimized system-wide module on Pawsey. To use it:

  1. Check available PyTorch versions:

module avail pytorch

  2. Load the appropriate PyTorch module:

module load pytorch/2.2.0-rocm5.7.3

Note: No additional conda environment setup is required as all dependencies are handled by the system module.
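
If you want to confirm that the loaded module can see a GPU, a quick check from inside a GPU allocation (for example via salloc, shown here as a sketch with a placeholder time limit) is:

salloc --partition=gpu --nodes=1 --gres=gpu:1 --account=${PAWSEY_PROJECT}-gpu --time=00:10:00
module load pytorch/2.2.0-rocm5.7.3
srun -n 1 -c 8 --gres=gpu:1 python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"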

Running ProteinMPNN on Pawsey GPUs

SLURM Job Script Template

Create a job script (e.g., run_proteinmpnn.sh) with the following configuration. In particular, note that the script loads the pytorch module and prefixes each Python task with srun and the correct GPU parameters.
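
A minimal sketch of such a script is shown below. Replace project-gpu with your own GPU account (your project code with a -gpu suffix), and treat the input PDB path, output folder, and ProteinMPNN options as illustrative placeholders; see the examples/ folder in the ProteinMPNN repository for the full set of options.

#!/bin/bash --login
#SBATCH --job-name=proteinmpnn
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --gres=gpu:1
#SBATCH --account=project-gpu
#SBATCH --time=1:00:00

# Load the system-wide ROCm-enabled PyTorch module
module load pytorch/2.2.0-rocm5.7.3

cd $MYSOFTWARE/ProteinMPNN

# Prefix the python task with srun and explicit GPU binding
srun -N 1 -n 1 -c 8 --gres=gpu:1 --gpus-per-task=1 --gpu-bind=closest \
    python protein_mpnn_run.py \
        --pdb_path inputs/my_structure.pdb \
        --out_folder outputs/ \
        --num_seq_per_target 2 \
        --sampling_temp "0.1"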

Important Parameters

  • --partition=gpu: Specifies the GPU partition

  • --nodes=1: Number of nodes to use

  • --gres=gpu:1: Requests one GPU

  • --account=${PAWSEY_PROJECT}-gpu: Your project's GPU account

  • --time=1:00:00: Job time limit (adjust as needed)

Running Tasks

Each Python task in the script uses srun with specific GPU parameters (see the sketch after this list):

  • -N 1: One node

  • -n 1: One task

  • -c 8: 8 CPU cores per task

  • --gres=gpu:1: One GPU per task

  • --gpus-per-task=1: One GPU per task

  • --gpu-bind=closest: Optimal GPU-CPU binding
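
As a sketch of how this looks for a workflow with more than one Python step (based on the multi-chain example shipped in the ProteinMPNN repository; the input and output paths are placeholders, and the helper-script options should be checked against the examples/ folder):

# Parse a folder of PDB files into a single JSONL input
srun -N 1 -n 1 -c 8 --gres=gpu:1 --gpus-per-task=1 --gpu-bind=closest \
    python helper_scripts/parse_multiple_chains.py \
        --input_path inputs/pdbs/ \
        --output_path outputs/parsed_chains.jsonl

# Design sequences for the parsed structures
srun -N 1 -n 1 -c 8 --gres=gpu:1 --gpus-per-task=1 --gpu-bind=closest \
    python protein_mpnn_run.py \
        --jsonl_path outputs/parsed_chains.jsonl \
        --out_folder outputs/ \
        --num_seq_per_target 2 \
        --sampling_temp "0.1"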

Submit the Job

Submit your job to the SLURM scheduler:
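
sbatch run_proteinmpnn.sh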

Check job status:
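
squeue -u $USER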

Further Reading

For more details on running GPU workflows on Setonix, refer to the Setonix GPU Partition Quick Start guide.
