Setonix Migration Guide

Warning

Magnus, Zeus and Topaz, and their associated filesystems, have been decommissioned and are no longer available.

This page is intended to help users of previous Pawsey GPU supercomputing infrastructure (such as Topaz) to transition to using the Setonix supercomputer.


This migration guide focuses on the changes to, and additional features of, the new system, and assumes the reader is familiar with working with supercomputers. The Supercomputing Documentation provides background on important supercomputing concepts, which may be helpful if you are using supercomputers for the first time.

Throughout this guide, links are provided to relevant pages in the general Supercomputing Documentation and to the Setonix User Guide, which provides documentation specific to the Setonix system. The Setonix GPU Partition Quick Start provides further details for using the GPUs in Setonix.

This guide has been updated in preparation for the migration of GPU projects on Topaz to Setonix Phase 2.

Starting with Setonix

Setonix is the new petascale supercomputer at the Pawsey Supercomputing Centre, ranking 15th in the world for performance on the Top500 list and 4th on the Green500 list for energy efficiency.

It arrived in two phases:

  • Setonix Phase 1: An HPE Cray EX supercomputer with an initial 504 AMD CPU nodes, available to merit projects in 2022.
  • Setonix Phase 2: An expanded system with 1600 CPU nodes and 192 GPU nodes, available to merit projects in 2023.

Setonix Phase 2 GPUs replace Pawsey's previous generation of GPU infrastructure, specifically the Topaz GPU cluster and associated filesystems. This migration guide has been updated to outline changes for researchers transitioning from Topaz to Setonix Phase 2.

Significant changes to the GPU compute architecture include:

  • Moving from 16 or 28 cores per node on Intel host CPUs to 64 cores per node on the AMD host CPU.
  • Increasing from 192 GB to 256 GB of RAM per node in some cases.
  • Transitioning from NVIDIA V100 and P100 GPUs to AMD MI250X GPUs.

For more details refer to the System overview section of the Setonix User Guide.

Setonix runs a newer version of the Cray Linux Environment familiar to users of Magnus and Galaxy, and also includes scheduling features previously provided separately on Zeus and Topaz. This enables the creation of end-to-end workflows running on Setonix, as detailed in the following sections.

Supercomputing filesystems 

Several new filesystems are available with the Setonix supercomputer.

  • The previous /scratch filesystem is replaced by a new 14 petabyte /scratch filesystem.
  • The previous /home filesystem is replaced by a new /home filesystem.
  • For software and job scripts, the previous /pawsey and /group filesystems are replaced by a new /software filesystem.
  • For project data, the previous /group filesystem is replaced by the Acacia object store.


These filesystems have the following limits:

Filesystem | Time limit             | Capacity           | File count
/home      | Duration of project(s) | 1 GB per person    | 10,000 files per person
/software  | Duration of project    | 256 GB per project | 100,000 files per project
/scratch   | 30 days per file       | 1 PB per project   | 1,000,000 files per project

For more information on Pawsey filesystems refer to the File Management page.

For information specific to Setonix refer to the Filesystems and data management section of the Setonix User Guide.

Loading modules and using containers 

The software environment on Setonix is provided by a module environment very similar to that of the previous supercomputing systems.

The module environment is provided by Lmod, which was used previously on Topaz.

Setonix has a newer version of the Cray Linux Environment than was present on Magnus and Galaxy; as on those systems, programming environment modules are used to select the compilation environment.

For containers, researchers can continue to use Singularity in a similar way to previous systems. Some system-wide installations (in particular, for bioinformatics) are now provided as container modules using SHPC: these packages are installed as containers, but the user interface is the same as for compiled applications (load the module, then run the executables).
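
As a sketch of the difference, the commands below first use a hypothetical SHPC container module and then run a container directly with Singularity; the module names, versions and image shown here are placeholders rather than confirmed Setonix installations.

  # Hypothetical module names, versions and container image, for illustration only.
  # An SHPC container module is used like any other module:
  module load samtools/1.15
  samtools --version    # the executable transparently runs inside its container

  # Containers can still be run explicitly with Singularity:
  module load singularity/3.8.6
  singularity exec docker://ubuntu:22.04 cat /etc/os-release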

There is a library of GPU-enabled containers that support the AMD MI250X GPUs available from the AMD Infinity Hub. Note that these containers may be limited in parallelism to one node, or one GPU, depending on the particular software.

Key changes to the software environment include:

  • Lmod is used to provide modules in place of the previous environment modules implementation.
  • Module versions should be specified when working with modules (see the example after this list).
  • The PrgEnv-gnu programming environment is the default.
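
For example, a typical Lmod session might look like the following sketch, where the package name and version are placeholders; use module avail to see what is actually installed.

  # Illustrative Lmod commands; the package name and version are placeholders.
  module avail                  # list modules visible in the current environment
  module spider gromacs         # search the full module hierarchy for a package
  module load gromacs/2022.2    # always load an explicit version
  module list                   # show currently loaded modules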

Refer to the Software Stack pages for more detail on using modules and containers.

For information specific to Setonix refer to the Software Environment section of the Setonix User Guide.

Installing and maintaining your software 

The Setonix supercomputer has a different hardware architecture to previous supercomputing systems, and the compilers and libraries available may have changed or have newer versions. It is strongly recommended that project groups reinstall any necessary domain-specific software. This is also an opportunity for project groups to review the software in use and consider updating to recent versions, which typically contain newer features and improved performance.

Due to the change from NVIDIA GPUs on Topaz to AMD GPUs on Setonix, AMD's ROCm and HIP technologies should be used instead of CUDA, and are provided via the rocm module.
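
A minimal porting sketch for a single CUDA source file is shown below; the ROCm module version and file names are illustrative, and real codes usually require some manual adjustment after translation.

  # Illustrative porting steps; the ROCm module version and file names are placeholders.
  module load rocm/5.0.2
  hipify-perl saxpy.cu > saxpy_hip.cpp                  # translate CUDA API calls to HIP
  hipcc --offload-arch=gfx90a saxpy_hip.cpp -o saxpy    # gfx90a targets the AMD MI250X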

Key changes to software installation and maintenance on Setonix include:

  • The new processor architecture has seen the Intel programming environment (PrgEnv-intel) replaced by an AMD programming environment (PrgEnv-aocc).
  • The GNU programming environment has newer versions of the gcc, g++ and gfortran compilers, and is the default environment on Setonix.
  • The Cray programming environment has newer versions of the Cray C/C++ and Fortran compilers.
  • The newer Cray C/C++ compiler is now based on Clang, and the command line options have changed accordingly.
  • Pawsey has adopted Spack for assisted software installation, which may also be useful for project groups installing their own software (a sketch follows this list).
  • Pawsey has adopted SHPC to deploy some applications (particularly bioinformatics packages) as container modules, which may also be useful for some project groups.
  • ROCm and HIP should be used in place of CUDA for GPU acceleration. For more information see Porting Cuda Codes to HIP.
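
As a sketch of the Spack workflow mentioned above, the commands below install a hypothetical package; the Spack module version and the package spec are assumptions, so check module avail spack and the How to Install Software page for current details.

  # Illustrative Spack usage; the Spack module version and package spec are placeholders.
  module load spack/0.17.0
  spack info fftw               # show available versions and build variants
  spack install fftw@3.3.10     # build and install the requested version
  spack find fftw               # confirm the installation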

Refer to How to Install Software and SHPC (Singularity Registry HPC) in the Supercomputing Documentation for more detail.

For information specific to Setonix refer to the Compiling section of the Setonix User Guide. 

Submitting and monitoring jobs 

Setonix uses Slurm, the same job scheduling system used on the previous generation of supercomputing systems. Previously, several specific types of computational use cases were supported on Zeus rather than on the main petascale supercomputer, Magnus; these were often pre-processing and post-processing tasks. Such specialised use cases are now supported on Setonix alongside large-scale computational workloads.

Note that separate GPU allocations are used for GPU jobs; these take the form of the base project name with a -gpu suffix. The GPU allocations are only used for submitting and managing GPU jobs in Slurm. Software installations and working data for GPU jobs still use the directories and filesystems associated with the base project name.
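
The following batch script sketch shows how the -gpu suffix is used in practice; the project code, partition name and GPU request options are assumptions made for illustration, and the Example Slurm Batch Scripts for Setonix on GPU Compute Nodes page should be consulted for the recommended options.

  #!/bin/bash -l
  # Hypothetical project code with the -gpu suffix, and an assumed GPU partition name.
  #SBATCH --account=project1234-gpu
  #SBATCH --partition=gpu
  #SBATCH --nodes=1
  # Request one GPU; check the Setonix User Guide for the recommended request syntax.
  #SBATCH --gres=gpu:1
  #SBATCH --time=01:00:00

  # Assumed ROCm version; list available versions with "module avail rocm".
  module load rocm/5.0.2
  srun ./my_gpu_program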

Key changes include:

  • Jobs may share nodes, allowing jobs to request a portion of the cores and memory available on a node (see the sketch after this list).
  • Jobs can still specify exclusive node access where necessary.
  • A partition for longer running jobs is available.
  • Nodes with additional memory are available.
  • A partition for data transfer jobs is available.
  • Job dependencies can be used to combine data transfer and computational jobs to create automated workflows.
  • Partitions for GPU jobs are available.
  • Separate GPU allocations are used to schedule GPU jobs.
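
As an illustration of the node-sharing item above, the sketch below requests only part of a node; the project code, partition name and resource sizes are placeholders.

  #!/bin/bash -l
  # Hypothetical project code and assumed CPU partition name.
  #SBATCH --account=project1234
  #SBATCH --partition=work
  #SBATCH --ntasks=1
  # Request only a portion of the cores and memory on the node;
  # add "#SBATCH --exclusive" instead if whole-node access is required.
  #SBATCH --cpus-per-task=16
  #SBATCH --mem=32G
  #SBATCH --time=04:00:00

  srun --cpus-per-task=16 ./my_postprocessing_step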

For more information refer to Job Scheduling in the Supercomputing documentation.

For information specific to Setonix refer to the Running Jobs section, and particularly for GPU jobs the Example Slurm Batch Scripts for Setonix on GPU Compute Nodes page, of the Setonix User Guide.

Using data throughout the project lifecycle 

When using Pawsey's supercomputing infrastructure, there may be project data that needs to remain available for longer than the 30-day /scratch purge period; for example, a reference dataset that is reused across many computational workflows.

On previous supercomputing systems, such as Topaz, the /group filesystem was used to provide this functionality.

For Setonix, this functionality is provided by the Acacia object storage system.

Key changes include:

  • Jobs should be submitted to the data mover nodes to stage existing project data from Acacia to /scratch if needed at the start of computational workflows (a sketch follows this list).
  • Jobs should be submitted to the data mover nodes to store new project data from /scratch to Acacia if needed following computational jobs.
  • Job dependencies should be used to combine these data movement jobs with computational jobs to create end-to-end automated workflows.
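
A minimal staging sketch is shown below; the project code, partition name, module version, rclone remote ("acacia") and bucket path are all assumptions, and the environment variable $MYSCRATCH is assumed to point to your directory on /scratch.

  #!/bin/bash -l
  # Hypothetical project code and assumed data mover (copy) partition name.
  #SBATCH --account=project1234
  #SBATCH --partition=copy
  #SBATCH --ntasks=1
  #SBATCH --time=02:00:00

  # Assumed rclone module version; the "acacia" remote and bucket must already be
  # configured for your project (see the Acacia documentation).
  module load rclone/1.59.1
  rclone copy acacia:my-bucket/reference-data "$MYSCRATCH"/reference-data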

For more information on using Acacia, refer to the Acacia pages (/wiki/spaces/DATA/pages/54459526) in the Data documentation.

For more information on job dependencies, refer to Example Workflows in the Supercomputing documentation.

Planning your migration

Consider the following steps when planning the migration of your computational workflow:

  1. Log in to Setonix for the first time.
  2. Transfer data onto the new filesystems:
    • Working data should be placed in the new /scratch filesystem.
    • Project data should be placed in the Acacia object store.
  3. Familiarise yourself with the modules and versions available on Setonix.
  4. Install any additional required domain-specific software for yourself or your project group using the /software filesystem.
  5. Prepare job scripts for each step of computational workflows by keeping templates in the /software filesystem, including:
    1. Staging of data from the Acacia object storage or external repositories using the data mover nodes
    2. Pre-processing or initial computational jobs using appropriate partitions
    3. Computationally significant jobs using appropriate partitions
    4. Post-processing or visualisation jobs using appropriate partitions
    5. Transfer of data products from /scratch to the Acacia object store or other external data repositories
  6. Submit workflows to the scheduler, either manually using scripts or through workflow managers, as shown in the sketch below.
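
As an example of submitting such a workflow manually, the sketch below chains staging, computation and storage jobs with Slurm job dependencies; the job script names are hypothetical stand-ins for the templates described in step 5.

  # Chain the workflow stages so that each starts only if the previous one succeeds.
  # The job script names are hypothetical stand-ins for your own templates.
  stage=$(sbatch --parsable stage_from_acacia.sh)
  compute=$(sbatch --parsable --dependency=afterok:"$stage" run_model.sh)
  store=$(sbatch --parsable --dependency=afterok:"$compute" store_to_acacia.sh)
  echo "Submitted jobs: $stage $compute $store"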
