Warning: Work in Progress for Phase-2 Documentation
The content of this section is currently being updated to provide material relevant for Phase 2 of Setonix and the use of GPUs, which are expected to become available soon to Pawsey projects with Setonix GPU allocations.
Note that Magnus, Zeus and Topaz and their associated file systems have been decommissioned and are no longer available.
This page is intended to help users of previous Pawsey GPU supercomputing infrastructure (such as Topaz) to transition to using the Setonix supercomputer.
...
Throughout this guide, links are provided to relevant pages in the general Supercomputing Documentation and in the Setonix User Guide, which provides documentation specifically for using GPUs on the Setonix system. The Setonix GPU Partition Quick Start also provides specific details for using the GPUs in Setonix.
This guide has been updated in preparation for the migration of GPU projects on Topaz to Setonix Phase 2.
...
Setonix Phase 2 GPUs replace Pawsey's previous generation of GPU infrastructure, specifically the Topaz GPU cluster and associated filesystems. This migration guide has been updated to outline changes for researchers transitioning from Topaz to Setonix Phase 2.
Significant changes to the GPU compute architecture include:
...
The Setonix operating system and environment will be a newer version of the Cray Linux Environment familiar to users of Magnus and Galaxy. It will also include scheduling features previously provided separately on Zeus and Topaz. This will enable the creation of end-to-end workflows running on Setonix, as detailed in the following sections.
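As a hedged illustration of such an end-to-end workflow, the sketch below chains a pre-processing step, a GPU computation and a post-processing step using standard Slurm job dependencies. The script names and the "gpu" partition label are placeholders, not confirmed Setonix settings:

```bash
# Illustrative only: chain three workflow stages with Slurm job
# dependencies, so each stage starts only if the previous succeeded.
# Script names and the "gpu" partition label are assumed placeholders.
jobid1=$(sbatch --parsable preprocess.sh)
jobid2=$(sbatch --parsable --dependency=afterok:${jobid1} --partition=gpu compute_gpu.sh)
sbatch --dependency=afterok:${jobid2} postprocess.sh
```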
Supercomputing filesystems
There are several new filesystems that will be available with the Setonix supercomputer.
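As a rough sketch, assuming the Pawsey convention of per-user directories on /scratch and /software and the $MYSCRATCH and $MYSOFTWARE convenience variables (check the filesystem documentation for the confirmed layout):

```bash
# Assumed layout and variables; verify against the Filesystems documentation.
echo "$MYSCRATCH"    # e.g. /scratch/<project>/<username>  (working data, subject to purge)
echo "$MYSOFTWARE"   # e.g. /software/projects/<project>/<username>  (installed software)
cd "$MYSCRATCH"      # run jobs from /scratch, not from /home
```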
...
For information specific to Setonix refer to the Filesystems and data management section of the Setonix User Guide.
Loading modules and using containers
The software environment on Setonix is provided by a module environment very similar to that of the previous supercomputing systems.
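For example, the familiar module commands continue to work as before; the module name and version shown here are illustrative only:

```bash
module avail                 # list the software available on Setonix
module load python/3.10.10   # load a specific version (hypothetical example)
module list                  # confirm what is currently loaded
```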
...
For containers, researchers can continue to use Singularity in a similar way to previous systems. Some system-wide installations (in particular, for bioinformatics) are now provided as container modules using SHPC: these packages are installed as containers, but the user interface is the same as for compiled applications (load the module, run the executables).
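A minimal sketch of using such a container module, assuming a hypothetical samtools installation; the tool name and version are illustrative:

```bash
# The container is hidden behind an ordinary module interface.
module load samtools/1.15    # hypothetical container module
samtools --version           # the executable runs inside the container transparently
```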
A library of GPU-enabled containers that support the AMD MI250X GPUs is available from the AMD Infinity Hub. Note that these containers may be limited in parallelism to one node, or one GPU, depending on the particular software.
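A hedged example of running such a container directly with Singularity; the --rocm flag exposes the AMD GPUs inside the container, and the image and command names are placeholders:

```bash
# Image and command are placeholders for software pulled from the AMD Infinity Hub.
singularity exec --rocm my_gpu_app.sif ./run_simulation
```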
Key changes to the software environment include:
...
For information specific to Setonix refer to the Software Environment section of the Setonix User Guide.
Installing and maintaining your software
The Setonix supercomputer has a different hardware architecture to previous supercomputing systems, and the compilers and libraries available may have changed or have newer versions. It is strongly recommended that project groups reinstall any necessary domain-specific software. This is also an opportunity for project groups to review the software in use and consider updating to recent versions, which typically contain newer features and improved performance.
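As an illustrative sketch, recompilation on a Cray system typically goes through the compiler wrappers (cc, CC, ftn), which pick up the compiler and libraries from the loaded programming environment; the module and file names below are assumptions:

```bash
module load PrgEnv-gnu          # or another programming environment module
cc -O2 -o my_app my_app.c       # wrapper invokes the underlying C compiler
ftn -O2 -o my_sim my_sim.f90    # Fortran equivalent (file names are placeholders)
```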
...
For information specific to Setonix refer to the Compiling section of the Setonix User Guide.
Submitting and monitoring jobs
Setonix uses Slurm, the same job scheduling system used on the previous generation of supercomputing systems. Previously, several specific types of computational use cases were supported on Zeus rather than on the main petascale supercomputer, Magnus. Such use cases were often pre-processing and post-processing steps. These specialised use cases are now supported on Setonix alongside large-scale computational workloads.
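A minimal, illustrative GPU batch script is sketched below. The partition name, the "-gpu" account suffix and the GPU request syntax are assumptions; consult the Setonix documentation for the confirmed form:

```bash
#!/bin/bash --login
#SBATCH --account=project1234-gpu   # assumed account format
#SBATCH --partition=gpu             # assumed GPU partition name
#SBATCH --nodes=1
#SBATCH --gres=gpu:1                # request one GPU (exact syntax may differ)
#SBATCH --time=01:00:00

srun ./my_gpu_program               # placeholder executable
```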
...
For information specific to Setonix refer to the Running Jobs section and, particularly for GPU jobs, the Example Slurm Batch Scripts for Setonix on GPU Compute Nodes page of the Setonix User Guide.
Using data throughout the project lifecycle
When using Pawsey's supercomputing infrastructure, there may be project data that needs to remain available for longer than the 30-day /scratch purge policy allows; for example, a reference dataset that is reused across many computational workflows.
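For instance, such a reference dataset could be kept on the Acacia object store and staged onto /scratch at the start of a workflow. The sketch below assumes an rclone remote named "acacia" has already been configured, and the bucket and paths are placeholders:

```bash
# Stage reference data from object storage onto /scratch before a run.
# Remote, bucket and destination path are all assumed placeholders.
rclone copy acacia:my-bucket/reference-data "$MYSCRATCH/reference-data"
```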
...
For more information on using Acacia, refer to the Acacia Early Adopters - User Guide (/wiki/spaces/DATA/pages/54459526) in the Data documentation.
...