
Note: Work in Progress for Phase-2 Documentation

The content of this section is being updated to include material relevant to Phase 2 of Setonix and the use of GPUs.
All existing material related to Phase 1 and the use of CPU compute nodes remains valid and up to date.


The Pawsey Supercomputing Research Centre provides access to a number of different supercomputing systems for the Australian research and industry communities, as well as international collaborators. This page provides details of these resources, including supercomputing systems and high-performance filesystems.

...

The Pawsey Supercomputing Research Centre operates several supercomputing systems, which physically reside at the Centre and are closely integrated with its other infrastructure.

Setonix


[Image: A picture of Setonix]

Key characteristics

  • Phase 1 provides more than 2 PFLOPs of computing power through CPU-only nodes based on the AMD EPYC Zen 3 architecture.
  • Phase 2 will extend Setonix to include additional CPU nodes as well as a large GPU node partition based on the AMD Instinct MI250X architecture, for a total of 43 PFLOPs of theoretical peak performance (8 PFLOPs in the CPU-only nodes and 35 PFLOPs in the GPU nodes).

Setonix has been recognised as one of the greenest supercomputers in the world, ranking in the top 5 of the globally recognised Green500 list. Setonix was also named the most powerful public research supercomputer in the Southern Hemisphere, ranking 15th in the global Top500 list and 10th in the HPL Mixed-Precision Benchmark in November 2022.

 

More information at the Setonix User Guide.

Topaz

...


Key characteristics

  • An NVIDIA GPU cluster for accelerated workflows.
  • 22 compute nodes with 2 Volta GPUs each.
  • 11 compute nodes, each with 4 P100 GPUs interconnected with NVLink technology.

Read more at Topaz User Guide.


Garrawarla

...

User Guide

...


Key characteristics

  • 78 nodes each with 40 Intel CPU cores and one NVIDIA V100 GPU.
  • 756 TFLOPs of computing power.
  • Dedicated to the MWA organisation.

More information at /wiki/spaces/US/pages/51931262

Other systems

The following systems are in the process of being decommissioned and new users won't be able to access them.

...


All of these systems physically reside at the Pawsey Supercomputing Centre and are closely integrated with other Pawsey infrastructure.


Filesystems

There are a number of filesystems mounted by Pawsey supercomputers:

...

/astro is a filesystem that supports the operation of the MWA radio telescope.

While Pawsey migrates from the older systems to Setonix, the following points hold:

  • The /group filesystem is being decommissioned. It is a mid-tier filesystem intended for actively used files needed for the length of a project. It remains available on Topaz until Topaz is decommissioned, is currently mounted on Magnus in read-only mode, and is accessible from the data mover nodes of Setonix. All projects should store data on Acacia when using Setonix.
  • Setonix mounts different /home and /scratch filesystems than Garrawarla and Topaz. During migration, the legacy /home is mounted as /oldhome and /group as /oldgroup on the Setonix data mover nodes (see the sketch after this list).
  • Eventually, Topaz will be decommissioned.
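
As an illustration of the migration layout above, here is a minimal sketch of copying a legacy dataset from /oldgroup to /scratch. It assumes Python is available where /oldgroup is mounted (the data mover nodes); the project, user, and dataset names are hypothetical placeholders.

```python
import shutil
from pathlib import Path

# Minimal sketch, assuming the migration layout described above: the legacy
# /group filesystem is visible as /oldgroup on the Setonix data mover nodes.
# "projectcode", "username", and "dataset" are hypothetical placeholders.
src = Path("/oldgroup/projectcode/dataset")
dst = Path("/scratch/projectcode/username/dataset")

dst.parent.mkdir(parents=True, exist_ok=True)  # create the target directory
shutil.copytree(src, dst, dirs_exist_ok=True)  # copy the whole directory tree
print(f"Copied {src} -> {dst}")
```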

Radioastronomy HPC filesystems

  • /askapbuffer is a filesystem that supports the operation of the ASKAP radio telescope.


For more information about filesystems and file management, see File Management.

...

Each project and each user has access to a long-term storage solution for their data, implemented via the Acacia object storage. Supercomputing users transfer data to and from Acacia using dedicated commands already installed on the supercomputers. For more information, visit the Acacia User Guide.
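
As an illustration, the following is a minimal sketch of moving data to and from Acacia, assuming the object store exposes an S3-compatible interface that standard clients such as boto3 can talk to. The endpoint URL, bucket name, credentials, and file paths are hypothetical placeholders; the Acacia User Guide documents the supported commands and the actual endpoint.

```python
import boto3

# Minimal sketch, assuming an S3-compatible endpoint for Acacia.
# Endpoint, credentials, bucket, and paths are hypothetical placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://acacia.example.org",  # assumed endpoint URL
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Push a result archive from /scratch to long-term storage...
s3.upload_file("/scratch/projectcode/results.tar", "my-bucket", "results.tar")
# ...and retrieve it again later.
s3.download_file("my-bucket", "results.tar", "results.tar")
```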

HPC software

Pawsey installs and supports a range of software packages, which can be managed and accessed through the Modules system. Users can install additional software through Spack, manual builds, or Containers. For more information, visit the Software Stack page.
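
As a quick illustration, the sketch below drives the Modules system from Python. It assumes module is a shell function initialised by the login profile (hence the login shell via bash -lc), and the module name passed to module load is a hypothetical placeholder; run module avail to see what is actually installed.

```python
import subprocess

# Minimal sketch: "module" is usually a shell function set up by the login
# profile, so commands are run through a login shell. The module name
# "python/3.10" is a hypothetical placeholder.
def run_in_login_shell(cmd: str) -> str:
    result = subprocess.run(["bash", "-lc", cmd],
                            capture_output=True, text=True)
    return result.stdout + result.stderr  # "module" often writes to stderr

print(run_in_login_shell("module avail 2>&1 | head -n 20"))
print(run_in_login_shell("module load python/3.10 && python --version"))
```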

...