Guides per Supercomputer


Pawsey manages a number of computing systems that differ in hardware, software and purpose. This page collects information about our current and past systems.

Overview

Pawsey hosts several supercomputing systems and clusters to meet the needs of its diverse pool of users. Because the supercomputing and research landscapes evolve continuously, systems are commissioned and decommissioned on a regular basis, so it is quite common for a centre to have multiple systems running at the same time. Each system has a different purpose and may target a different user base.

Under this page you will find a set of user guides, one for each system, covering system-specific configuration, supported software and hardware capabilities. The other pages under Supercomputing Documentation provide information that applies generally across systems.

Current systems

These are the systems that Pawsey currently operates and to which users can request access.

  • Setonix: Setonix is the flagship supercomputer of the Pawsey Supercomputing Research Centre and is based on the HPE Cray EX architecture. In November 2022 it ranked 15th on the Top500 list of the world's largest supercomputers and 4th on the Green500 list of the most energy-efficient systems. A significant portion of Setonix's processing capability comes from its AMD MI250X GPUs.
  • Topaz: Refer to the Topaz User Guide for details of this system.

  • Garrawarla: The Pawsey Supercomputing Centre installed a GPU-enabled system called Garrawarla, a Wajarri word meaning spider, to enable Murchison Widefield Array (MWA) researchers to produce scientific outcomes while the next Pawsey supercomputing system was being procured. This MWA compute cluster provides the latest generation of CPUs and GPUs, high memory bandwidth, and increased memory per node, allowing MWA researchers to effectively process large datasets.

Guides

Pages in this section:

Past systems

For reference and record-keeping, here is the list of decommissioned systems that Pawsey operated in the past.

Magnus

Activity period: 2013 - 2022

Magnus was a Cray XC40 supercomputer: a massively parallel architecture of 1,488 nodes connected in a Dragonfly network topology by the HPC-optimised Aries interconnect. Each compute blade had a single Aries ASIC, which provided about 72 Gbit/s to each of the four nodes on the blade. All compute nodes in Magnus had the same architecture: two Intel Xeon E5-2690 v3 (Haswell) 12-core CPUs, for 24 cores per node and a total of 35,712 cores across the system. Each node had 64 GB of DDR4 memory shared between its 24 cores. Each core had 32 KB instruction and data caches and a 256 KB L2 cache, and the 12 cores of each socket (one NUMA region) shared a 30 MB L3 cache. In total, the system had 93 terabytes of memory.
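
As a quick sanity check, the totals above follow directly from the per-node figures. The short sketch below is not part of the original documentation; it simply re-derives the quoted numbers using only the figures on this page:

    # Re-derive the Magnus totals quoted above from the per-node figures.
    nodes = 1488              # Cray XC40 compute nodes
    cores_per_node = 24       # two 12-core Xeon E5-2690 v3 CPUs per node
    mem_per_node_gb = 64      # DDR4 memory per node

    total_cores = nodes * cores_per_node    # 35,712 cores, as stated above
    total_mem_gb = nodes * mem_per_node_gb  # 95,232 GB
    total_mem_tib = total_mem_gb / 1024     # ~93 TiB, matching the "93 terabytes" figure

    print(total_cores, total_mem_gb, round(total_mem_tib))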

Galaxy

Activity period: 2013 - 2022

Galaxy was a Cray XC30 supercomputer. It was available only for radio-astronomy operations; in particular, it supported ASKAP and the MWA, two of the Square Kilometre Array precursor projects underway in the northwest of Western Australia. For ASKAP, Galaxy acted as a real-time computer, allowing direct processing of data delivered to the Pawsey Centre from the Murchison Radio-astronomy Observatory.

Zeus

Activity period: 2014 - 2022

Zeus was an HPE Linux cluster containing several types of CPU nodes to support different computational workflows (summarised in the sketch after this list):

  • For high-throughput workflows requiring large numbers of jobs that each use a modest number of cores:
    An 80-node partition, workq, was available. Each node had two Intel Xeon E5-2680 v4 2.4 GHz (Broadwell) 14-core CPUs and 128 GB of RAM.
  • For long-running workflows requiring several days to complete:
    An 8-node partition, longq, was available for computationally intensive jobs with wall times of up to 4 days. Each node had 28 cores and 128 GB of RAM.
  • For large-memory workflows requiring more than 128 GB of RAM:
    A 6-node partition, highmemq, was available for memory-intensive jobs. Each node had 16 cores and 1 TB of RAM.
  • For debugging and development work:
    An 8-node partition, debugq, was available for code development and prototyping. Each node had 28 cores and 128 GB of RAM.
  • For data transfer jobs:
    An 8-node partition, copyq, was available for copying and transferring data. Each node had two Intel Xeon E5-2650 (Sandy Bridge) 8-core CPUs and 64 GB of RAM.
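
The sketch below summarises the partition layout above. It is not an official Pawsey tool: the ZEUS_PARTITIONS dictionary, the suggest_partition helper and its selection rules are invented for illustration, and only the node counts, core counts and memory sizes come from this page.

    # Hypothetical summary of the Zeus partitions described above; not an official tool.
    ZEUS_PARTITIONS = {
        "workq":    {"nodes": 80, "cores_per_node": 28, "ram_gb": 128,  "purpose": "high-throughput jobs"},
        "longq":    {"nodes": 8,  "cores_per_node": 28, "ram_gb": 128,  "purpose": "wall times up to 4 days"},
        "highmemq": {"nodes": 6,  "cores_per_node": 16, "ram_gb": 1024, "purpose": "memory-intensive jobs"},
        "debugq":   {"nodes": 8,  "cores_per_node": 28, "ram_gb": 128,  "purpose": "development and prototyping"},
        "copyq":    {"nodes": 8,  "cores_per_node": 16, "ram_gb": 64,   "purpose": "data transfer"},
    }

    def suggest_partition(ram_gb_needed: int, long_running: bool = False) -> str:
        """Rough illustration of how compute workloads were split across partitions."""
        if ram_gb_needed > 128:
            return "highmemq"  # the only partition with more than 128 GB of RAM per node
        if long_running:
            return "longq"     # computationally intensive jobs with wall times up to 4 days
        return "workq"         # default high-throughput partition

    print(suggest_partition(ram_gb_needed=512))                     # highmemq
    print(suggest_partition(ram_gb_needed=64, long_running=True))   # longq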
