Most important changes

  • From the 2023 allocation round, submissions to NCMAS and the Pawsey Partner Scheme can include both Setonix CPU and Setonix GPU requests.
  • The minimum allocation request size is now 1M Service Units.
  • The accounting model for Setonix now defines Service Units for both the Setonix CPU and GPU partitions.
  • There are no Pawsey Partner top-up allocations from the 2023 allocation round onwards. Researchers can apply to both NCMAS and the Pawsey Partner Scheme, subject to the eligibility requirements and conditions of these schemes.
  • The new Preparatory Access Scheme is available for researchers who need to port and benchmark their codes and workflows before applying to one of the merit allocation schemes.

...

Table 1. Resources available on Setonix for the 2024 allocation round

National Computational Merit Allocation Scheme (full year)

  • Scheme total capacity: 485M Service Units in total:
      • 325M Service Units (core hours) on Setonix-CPU
      • 160M Service Units on Setonix-GPU
  • Minimum request size: 1M Service Units

Pawsey Partner Merit Allocation Scheme (full year)

  • Scheme total capacity: 575M Service Units in total:
      • 385M Service Units (core hours) on Setonix-CPU
      • 190M Service Units on Setonix-GPU
  • Minimum request size: 1M Service Units

The Accounting Model

With Setonix, Pawsey is moving from an exclusive node usage accounting model to a proportional node usage accounting model. While the Service Unit (SU) is still mapped to the hourly usage of CPU cores, users are no longer charged for whole nodes irrespective of whether they have been fully utilised. Under the proportional node usage accounting model, users are charged only for the portion of a node they have requested.

Each CPU compute node of Setonix can run multiple jobs in parallel, submitted by one or many users, from any project. This configuration is sometimes called shared access.
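Under the proportional model described above, a job's charge can be sketched as the number of cores requested multiplied by the hours used. The function below is illustrative only (the name and the cores-times-hours formula are assumptions; actual Pawsey charging may include further factors such as memory share):

```python
def job_service_units(cores_requested: int, hours: float) -> float:
    """Sketch of proportional node usage charging on Setonix-CPU.

    Assumes 1 SU = 1 core hour and that only the requested cores are
    charged; illustrative, not the official Pawsey accounting code.
    """
    return cores_requested * hours

# A 32-core job running for 10 hours on a 128-core Setonix node is
# charged 32 * 10 = 320 SUs under the proportional model, rather than
# the whole-node 128 * 10 = 1280 SUs of an exclusive model.
print(job_service_units(32, 10))
```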

A project that has entirely consumed its Service Units (SUs) for a given quarter of the year will run its jobs in a low-priority mode, called extra, for the remainder of that quarter. Furthermore, once its Service Unit consumption for that quarter reaches 150% of the quarterly allocation, users of that project will not be able to run any more jobs for that quarter.
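The quarterly rules above can be summarised as two thresholds. The sketch below encodes them; the function name and the status labels are illustrative, not a Pawsey or Slurm API:

```python
def quarterly_job_status(used_su: float, quarterly_allocation_su: float) -> str:
    """Classify a project's scheduling state for the current quarter.

    At or beyond 100% of the quarterly allocation, jobs run in the
    low-priority "extra" mode; at or beyond 150%, no further jobs run.
    Illustrative sketch of the rules described above.
    """
    usage = used_su / quarterly_allocation_su
    if usage >= 1.5:
        return "blocked"
    if usage >= 1.0:
        return "extra (low priority)"
    return "normal"

print(quarterly_job_status(120, 100))
```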

The Pawsey accounting model bases the GPU charging rate on energy consumption. This approach, designed for Setonix, has significant advantages over other models: it introduces carbon footprint as a primary driver in determining how computational workflows are allocated across heterogeneous resources.

Pawsey and NCI use slightly different accounting models. Researchers applying for allocations on both Setonix and Gadi should refer to Table 2 when calculating their allocation requests.

Table 2. Setonix and Gadi service unit models

Gadi

  • CPU: 48 Intel Cascade Lake cores per node
  • GPU: 4 NVIDIA V100 GPUs per node

Setonix

  • CPU: 128 AMD Milan cores per node
  • GPU: 4 AMD MI250X GPUs per node

Resources used       Service Units (Gadi)   Service Units (Setonix)
1 CPU core / hour    2                      1
1 CPU / hour         48                     64
1 CPU node / hour    96                     128
1 GPU / hour         36*                    128
1 GPU node / hour    144*                   512

* Gadi GPU values calculated based on https://opus.nci.org.au/display/Help/2.2+Job+Cost+Examples for the gpuvolta queue.

How to estimate a request for Setonix-GPU?
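The Setonix rates in Table 2 can be combined to size a full allocation request. The sketch below (all names are illustrative) totals CPU core hours and GPU hours into Service Units and checks the 1M SU minimum request size:

```python
# Setonix charging rates from Table 2 (Service Units per hour).
SETONIX_SU_RATES = {
    "cpu_core_hour": 1,   # 1 CPU core / hour
    "gpu_hour": 128,      # 1 MI250X GPU / hour
}

MINIMUM_REQUEST_SU = 1_000_000  # minimum allocation request size

def setonix_request_su(cpu_core_hours: float, gpu_hours: float) -> float:
    """Total Service Units for a planned mix of CPU and GPU usage."""
    return (cpu_core_hours * SETONIX_SU_RATES["cpu_core_hour"]
            + gpu_hours * SETONIX_SU_RATES["gpu_hour"])

# Example: 800,000 CPU core hours plus 2,000 GPU hours comes to
# 800000 * 1 + 2000 * 128 = 1,056,000 SUs, which meets the 1M minimum.
total = setonix_request_su(800_000, 2_000)
print(total, total >= MINIMUM_REQUEST_SU)
```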
Setonix-GPU migration pathway

Researchers planning their migration from NVIDIA-based GPU systems, such as Pawsey's Topaz and NCI's Gadi, to the AMD-based Setonix-GPU should use the following example strategy to calculate their Service Unit request.

  • Simulation walltime on a single NVIDIA V100 GPU: 1 hour
  • Safe estimate of Service Unit usage on a single Setonix AMD MI250X GPU: 1 h × 1/2 × 128 = 64 Service Units

Please see: https://www.amd.com/en/graphics/server-accelerators-benchmarks 
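The estimation strategy above can be sketched as a small helper. The function name and default speedup factor are assumptions for illustration (a 2× MI250X-over-V100 speedup is used as the "safe estimate", with the 128 SU per GPU hour rate from Table 2):

```python
def estimate_setonix_gpu_su(v100_walltime_hours: float,
                            speedup_vs_v100: float = 2.0,
                            su_per_mi250x_hour: int = 128) -> float:
    """Estimate the Setonix-GPU SU cost of a job benchmarked on a V100.

    Assumes the MI250X reduces the V100 walltime by speedup_vs_v100
    (2x by default, as in the safe estimate above) and charges
    su_per_mi250x_hour per GPU hour, following Table 2.
    """
    return v100_walltime_hours / speedup_vs_v100 * su_per_mi250x_hour

# 1 hour on a V100 -> 1 * 1/2 * 128 = 64 Service Units on Setonix-GPU
print(estimate_setonix_gpu_su(1.0))
```

Multiply the per-run estimate by the planned number of runs (and GPUs per run) to arrive at the total Service Units to request.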

The Setonix’s AMD MI250X GPUs have a very specific migration pathway related to CUDA to HIP and OpenACC to OpenMP conversions. Pawsey is working closely with research groups within PaCER project (https://pawsey.org.au/pacer/) and with vendors to further extend the list of supported codes. 

Please see: https://www.amd.com/en/technologies/infinity-hub  


Related pages