NCMAS

The National Computational Merit Allocation Scheme (NCMAS) is Australia's premier merit-based allocation scheme, spanning the national peak facilities and specialised compute facilities across the nation.

The NCMAS is open to the Australian research community, providing significant amounts of compute time for meritorious research projects.

The NCMAS is administered by the NCMAS secretariat.  

The link to the NCMAS application portal is provided below. Please refer to official NCMAS communications for the opening and closing dates of the call.

Further information is available at https://ncmas.nci.org.au

 

| Milestone | NCMAS |
| --- | --- |
| Call Open | 8 October 2025 |
| Call Close | 31 October 2025 |
| Submissions Portal | https://my.nci.org.au/mancini/ncmas/2026/ |
| Committee Meetings | 11-12 December 2025 |
| Applicants Notified | TBC |

Available Resources in 2026


The NCMAS is one of the Merit Allocation Schemes available on Setonix. Researchers can apply for allocations on Setonix-CPU, Setonix-GPU, and Setonix-Q Pilot. Setonix-Q Pilot is a pilot scheme providing access both to classical computing resources for quantum computing simulation and to quantum computers (QPUs) available through AWS Braket.

Resources available and minimum allocation sizes are presented in Table 1. 


Table 1. Available resources and minimum request size (full-year requests)

National Computational Merit Allocation Scheme

Scheme total capacity: 500M Service Units in total:

  • 325M Service Units (core hours) on Setonix-CPU

  • 160M Service Units on Setonix-GPU

  • 15M Service Units on Setonix-Q Pilot

Minimum request size:

  • 1M Service Units for Setonix-CPU and Setonix-GPU combined

  • 1M Service Units for Setonix-Q Pilot

For Setonix-CPU and Setonix-GPU, there is no maximum limit on the amount of time that can be requested. However, partial allocations may be awarded depending on the availability of and demand for allocations within the scheme and the decisions of the NCMAS Scientific Assessment Committee. Note that 1M core hours in a year is approximately the equivalent of using a single Setonix CPU node continuously (128 cores × 24 hours × 365 days ≈ 1.12M core hours). Applications for such small allocations must specify why access to a supercomputer is necessary for the research; based on the scoring criteria below, such uses of the supercomputer are unlikely to be competitive against applications that demonstrate a need for the supercomputer's high-performance interconnect.

For Setonix-Q Pilot, the minimum allocation is 1M SUs. All Setonix-Q Pilot allocations will consist of:

  • 80% of the allocation provisioned on classical resources (NVIDIA GH200 nodes on Setonix) available for quantum computing simulations, and

  • 20% of the allocation provisioned on the Pawsey Quantum Hub portal which provides access to AWS Braket. 

Example: Allocation of 2M SUs on Setonix-Q Pilot will consist of 1.6M SUs on classical resources for quantum computing simulations, and 0.4M SUs on AWS Braket.
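For projects scripting their budget planning, the fixed 80/20 split above can be expressed as a minimal Python sketch. The function name setonix_q_split is illustrative only, not a Pawsey API:

```python
def setonix_q_split(total_su: float) -> dict:
    """Split a Setonix-Q Pilot allocation into its classical and quantum
    components, following the fixed 80/20 ratio described above.
    Illustrative helper, not a Pawsey API."""
    return {
        "classical_su": 0.8 * total_su,  # NVIDIA GH200 nodes on Setonix
        "quantum_su": 0.2 * total_su,    # AWS Braket via the Pawsey Quantum Hub
    }

# Example from the text: a 2M SU allocation
print(setonix_q_split(2_000_000))
# {'classical_su': 1600000.0, 'quantum_su': 400000.0}
```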

Please see below for an explanation of SUs on Setonix-CPU, Setonix-GPU and Setonix-Q Pilot.

Other, non-Pawsey resources are also available under the NCMAS, including NCI's Gadi supercomputer.

How to estimate a Service Unit request for Setonix-GPU

Researchers planning their migration from NVIDIA-based GPU systems, such as NCI's Gadi, to the AMD-based Setonix-GPU can use the following example strategy to calculate their Service Unit request.

  • Simulation walltime on a single NVIDIA V100 GPU: 1h

  • Safe estimate of Service Unit usage on a single Setonix AMD MI250X GCD: 1 h × (1/2 × 128 SU per GPU-hour) = 64 Service Units. This conservatively assumes one MI250X GCD is at least as fast as one V100, with a GCD charged at half the per-GPU rate (note: each AMD MI250X GPU has 2 GCDs, for a total of 8 GCDs per node).

Please see: https://www.amd.com/en/graphics/server-accelerators-benchmarks 
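A minimal Python sketch of this estimation strategy, assuming (conservatively) a one-to-one V100-to-GCD walltime mapping and the 128 SU per GPU-hour rate from Table 2. The helper estimate_gpu_su is hypothetical:

```python
SU_PER_MI250X_GPU_HOUR = 128  # from Table 2: 1 Setonix GPU / hour = 128 SU
GCDS_PER_MI250X = 2           # each MI250X GPU exposes 2 GCDs

def estimate_gpu_su(v100_hours: float, n_simulations: int) -> float:
    """Conservative SU estimate for moving V100 workloads to Setonix-GPU.

    Assumes one MI250X GCD is at least as fast as one V100, so walltime on
    a GCD is taken equal to the V100 walltime; a GCD is charged at half
    the per-GPU rate."""
    su_per_gcd_hour = SU_PER_MI250X_GPU_HOUR / GCDS_PER_MI250X  # 64 SU
    return v100_hours * su_per_gcd_hour * n_simulations

# Example from the text: a 1-hour V100 simulation costs 64 SU per run on one GCD
print(estimate_gpu_su(v100_hours=1.0, n_simulations=1))  # 64.0
```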

Setonix-Q GH200 architecture

Setonix's NVIDIA GH200 superchips use a different CPU architecture (ARM rather than x86), so their software stack is completely separate from the Setonix-CPU and Setonix-GPU partitions. Codes must be compiled on the compute nodes to ensure architecture compatibility. GH200 nodes also have a unified memory architecture that may require a rewrite or recompilation to achieve optimal efficiency.

Accounting Model


With Setonix, Pawsey is moving from an exclusive node usage accounting model to a proportional node usage accounting model. While the Service Unit (SU) is still mapped to the hourly usage of CPU cores, users are no longer charged for whole nodes irrespective of whether they are fully utilised. Under the proportional node usage model, users are charged only for the portion of a node they requested.

Each CPU compute node of Setonix can run multiple jobs in parallel, submitted by a single user or by many users from any project. This configuration is sometimes called shared access.
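A minimal sketch of what proportional charging means in practice, assuming the Setonix rate of 1 SU per core-hour from Table 2. The function job_cost is illustrative, not Pawsey's accounting code:

```python
CORES_PER_SETONIX_CPU_NODE = 128  # AMD Milan cores per node (Table 2)
SU_PER_CORE_HOUR = 1              # Setonix CPU rate (Table 2)

def job_cost(cores_requested: int, walltime_hours: float) -> float:
    """SU cost under the proportional node usage model: charge only for
    the fraction of the node actually requested."""
    return cores_requested * SU_PER_CORE_HOUR * walltime_hours

# A 32-core, 10-hour job costs 320 SU; under an exclusive-node model the
# same job would be charged for the full node: 128 * 1 * 10 = 1280 SU.
print(job_cost(32, 10))  # 320
```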

For Setonix-CPU, Setonix-GPU and Setonix-Q Classical, a project that has entirely consumed its Service Units (SUs) for a given quarter of the year will run its jobs in a low-priority mode, called extra, for the remainder of that quarter.

Pawsey's accounting model bases the GPU charging rate on energy consumption. This approach, designed for Setonix, has advantages over other models: it introduces carbon footprint as a primary driver in determining where computational workflows run on heterogeneous resources.

Pawsey and NCI use slightly different accounting models. Researchers applying for allocations on both Setonix and Gadi should refer to Table 2 when calculating their allocation requests.

The budget model of Setonix-Q Pilot differs from the other schemes in that it is explicitly broken into classical and quantum components.

  • The classical component of the allocation provides access to the Setonix-Q Pilot NVIDIA GH200 nodes and follows Setonix-CPU/GPU in how current and total budgets are allocated, queried and tracked. Allocations are split quarterly, and usage can be viewed using Origin. As with Setonix-CPU/GPU, usage can exceed 100%, with excess jobs running at a lower priority. The resource cost per SU also follows the energy-based charging of Setonix-GPU.

  • The quantum component of the allocation is provided for the entire year, and usage cannot exceed 100%. Once it is consumed, members of a project will no longer be able to run quantum computing jobs through the Pawsey Quantum Hub portal, which provides access to AWS Braket. Allocation and usage are tracked via the projects dashboard at hub.quantum.pawsey.org.au. Additionally, the SU cost is variable, depends on the QPU device used, and can change without notice. We will endeavour to communicate changes but cannot guarantee advance warning of SU cost changes.

Table 2. Setonix and Gadi service unit models

Node configurations: Gadi: CPU: 48 Intel Cascade Lake cores per node; GPU: 4 NVIDIA V100 GPUs per node. Setonix: CPU: 128 AMD Milan cores per node; GPU: 4 AMD MI250X GPUs per node; Q: 4 NVIDIA GH200 superchips per node.

| Resources used | Gadi (Service Units) | Setonix (Service Units) |
| --- | --- | --- |
| 1 CPU core / hour | 2 | 1 |
| 1 CPU / hour | 48 | 64 |
| 1 CPU node / hour | 96 | 128 |
| 1 GPU / hour | 36* | 128 |
| 1 GPU node / hour | 144* | 512 |
| 1 Setonix-Q GPU / hour | n/a | 256 |
| 1 Setonix-Q GPU node / hour | n/a | 1024 |
| 100 shots on AWS QPUs | n/a | 16-342 (more information in Table 3) |

* Calculated based on https://opus.nci.org.au/display/Help/2.2+Job+Cost+Examples for the gpuvolta queue.
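For planning requests across both systems, the Table 2 rates can be encoded in a small lookup. The RATES dictionary and su_request helper below are illustrative only:

```python
# SU charging rates from Table 2 (SU per hour of each resource).
# Gadi GPU rates are the gpuvolta-queue estimates marked * above.
RATES = {
    "gadi":    {"cpu_core": 2, "cpu": 48, "cpu_node": 96,
                "gpu": 36, "gpu_node": 144},
    "setonix": {"cpu_core": 1, "cpu": 64, "cpu_node": 128,
                "gpu": 128, "gpu_node": 512,
                "q_gpu": 256, "q_gpu_node": 1024},
}

def su_request(system: str, resource: str, hours: float, count: int = 1) -> float:
    """SU needed for `count` units of `resource` used for `hours` on `system`."""
    return RATES[system][resource] * hours * count

# The same 1000 CPU-node-hour workload costs different SU amounts:
print(su_request("gadi", "cpu_node", 1000))     # 96000.0 SU on Gadi
print(su_request("setonix", "cpu_node", 1000))  # 128000.0 SU on Setonix
```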

Table 3. Devices on AWS Braket have different charging models and costs per shot (status as of August 2025)

| Device | Number of Qubits | Type | SU cost of 100 shots |
| --- | --- | --- | --- |
| IonQ Aria1, Aria2 | 25 | Universal gate-model ion-trap QPU with error mitigation support | 232 |
| IonQ Forte1 | 36 | Universal gate-model ion-trap QPU with error mitigation support | 342 |
| IQM Garnet | 20 | Universal gate-model superconducting QPU | 18 |
| IQM Emerald | 56 | Universal gate-model superconducting QPU | 19 |
| QuEra Aquila | 256 | Analog neutral-atom-based configurable QPU | 52 |
| Rigetti Ankaa-3 | 82 | Universal gate-model superconducting QPU | 16 |
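When budgeting the quantum component of a Setonix-Q Pilot allocation, the per-shot costs above can be turned into a simple estimator. A sketch, assuming the August 2025 rates (which can change without notice); qpu_budget is an illustrative helper:

```python
# SU cost per 100 shots on each AWS Braket device (Table 3, August 2025).
SU_PER_100_SHOTS = {
    "IonQ Aria1/Aria2": 232,
    "IonQ Forte1": 342,
    "IQM Garnet": 18,
    "IQM Emerald": 19,
    "QuEra Aquila": 52,
    "Rigetti Ankaa-3": 16,
}

def qpu_budget(device: str, shots: int) -> float:
    """SU required to run `shots` shots on `device`."""
    return SU_PER_100_SHOTS[device] * shots / 100

# 10,000 shots on IQM Garnet would consume 1800 SU of the quantum component.
print(qpu_budget("IQM Garnet", 10_000))  # 1800.0
```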

Assessment Criteria


Criterion 1: Project quality and innovation

  • Significance of the research

  • Originality and innovative nature of the computational framework

  • Advancement of knowledge through the goals of the proposed research

  • Potential for the research to contribute to Australian science, research and innovation priorities

Criterion 2: Investigator records

  • Research record and performance relative to opportunity (publications, research funding, recognition and esteem metrics)

Criterion 3: Computational feasibility

  • Adequacy of the time commitment of investigators to undertake the research and utilise the resources successfully

  • Suitability of the system to support the research, and appropriate and efficient use of the system

  • Capacity to realise the goals of the project within the resources requested

  • Appropriate track record in the use of high-performance computing systems, relative to the scale of the resources requested

Criterion 4: Benefit and impact

  • The ability of the project to generate impactful outcomes and produce innovative economic, environmental and social benefits to Australia and the international community

Data Storage and Management 


By default, each supercomputing project will be allocated 0.5 terabytes of project storage on Acacia. Additional space can be requested through the Pawsey Partner submission form or Pawsey's Help Desk and will be provided subject to availability. In line with Pawsey's Data Storage and Management Policy, data will normally be held by the Pawsey Supercomputing Centre only for the duration of the research project. In addition, researchers can apply for managed storage allocations separately from the NCMAS. Managed storage is intended for larger data collections with demonstrable research value, managed according to a curated lifecycle plan.

Application Form and Process


Applications to the NCMAS are made via the online form at the NCMAS website: https://ncmas.nci.org.au

No Partner Top-up Allocations


Starting from the 2023 allocation round, there are no Pawsey Partner top-up allocations. Researchers can apply to both the NCMAS and the Pawsey Partner Scheme, subject to the eligibility requirements and conditions of those schemes.
