NCMAS


The National Computational Merit Allocation Scheme (NCMAS) is Australia’s premier meritorious allocation scheme, spanning both the national peak facilities and specialised compute facilities across the nation.

The NCMAS is open to the Australian research community, providing significant amounts of compute time for meritorious research projects.

The NCMAS is administered by the NCMAS secretariat.  

Please find below the link to the NCMAS application portal. Please refer to the official NCMAS communication about the opening and closing dates of the call. 

Further information is available at https://ncmas.nci.org.au.


Milestone              NCMAS
Call Open              TBA
Submissions Portal     https://my.nci.org.au/mancini/ncmas/2024/
Committee Meetings     TBA
Applicants Notified    TBA

Available Resources in 2025


The NCMAS is one of the Merit Allocation Schemes available on Setonix. Researchers can apply for allocations on Setonix CPU and Setonix GPU. 

The available resources and minimum request sizes are presented in Table 1.


Table 1. Available resources and minimum request size

Scheme: National Computational Merit Allocation Scheme

Request (full year):
  • Scheme total capacity: 520M Service Units in total
      • 350M Service Units (core hours) on Setonix-CPU
      • 170M Service Units on Setonix-GPU
  • Minimum request size: 1M Service Units

There is no maximum limit to the amount of time that can be requested. However, partial allocations may be awarded depending on the availability and demand for allocations within the scheme.

Note that 1M core hours over a year is approximately equivalent to using a single Setonix CPU node. Applications for such small allocations must explain why access to a supercomputer is necessary for the research; based on the scoring criteria below, such uses are unlikely to be competitive against applications that demonstrate a genuine need for the supercomputer's expensive high-speed interconnect.
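
As a rough cross-check of that node-year figure (a sketch only, assuming the 128 cores per Setonix CPU node listed in Table 2 and 1 Service Unit per core hour on Setonix-CPU):

    # Rough arithmetic only, not an official Pawsey figure.
    cores_per_node = 128
    hours_per_year = 24 * 365
    node_year_core_hours = cores_per_node * hours_per_year  # 1,121,280 core hours
    print(f"One Setonix CPU node-year ≈ {node_year_core_hours / 1e6:.2f}M core hours")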

Other non-Pawsey resources are available under the NCMAS, including NCI's Gadi supercomputer.


How to estimate a Service Unit request for Setonix-GPU?

Researchers planning a migration from NVIDIA-based GPU systems, such as NCI's Gadi, to the AMD-based Setonix-GPU can use the following example strategy to estimate their Service Unit request.

  • Simulation walltime on a single NVIDIA V100 GPU: 1h
  • Safe estimate of Service Unit usage on a single AMD MI250X GCD on Setonix: 1h * 1/2 * 128 = 64 Service Units (note: each AMD MI250X GPU has 2 GCDs, for a total of 8 GCDs per node); see the sketch below

Please see: https://www.amd.com/en/graphics/server-accelerators-benchmarks 
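
The estimate above can be written out as a short Python sketch. The rate of 128 Service Units per MI250X GPU hour is taken from Table 2; the helper name is illustrative only, not an official Pawsey calculator.

    SU_PER_MI250X_GPU_HOUR = 128   # from Table 2; each MI250X GPU contains 2 GCDs
    GCD_SHARE_OF_GPU = 0.5         # a job on one GCD is charged half the per-GPU rate

    def estimate_setonix_gpu_su(v100_walltime_hours, n_gcds=1):
        # Conservative assumption: one MI250X GCD is no slower than one V100,
        # so the V100 walltime is kept unchanged and charged at the per-GCD rate.
        return v100_walltime_hours * n_gcds * GCD_SHARE_OF_GPU * SU_PER_MI250X_GPU_HOUR

    # Example from the text: a 1-hour V100 job -> 1 * 0.5 * 128 = 64 Service Units
    print(estimate_setonix_gpu_su(1.0))   # 64.0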

Setonix-GPU migration pathway

Setonix's AMD MI250X GPUs have a specific migration pathway involving CUDA-to-HIP and OpenACC-to-OpenMP conversions. Pawsey is working closely with research groups within the PaCER project (https://pawsey.org.au/pacer/) and with vendors to further extend the list of supported codes.

Please see: https://www.amd.com/en/technologies/infinity-hub  

Accounting Model


With Setonix, Pawsey is moving from an exclusive node usage accounting model to a proportional node usage accounting model. While the Service Unit (SU) is still mapped to the hourly usage of CPU cores, users are no longer charged for whole nodes irrespective of whether they have been fully utilised. With the proportional node usage accounting model, users are charged only for the portion of a node they requested.

Each CPU compute node of Setonix can run multiple jobs in parallel, submitted by a single user or many users, from any project. Sometimes this configuration is called shared access.
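
As an illustration of the difference between the two models, here is a sketch assuming the 128-core Setonix CPU node from Table 2; the function names are ours, not a Pawsey API:

    import math

    CORES_PER_NODE = 128   # Setonix CPU node

    def su_exclusive_node(cores_requested, hours):
        # Old exclusive-node model: any use of a node is charged as the whole node.
        nodes = math.ceil(cores_requested / CORES_PER_NODE)
        return nodes * CORES_PER_NODE * hours

    def su_proportional(cores_requested, hours):
        # Proportional node usage: charged only for the cores actually requested.
        return cores_requested * hours

    # A 32-core job running for 10 hours:
    print(su_exclusive_node(32, 10))   # 1280 SU under exclusive-node accounting
    print(su_proportional(32, 10))     #  320 SU under the proportional model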

A project that has entirely consumed its service units (SUs) for a given quarter of the year will run its jobs in low priority mode, called extra, for that time period. Furthermore, if its service unit consumption for that same quarter hits the 150% usage mark, users of that project will not be able to run any more jobs for that quarter.
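
A sketch of the quarterly usage rules described above, purely for illustration (the actual enforcement happens on the Pawsey side, not in user code):

    def quarterly_job_state(su_used, su_quarterly_allocation):
        # Returns the state implied by the rules above for the current quarter.
        usage = su_used / su_quarterly_allocation
        if usage >= 1.5:
            return "blocked"               # past 150%: no further jobs this quarter
        if usage >= 1.0:
            return "extra (low priority)"  # allocation consumed: jobs run in 'extra' mode
        return "normal"

    print(quarterly_job_state(120, 100))   # extra (low priority)
    print(quarterly_job_state(160, 100))   # blocked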

Pawsey's accounting model bases the GPU charging rate on energy consumption. This approach, designed for Setonix, has significant advantages over other models: it introduces carbon footprint as a primary driver in determining how computational workflows are allocated across heterogeneous resources.

The Pawsey and NCI centres use slightly different accounting models. Researchers applying for allocations on both Setonix and Gadi should refer to Table 2 when calculating their allocation requests.


Table 2. Setonix and Gadi service unit models

                          Service Units
Resources used            Gadi                                          Setonix
                          (CPU: 48 Intel Cascade Lake cores per node,   (CPU: 128 AMD Milan cores per node,
                           GPU: 4 NVIDIA V100 GPUs per node)             GPU: 4 AMD MI250X GPUs per node)
1 CPU core / hour         2                                             1
1 CPU / hour              48                                            64
1 CPU node / hour         96                                            128
1 GPU / hour              36*                                           128
1 GPU node / hour         144*                                          512

 * calculated based on https://opus.nci.org.au/display/Help/2.2+Job+Cost+Examples for gpuvolta queue
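
Applied to a concrete request, Table 2 can be read as per-hour charge rates. A hedged sketch follows (the rates are copied from the table; the helper itself is illustrative):

    # Service Unit charge rates per hour, taken from Table 2.
    RATES = {
        "gadi":    {"cpu_core": 2, "cpu": 48, "cpu_node": 96,  "gpu": 36,  "gpu_node": 144},
        "setonix": {"cpu_core": 1, "cpu": 64, "cpu_node": 128, "gpu": 128, "gpu_node": 512},
    }

    def su_request(system, resource, hours, count=1):
        # Service Units for `count` units of `resource` used for `hours` on `system`.
        return RATES[system][resource] * count * hours

    # Example: 1,000 CPU node hours on each system
    print(su_request("gadi", "cpu_node", 1000))      #  96,000 SU on Gadi
    print(su_request("setonix", "cpu_node", 1000))   # 128,000 SU on Setonix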

Assessment Criteria


Criterion 1: Project quality and innovation

  • Significance of the research
  • Originality and innovative nature of the computational framework
  • Advancement of knowledge through the goals of the proposed research
  • Potential for the research to contribute to Australian science, research and innovation priorities

Criterion 2: Investigator records

  • Research record and performance relative to opportunity (publications, research funding, recognition and esteem metrics)

Criterion 3: Computational feasibility

  • Adequacy of the time commitment of investigators to undertake the research and utilise the resources successfully
  • Suitability of the system to support the research, and appropriate and efficient use of the system
  • Capacity to realise the goals of the project within the resources requested
  • Appropriate track record in the use of high-performance computing systems, relative to the scale of the resources requested

Criterion 4: Benefit and impact

  • The ability of the project to generate impactful outcomes and produce innovative economic, environmental and social benefits to Australia and the international community

Data Storage and Management 


Each project will be allocated 1 TB of project storage by default on Pawsey’s object storage system, Acacia. Project storage allocations are limited to the duration of the compute allocation. In line with Pawsey's Data Storage and Management Policy, data will normally only be held by the Pawsey Supercomputing Centre for the duration of the research project. In addition, researchers can apply for managed storage allocations separately from NCMAS. Managed storage access is intended for storing larger data collections with demonstrable research value according to a curated lifecycle plan.

Application Form and Process


Applications to the NCMAS are made via the online form at the NCMAS website, https://ncmas.nci.org.au.

No Partner Top-up Allocations


There will be no Pawsey Partner top-up allocations from the 2023 allocation round onwards. Researchers can apply to both the NCMAS and the Pawsey Partner Scheme, subject to the eligibility requirements and conditions of those schemes.
