...
| Name | N. Nodes | Cores per node | Available node-RAM for jobs | GPU chiplets per node | Type of jobs supported | Max Number of Nodes per Job | Max Wall time | Max Number of Concurrent Jobs per User | Max Number of Jobs Submitted per User |
|---|---|---|---|---|---|---|---|---|---|
| work | 1376 | 2x 64 | 230 GB | n/a | Supports CPU-based production jobs. | - | 24h | 256 | 1024 |
| long | 8 | 2x 64 | 230 GB | n/a | Long-running CPU-based production jobs. | 1 | 96h | 4 | 96 |
| highmem | 8 | 2x 64 | 980 GB | n/a | Supports CPU-based production jobs that require a large amount of memory. | 1 | 96h | 2 | 96 |
| debug | 8 | 2x 64 | 230 GB | n/a | Exclusive for development and debugging of CPU code and workflows. | 4 | 1h | 1 | 4 |
| gpu | 124 | 1x 64 | 230 GB | 8 | Supports GPU-based production jobs. | - | 24h | - | - |
| gpu-highmem | 38 | 1x 64 | 460 GB | 8 | Supports GPU-based production jobs requiring a large amount of host memory. | - | 24h | - | - |
| gpu-dev | 20 | 1x 64 | 230 GB | 8 | Exclusive for development and debugging of GPU code and workflows. | - | 4h | - | - |
| copy | 7 | 1x 32 | 115 GB | n/a | Copying of large data to and from the supercomputer's filesystems. | - | 48h | 4 | 2048 |
| askaprt | 180 | 2x 64 | 230 GB | n/a | Dedicated to the ASKAP project (similar to the work partition). | - | 24h | 8192 | 8192 |
| casda | 1 | 1x 32 | 115 GB | n/a | Dedicated to the CASDA project (similar to the copy partition). | - | 24h | 30 | 40 |
| mwa | 10 | 2x 64 | 230 GB | n/a | Dedicated to the MWA projects (similar to the work partition). | - | 24h | 1000 | 2000 |
| mwa-asvo | 10 | 2x 64 | 230 GB | n/a | Dedicated to the MWA projects (similar to the work partition). | - | 24h | 1000 | 2000 |
| mwa-gpu | 10 | 1x 64 | 230 GB | 8 | Dedicated to the MWA projects (similar to the gpu partition). | - | 24h | 1000 | 2000 |
| mwa-asvocopy | 2 | 1x 32 | 115 GB | n/a | Dedicated to the MWA projects (similar to the copy partition). | - | 48h | 32 | 1000 |
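A job targets one of these partitions with Slurm's `--partition` option. The script below is a minimal sketch using only standard `sbatch` directives; the project code `project0000`, the resource amounts, and `./my_program` are hypothetical placeholders, and the values requested must stay within the limits listed in the table above.

```bash
#!/bin/bash
#SBATCH --account=project0000     # hypothetical project code
#SBATCH --partition=work          # any partition name from the table above
#SBATCH --nodes=1                 # must not exceed the partition's max nodes per job
#SBATCH --ntasks=128              # a full work node has 2x 64 cores
#SBATCH --time=01:00:00           # must not exceed the partition's max wall time

srun -N 1 -n 128 ./my_program     # placeholder executable
```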
Table 2. Quality of Service levels applicable to a Slurm job running on Setonix
| Name | Priority Level | Description |
|---|---|---|
| lowest | 0 | Reserved for particular cases. |
| low | 3000 | Priority for jobs from projects that have used more than 100% of their allocation. |
| normal | 10000 | The default priority for production jobs. |
| high | 14000 | Priority boost available to all projects for a fraction (10%) of their allocation. |
| highest | 20000 | Assigned to jobs of critical interest (e.g. a project that is part of the national response to an emergency). |
| exhausted | 0 | Assigned to jobs from projects that have consumed more than 150% of their allocation. |
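The `--qos` option is the standard Slurm mechanism for selecting one of these levels; whether a project can use a given QoS (for example, `high` within its 10% boosted-priority quota) depends on its allocation status. A hedged example, where `myjob.slurm` is a placeholder batch script:

```bash
# Submit a job with the "high" QoS.
sbatch --qos=high myjob.slurm

# Equivalently, inside the batch script itself:
#SBATCH --qos=high
```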
Debug and Development Partitions Policy
The policy for these partitions is maintained in the Debug and Development Partitions Policy section of the Job Scheduling and Partitions Use Policies page.
Job Queue Limits
Users can check the maximum number of jobs that can run at a time (i.e., MaxJobs) and the maximum number of jobs that can be submitted (i.e., MaxSubmitJobs) for each partition on Setonix using the command:
...
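The exact documented command is omitted above. As a sketch, assuming these per-partition limits are enforced through per-partition QoS records, the standard Slurm accounting tool `sacctmgr` can list the per-user limits:

```bash
# List the per-user limits (MaxJobsPU corresponds to MaxJobs, MaxSubmitJobsPU to
# MaxSubmitJobs) attached to each QoS; the QoS names are assumed to match the partitions.
sacctmgr show qos format=Name%20,MaxJobsPU,MaxSubmitJobsPU
```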
Subpages in this section: see the child pages of Running Jobs on Setonix.
Related pages