...
Table 1. Slurm partitions for production jobs and data transfers on Setonix
| Name | N. Nodes | Cores per Node | Available Node RAM for Jobs | GPU Chiplets per Node | Types of Jobs Supported | Max Nodes per Job | Max Walltime | Max Concurrent Jobs per User | Max Jobs Submitted per User |
|---|---|---|---|---|---|---|---|---|---|
| work | 1376 | 2x 64 | 230 GB | n/a | CPU-based production jobs. | - | 24h | 256 | 1024 |
| long | 8 | 2x 64 | 230 GB | n/a | Long-running CPU-based production jobs. | 1 | 96h | 4 | 96 |
| highmem | 8 | 2x 64 | 980 GB | n/a | CPU-based production jobs that require a large amount of memory. | 1 | 96h | 2 | 96 |
| gpu | 134 | 1x 64 | 230 GB | 8 | GPU-based production jobs. | - | 24h | - | - |
| gpu-highmem | 38 | 1x 64 | 460 GB | 8 | GPU-based production jobs requiring a large amount of host memory. | - | 24h | - | - |
| copy | 7 | 1x 32 | 115 GB | n/a | Copying large amounts of data to and from the supercomputer's filesystems. | - | 48h | 4 | 2048 |
| askaprt | 180 | 2x 64 | 230 GB | n/a | Dedicated to the ASKAP project (similar to the work partition). | - | 24h | 8192 | 8192 |
| casda | 1 | 1x 32 | 115 GB | n/a | Dedicated to the CASDA project (similar to the copy partition). | - | 24h | 30 | 40 |
| mwa | 10 | 2x 64 | 230 GB | n/a | Dedicated to the MWA projects (similar to the work partition). | - | 24h | 1000 | 2000 |
| mwa-asvo | 10 | 2x 64 | 230 GB | n/a | Dedicated to the MWA projects (similar to the work partition). | - | 24h | 1000 | 2000 |
| mwa-gpu | 10 | 1x 64 | 230 GB | 8 | Dedicated to the MWA projects (similar to the gpu partition). | - | 24h | 1000 | 2000 |
| mwa-asvocopy | 2 | 1x 32 | 115 GB | n/a | Dedicated to the MWA projects (similar to the copy partition). | - | 48h | 32 | 1000 |
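As an illustration of how these limits map onto a job request, the sketch below shows a minimal batch script targeting the work partition. The project code and executable name are placeholders, and the resource flags are generic Slurm options; the exact form recommended for Setonix may differ.

```bash
#!/bin/bash --login
# Minimal sketch of a CPU production job on the work partition.
#SBATCH --account=project123      # placeholder project code
#SBATCH --partition=work
#SBATCH --nodes=1
#SBATCH --ntasks=128              # one task per core on a 2x 64-core node
#SBATCH --time=04:00:00           # must fit within the 24h walltime limit
#SBATCH --mem=230G                # whole-node memory as listed in Table 1

# Launch the application with srun; "my_cpu_program" is a placeholder.
srun ./my_cpu_program
```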
Table 2. Slurm partitions for debug and development on Setonix
| Name | N. Nodes | Cores per Node | Available Node RAM for Jobs | GPU Chiplets per Node | Types of Jobs Supported | Max Nodes per Job | Max Walltime | Max Concurrent Jobs per User | Max Jobs Submitted per User |
|---|---|---|---|---|---|---|---|---|---|
| debug | 8 | 2x 64 | 230 GB | n/a | Exclusively for development and debugging of CPU code and workflows. | 4 | 1h | 1 | 4 |
| gpu-dev | 10 | 1x 64 | 230 GB | 8 | Exclusively for development and debugging of GPU code and workflows. | 2 | 4h | 1 | 4 |
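For quick testing against these limits, an interactive allocation is often more convenient than a batch job. The sketch below uses a placeholder project code and generic Slurm options; check the Setonix documentation for the exact GPU request syntax and accounting form recommended for gpu-dev.

```bash
# Interactive CPU debugging session on the debug partition
# ("project123" is a placeholder project code).
salloc --account=project123 --partition=debug \
       --nodes=1 --ntasks=8 --time=00:30:00     # within the 4-node / 1h limits

# GPU development on gpu-dev. The flags shown are generic Slurm and may
# differ from the Setonix-recommended form; GPU allocations are typically
# charged to a separate GPU account.
salloc --account=project123-gpu --partition=gpu-dev \
       --nodes=1 --gres=gpu:1 --time=01:00:00   # within the 2-node / 4h limits
```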
Debug and Development Partitions Policy
The policy governing use of the debug and gpu-dev partitions is maintained on the Job Scheduling and Partitions Use Policies page; see the Debug and Development Partitions Policy section there.
...