...

For batch jobs, the default location for stdout and stderr is a file named slurm-<jobid>.out in the working directory from which the job was submitted, where <jobid> is replaced by the numeric Slurm job ID. To change the name of the standard output file, use the --output=<stdoutFile> option; to change the name of the standard error file, use the --error=<stderrFile> option. You can use the special token %j to include the unique job ID in the filename.
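As an illustration, the top of a batch script using these options might look like the following sketch (the script name and output file names are placeholders, not a recommendation):

```shell
#!/bin/bash -l
# Hypothetical directives illustrating the output/error options described above.
#SBATCH --output=myjob-%j.out   # stdout goes to, e.g., myjob-123456.out
#SBATCH --error=myjob-%j.err    # stderr goes to, e.g., myjob-123456.err
```

With these directives, stdout and stderr are separated into two files, each tagged with the job ID so that repeated submissions do not overwrite each other.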

...

Table 2. Common options for sbatch  and salloc


Option | Reduced syntax | Purpose
--account=<project> | -A <project> | Set the project code to which the job is to be charged. A default project is configured for each Pawsey user.
--nodes=<N> | -N <N> | Request N nodes for the job.
--ntasks=<n> | -n <n> | Specify the maximum number of tasks or processes that each job step will run. On supercomputers with exclusive access to nodes, specify a multiple of the total number of cores available on a node for efficient use of resources.
--ntasks-per-node=<n> | | Specify the number of tasks per node.
--cpus-per-task=<c> | -c <c> | Specify the number of physical or logical cores per task.
--mem=<size> | | Specify the real memory required per node, as an integer. Different units can be specified using the suffix K, M, or G. Default units are megabytes.
--mem-per-cpu=<size> | | Specify the minimum memory required per CPU core, as an integer. Different units can be specified using the suffix K, M, or G. Default units are megabytes.
--exclusive | | Grant exclusive access to all resources on the requested nodes. (By default, the resources of a node are shared among different jobs: shared access.)
--gres=gpu:<N> | | Specify the required number of GPUs per node.
--time=<timeLimit> | -t <timeLimit> | Set the wall-clock time limit for the job (hh:mm:ss). If the job exceeds this time limit, it will be subject to termination.
--job-name=<jobName> | -J <jobName> | Set the job name (as it will be displayed by squeue). This defaults to the name of the batch script.
--output=<stdoutFile> | -o <stdoutFile> | (sbatch only) Set the file name for standard output. Use the token %j to include the job ID.
--error=<stderrFile> | -e <stderrFile> | (sbatch only) Set the file name for standard error. Use the token %j to include the job ID.
--partition=<partition> | -p <partition> | Request an allocation on the specified partition. If this option is not specified, jobs will be submitted to the default partition.
--qos=<qos> | -q <qos> | Request to run the job with a particular Quality of Service (QoS).
--array=<indexList> | -a <indexList> | (sbatch only) Specify an array job with the defined indices.
--dependency=<dependencyList> | -d <dependencyList> | Specify a job dependency.
--mail-type=<eventList> | | Request an e-mail notification for the events in eventList. Valid event values include BEGIN, END, FAIL, and ALL. Multiple values can be specified in a comma-separated list.
--mail-user=<address> | | Specify an e-mail address for event notifications.
--export=<variables> | | (sbatch only) Specify which environment variables are propagated to the batch job. Valid only as a command-line option. The recommended value is NONE.
--distribution=<distributionMethod> | -m <distributionMethod> | Specify the method used to distribute tasks across the allocated nodes and cores.

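Putting several of the options from Table 2 together, a batch script header might look like the following sketch (the project code, resource sizes, and program name are placeholders to be replaced with your own values):

```shell
#!/bin/bash -l
#SBATCH --account=project123        # hypothetical project code
#SBATCH --job-name=demoJob
#SBATCH --nodes=1                   # one node
#SBATCH --ntasks=4                  # four tasks in total
#SBATCH --cpus-per-task=1           # one core per task
#SBATCH --mem=4G                    # 4 gigabytes per node
#SBATCH --time=00:10:00             # ten-minute wall-clock limit
#SBATCH --output=demoJob-%j.out     # %j expands to the job ID
#SBATCH --error=demoJob-%j.err

srun ./my_program                   # placeholder executable
```

The same options can instead be passed on the sbatch command line, where they override the corresponding #SBATCH directives in the script.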

When running jobs through Slurm, new files generated by the job take the Unix group ownership of the project passed to Slurm via the --account option (-A). This is usually your project group, so others in your project can read the files provided the files themselves, the containing directory, and the relevant parent directories all have the group-read attribute set.
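As a quick sketch of checking and granting group-read access, using hypothetical directory and file names in place of a real job's output location:

```shell
# Hypothetical directory standing in for a job's output location.
mkdir -p demo_results
touch demo_results/output.dat

# Group members need read on the file, and read + execute (traverse)
# on the directory and its parents.
chmod g+rx demo_results
chmod g+r  demo_results/output.dat

# Inspect the permission bits: the 5th character is the group-read bit.
ls -ld demo_results
ls -l  demo_results/output.dat
```

Remember that group read on the file alone is not enough; every directory on the path must also be traversable by the group.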

...

...