How to Migrate from PBS Pro to Slurm


Pawsey supercomputers use Slurm as a job scheduling system. If your workflow is designed for PBS Pro, this page helps you to migrate from PBS Pro to Slurm.


There are two main aspects involved in the migration:

  1. Learning how to interact with Slurm.
  2. Making some minor changes to the directives and referenced environment variables in your batch scripts to use the Slurm equivalents.

The following sections detail commonly used PBS Pro commands, options and environment variables, together with their Slurm equivalents.

Command comparison 

Table 1 lists the most commonly used commands in PBS Pro and their equivalents in Slurm.


Table 1. PBS Pro commands and their Slurm equivalents

Command                     PBS Pro                     Slurm
Submit a batch job          qsub                        sbatch
Submit an interactive job   qsub -I                     salloc
Delete a job                qdel <job id>               scancel <job id>
Job status                  qstat                       squeue
Hold a job                  qhold <job id>              scontrol hold <job id>
Release a job               qrls <job id>               scontrol release <job id>
Cluster status              qstat -B                    sinfo

The Slurm commands behave in a similar way to their PBS Pro equivalents, though some (such as squeue) produce somewhat different output.
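For example, holding, releasing and cancelling a job look like this in Slurm (the job ID used here is made up):

$ scontrol hold 16778      # prevent a pending job from starting
$ scontrol release 16778   # allow the held job to be scheduled again
$ scancel 16778            # remove the job from the queue
$ sinfo                    # summarise the state of partitions and nodes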

Submitting a batch job: the sbatch command

The sbatch command replaces qsub, submitting the job and printing the submitted job ID in the following format.

Terminal 1. Submitting a job using Slurm's sbatch
$ sbatch slurm.script.sh
Submitted batch job 16778

If your scripts capture the job ID to create job dependencies, note that this output format is different from what qsub prints.

Terminal 2. Submitting a job using PBS Pro's qsub
$ qsub pbs.script.sh
417462.ps1
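Rather than parsing the "Submitted batch job" message, the --parsable option of sbatch prints only the job ID, which can then be passed to --dependency. A minimal sketch (the script names are placeholders):

# Capture the job ID of the first job; --parsable prints just the ID
jobid=$(sbatch --parsable slurm.script.sh)

# Submit a second job that starts only after the first finishes successfully
sbatch --dependency=afterok:${jobid} slurm.post.sh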

Displaying job status: the squeue command

The output generated by squeue is similar to that of qstat; however, the default fields and their order are quite different.

Terminal 3. Querying the status of the queues in Slurm.
$ squeue
 JOBID PARTITION     NAME   USER ST  TIME NODES NODELIST(REASON)
 16793      work slurm.sc  pryan  R  0:01     4 nid000[32-35]
 16794      work slurm.sc  pryan PD  0:00     2 (Dependency)

For comparison purposes, the output of qstat is shown in terminal 4.

Terminal 4. Using qstat to interrogate the status of the queues in PBS Pro.
$ qstat
ps1:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
417465.ps1      pryan    debug    pbs.script  30430   8 128    --  00:45 R 00:00
417466.ps1      pryan    debug    pbs.script     --   8 128    --  00:45 Q    --
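If the default squeue layout is not convenient, the output can be filtered or reformatted with standard squeue options; for example (the format string shown is just one possible choice):

$ squeue -u $USER                                     # show only your own jobs
$ squeue --format="%.10i %.9P %.20j %.8T %.10M %.6D"  # choose and order the columns yourself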

Submitting an interactive job: the salloc command

For interactive jobs, Slurm uses a separate command, salloc, in place of PBS Pro's qsub -I.
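For example, an interactive session could be requested as follows; the partition, account and resource values are placeholders to be replaced with your own, and the job ID in the response is likewise illustrative:

$ salloc --partition=work --account=projectcode --nodes=1 --ntasks=4 --time=00:30:00
salloc: Granted job allocation 16795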

Option comparison 

Table 2 lists the most commonly used options in PBS Pro and their equivalents in Slurm. These can be passed on the command line to sbatch or salloc, or used as directives inside a batch script.

Table 2. List of options for PBS Pro and Slurm

Option                      PBS Pro                     Slurm
Script directive            #PBS                        #SBATCH
Job name                    -N [name]                   --job-name=[name]
Queue                       -q [queue]                  --partition=[queue]
Accounting                  -W group_list=[acct]        --account=[acct]
Wall clock limit            -l walltime=[hh:mm:ss]      --time=[hh:mm:ss]
Select                      -l select=[chunk]           --nodes=[chunk]
Node count                  -l nodes=[count]            --nodes=[count]
CPU count                   -l mpiprocs=[count]         --ntasks-per-node=[count]
                            -l ppn=[count]              (alternatively, use the --ntasks option)
                            -l mppwidth=[count]
OpenMP threads              -l ompthreads=[nthr]        --cpus-per-task=[nthr]
Memory size                 -l mem=[MB]                 --mem=[mem][M|G|T] or
                                                        --mem-per-cpu=[mem][M|G|T]
Standard output file        -o [filename]               --output=[filename]
Standard error file         -e [filename]               --error=[filename]
Combine stdout/stderr       -j oe (to stdout)           (default behaviour when --output is used without --error)
Copy environment            -V                          --export=ALL (default)
Copy environment variable   -v [var]                    --export=[var]
Job dependency              -W depend=[state:jobid]     --dependency=[state:jobid]
Event notification          -m abe                      --mail-type=[events]
Email address               -M [address]                --mail-user=[address]
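The same options can be given either on the command line or as script directives. For example (the partition, account and resource values are placeholders):

$ sbatch --partition=work --account=projectcode --time=01:00:00 --nodes=2 jobscript.sh

is equivalent to placing the following directives inside jobscript.sh:

#SBATCH --partition=work
#SBATCH --account=projectcode
#SBATCH --time=01:00:00
#SBATCH --nodes=2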

To convert the PBS Pro select statement, use the --nodes and --ntasks-per-node options. For example, listing 2 shows the Slurm equivalent of the PBS Pro directive shown in listing 1.

Listing 1. Selecting 10 nodes with 16 cores each, to run 1 task per core, in PBS Pro
#PBS -l select=10:ncpus=16:mpiprocs=16
Listing 2. Requesting 10 nodes, 16 tasks per node in Slurm
#SBATCH --nodes=10
#SBATCH --ntasks-per-node=16

Alternatively, the --ntasks option can be used to specify the total number of tasks, rather than using --nodes.

Listing 3. Using --ntasks to infer the number of nodes
#SBATCH --ntasks=160
#SBATCH --ntasks-per-node=16

Slurm batch script

The general concepts behind batch scripting are no different with Slurm. There is still a preamble portion of the script in which directives are given to the batch system, followed by the job commands. The main difference is that the script directive, the sequence of characters with which a directive line must begin, is #SBATCH instead of #PBS, and the arguments differ slightly as well. We will begin by examining the simple script shown in listing 4.

Listing 4. A simple Slurm batch script
#!/bin/bash -l
# 10 nodes, 32 MPI processes/node, 320 MPI processes total
#SBATCH --job-name="myjob"
#SBATCH --time=02:00:00
#SBATCH --ntasks=320
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=1
#SBATCH --mem=58G
#SBATCH --output=myjob.%j.o
#SBATCH --error=myjob.%j.e
#SBATCH --account=projectcode
#SBATCH --export=NONE
#======START=====
echo "The current job ID is $SLURM_JOB_ID"
echo "Running on $SLURM_JOB_NUM_NODES nodes"
echo "Using $SLURM_NTASKS_PER_NODE tasks per node"
echo "A total of $SLURM_NTASKS tasks is used"
echo "Node list:"
sacct --format=JobID,NodeList%100 -j $SLURM_JOB_ID

# -----Executing command:
srun -u -N $SLURM_JOB_NUM_NODES -n $SLURM_NTASKS -c $SLURM_CPUS_PER_TASK ./a.out
#=====END====


Line 1 invokes the shell (bash), and line 2 is a comment.

Lines 3 to 12 contain the script directives. Line 3 gives the job a name. Line 4 requests 2 hours of walltime. Line 5 requests 320 MPI processes, and line 6 requests 32 processes per node. Line 7 specifies the number of cores per MPI task. Line 8 specifies the amount of memory needed per node. Line 9 specifies the name of the output file (%j is the job number), and line 10 specifies the file to which errors should be written. Line 11 gives the account to which this walltime should be charged. Line 12 stops the environment of the submitting shell from being exported to the job.

Lines 14 to 19 print useful (but optional) diagnostic information.

Line 22 invokes srun to launch the executable (./a.out).

Lines 13, 21 and 23 are optional separators/comments marking sections of the script.

As this script shows, the same concepts you already know from PBS Pro apply to Slurm as well: you tell the scheduler how long the job will run and how many processors are required, you provide project accounting information, and you may add other optional arguments that help the job run. Within the body of the script you can invoke the usual scripting utilities, such as echo, and launch your application with the usual commands.

Environment variable comparison 

Table 3 lists some commonly used environment variables that are set by Slurm for each job, along with their PBS Pro equivalents. These variables can be used in the body of the script as well, as shown in listing 4.


Table 3. Common environment variables in PBS Pro and Slurm

Environment variable        PBS Pro                     Slurm
Job ID                      PBS_JOBID                   SLURM_JOB_ID
Submit directory            PBS_O_WORKDIR               SLURM_SUBMIT_DIR *†
Submit host                 PBS_O_HOST                  SLURM_SUBMIT_HOST
Node list                   PBS_NODEFILE                SLURM_JOB_NODELIST
Job array index             PBS_ARRAY_INDEX             SLURM_ARRAY_TASK_ID

Footnotes

* PBS_O_WORKDIR and SLURM_SUBMIT_DIR both contain the working directory from which the user submitted the job. With Slurm it is not necessary to change to this directory explicitly, because jobs start there by default.

† SLURM_SUBMIT_DIR is not defined when the --export=NONE option is used (as recommended).

PBS_NODEFILE points to a file containing the nodes allocated to the job. SLURM_JOB_NODELIST instead contains a compact range expression listing the nodes. For example:

SLURM_JOB_NODELIST=nid000[32-39]

To expand the nodes explicitly, use the following command:

scontrol show hostnames $SLURM_JOB_NODELIST
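If part of your workflow still expects a PBS-style node file with one host per line, such a file can be generated from the expanded list; the filename here is only an example:

scontrol show hostnames "$SLURM_JOB_NODELIST" > nodefile.txt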
