...

Note that X11 forwarding is enabled by default in the interactive queue.
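As a minimal, hedged illustration (the username is a placeholder and the partition name below stands in for the interactive queue; the exact name may differ), using X11 forwarding from an interactive session might look like:

# Connect to the cluster with X11 forwarding enabled:
ssh -X username@setonix.pawsey.org.au

# Request an interactive allocation (partition name is a placeholder for the interactive queue):
salloc --nodes=1 --ntasks=1 --time=00:10:00 --partition=debug

# A lightweight X11 application should now display back on your local machine:
srun xeyes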

Nevertheless, we recommend that users use FastX, Pawsey's web-based remote-visualisation service, to launch compute-intensive visualisation packages such as ParaView, VisIt or VMD. Please refer to the Setonix Remote Visualisation documentation for detailed instructions on access and use.

Packing serial/small multithreaded jobs 

...

Listing 21. GPU job array example
#!/bin/bash --login

#SBATCH --account=[your-account]-gpu
#SBATCH --array=0-7
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1
#SBATCH --time=00:10:00


#Go to the right directory for this instance of the job array using SLURM_ARRAY_TASK_ID as the identifier:
#We are assuming all the input files needed for each specific job reside in the corresponding working directory
cd workingDir_${SLURM_ARRAY_TASK_ID}

#Run the hip executable (assuming the same executable will be used by each job, and that it resides in the submission directory):
srun -u -N 1 -n 1 ${SLURM_SUBMIT_DIR}/main_hip


When to use job packing: Use job packing when your workflow requires the execution of several jobs, but each individual job does not require the full resources of a node. In that case, consider packing the jobs so that they execute on the same node at the same time, allowing you to use all (or most) of the resources of the node. This is particularly important for partitions where node resources are exclusive and cannot be shared among different users/jobs at the same time (like the nvlinkq partition on Topaz). Ideally, multiple jobs should be packed so as to make use of all available GPUs in the node, and the packed jobs should have similar estimated execution times to avoid load-balancing issues. (If a single job can make use of all the GPUs in the node, that is also desirable and does not need packing.) We do not recommend packing jobs across multiple nodes with the same job script due to possible load-balancing issues: all resources will be held and unavailable to other users/jobs until the last substep (job) in the pack finishes.
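As an illustrative sketch only (the account and partition names are placeholders, the assumption of eight GPUs per node and the specific srun flags may need adjusting to the system's Slurm configuration), a packed job script typically launches several independent job steps in the background within a single-node allocation and then waits for all of them to finish:

#!/bin/bash --login

#SBATCH --account=[your-account]-gpu
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --gres=gpu:8
#SBATCH --time=00:10:00

# Launch each packed job as an independent job step on its own GPU, in the background.
# Each step runs in its own working directory (workingDir_0 ... workingDir_7), mirroring
# the job array example above; main_hip is the same illustrative executable name.
for i in $(seq 0 7); do
  srun -u -N 1 -n 1 --exact --gres=gpu:1 --chdir=workingDir_${i} \
       "${SLURM_SUBMIT_DIR}"/main_hip &
done

# Wait for all background job steps to finish before the allocation ends.
wait

The final wait is essential: without it the batch script would exit as soon as the steps were launched and Slurm would terminate the whole allocation, killing the packed jobs.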

...