This page describes how to run JupyterLab in a container on Pawsey systems with Slurm. This involves launching JupyterLab and then connecting to the Jupyter server.
Overview
The first step in using JupyterLab is to make it available on the supercomputer. Given the possibly long list of dependencies, it is better to use a container rather than installing it in the traditional way. The first section shows how you can get the container image of JupyterLab. Then, you will need to prepare a batch script to execute the JupyterLab server on a compute node. Finally, when you are finished, you will need to clean up the session.
Getting the container images
There are a number of good resources for prebuilt Jupyter and RStudio Docker images:
- Jupyter Docker Stacks (external site) provides prebuilt Jupyter images designed for TensorFlow, Spark, and data science workflows, which are available on DockerHub.
- Rocker has prebuilt RStudio images available on DockerHub.
You can use these as base images to install additional packages if needed. Once you have built your desired image, you can submit a batch script that launches the container.
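As a minimal sketch of extending one of these base images, the following builds a custom image on top of jupyter/datascience-notebook. The extra package and the image tag are illustrative assumptions; substitute whatever your workflow needs:

```shell
# Write a minimal Dockerfile that extends the base image.
# The added package (plotly) is only an example.
cat > Dockerfile <<'EOF'
FROM jupyter/datascience-notebook
RUN pip install --no-cache-dir plotly
EOF

# Build and tag the custom image (tag name is illustrative).
docker build -t myuser/custom-notebook .
```

The resulting image can then be pushed to DockerHub and pulled onto the supercomputer with Singularity.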
For this example, we're going to be using the jupyter/datascience-notebook (external site) Docker image. It provides a Conda environment with a large collection of common Python packages (including NumPy, SciPy, Pandas, Scikit-learn, Bokeh and Matplotlib), an R environment (with the tidyverse (external site) packages), and a Julia environment. All of these are accessible via a Jupyter notebook server.
This Docker image ships with a startup script that allows for a number of runtime options to be specified. Most of these are specific to running a container using Docker; we will focus on how to run this container using Singularity.
The datascience-notebook image has a default user, jovyan, and it assumes that you will be able to write to /home/jovyan. When you run a Docker container via Singularity, you will be running as your Pawsey username inside the container, so you won't be able to write to /home/jovyan. Instead, you can mount a specific directory (on Pawsey's filesystems) into the container at /home/jovyan. This will allow your Jupyter server to do things like save notebooks and write checkpoint files, and those will persist on Pawsey's filesystem after the container has stopped.
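As a sketch of that bind mount using Singularity's -B option, assuming a scratch directory named jupyter-dir (the directory name, image tag, and port are illustrative):

```shell
# Pull the Docker image and convert it to a Singularity image file (SIF).
singularity pull docker://jupyter/datascience-notebook:latest

# Bind a writable scratch directory to /home/jovyan inside the container,
# so the notebook server can save notebooks and checkpoint files there.
singularity exec -B ${MYSCRATCH}/jupyter-dir:/home/jovyan \
    datascience-notebook_latest.sif \
    jupyter notebook --no-browser --ip=0.0.0.0 --port=8888
```

Anything the server writes under /home/jovyan ends up in ${MYSCRATCH}/jupyter-dir and survives after the container stops.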
Setting up the batch script
The following script launches a Jupyter notebook on a compute node. The first step is to move to a writable directory with some space, such as /scratch, from which to launch the notebook. Create a directory where you will start your Jupyter notebook container and put any relevant data or Jupyter notebooks in it. This is also the directory that will be mounted to /home/jovyan.
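A batch script along these lines might be used. This is a minimal sketch: the partition, time limit, module version, and directory name are assumptions that you should adapt to your allocation and system:

```shell
#!/bin/bash -l
# Illustrative Slurm jobscript -- adapt partition, time and resources.
#SBATCH --job-name=jupyter
#SBATCH --partition=work
#SBATCH --nodes=1
#SBATCH --time=02:00:00
#SBATCH --export=NONE

# Load the Singularity module available on the system.
module load singularity/<version>

# Directory that will be mounted to /home/jovyan inside the container
# (illustrative name; it must exist and be writable).
JUPYTER_DIR=${MYSCRATCH}/jupyter-dir
cd ${JUPYTER_DIR}

# Print the SSH tunnel command for this job into the Slurm output file.
echo "To connect, run on your local machine:"
echo "ssh -N -f -L 8888:$(hostname):8888 ${USER}@setonix.pawsey.org.au"

# Launch the notebook server inside the container.
srun singularity exec -B ${JUPYTER_DIR}:/home/jovyan \
    docker://jupyter/datascience-notebook \
    jupyter notebook --no-browser --ip=0.0.0.0 --port=8888
```

The echo lines are what produce the connection instructions at the end of the output file referred to below.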
Run your Jupyter notebook server
To start, submit the Slurm jobscript. It will take a few minutes to start (depending on how busy the queue is and how large an image you are downloading). Once the job starts, you will have a Slurm output file in your directory, which contains instructions on how to connect at the end.
In a separate terminal window on your local computer, run the SSH command listed in the output file:
ssh -N -f -L 8888:nid001007:8888 <username>@setonix.pawsey.org.au
After this step, you can open up a web browser and use the address displayed in the output file to access your Jupyter notebook. In this example the address is:
http://127.0.0.1:8888/?token=3291a7b1e6ce7791f020df84a7ce3c4d2f3759b5aaaa4242
Alternatively, you could go to the web address http://127.0.0.1:8888 or http://localhost:8888, and then, when prompted, insert the token string that comes after "?token=" above. (Note that your port number might differ from "8888".)
Figure 1. Jupyter authentication page
Note:
The information above is for a notebook launched on setonix.pawsey.org.au. Ensure that you look at your own output file to select the correct machine.
Also note that the available version of the singularity module on the system may have changed, so you may need to adapt the script accordingly.
From the Jupyter notebook menu, you can create a new notebook and start from there. If you already have a notebook that you want to execute or develop, you may need to copy it into the jupyter-dir directory first.
Clean up when you are finished
Once you have finished (and saved and exited from the Jupyter notebook instance):
- From the Pawsey cluster, cancel your job with scancel.
- From your own computer, kill the SSH tunnel, based on the command displayed in the output file:
kill $( ps x | grep 'ssh.*-L *8888:nid001007:8888' | awk '{print $1}' )
External links
- DockerHub
- For information about runtime options supported by the startup script in the Jupyter image, see Common Features in the Jupyter Docker Stacks documentation
- The Rocker Project ("Docker Containers for the R Environment")