
This page describes how to run JupyterLab in a container on Pawsey systems with Slurm. This involves launching JupyterLab and then connecting to the Jupyter server.

...

For this example, we will use the jupyter/datascience-notebook (external site) Docker image. It provides a Conda environment with a large collection of common Python packages (including NumPy, SciPy, Pandas, Scikit-learn, Bokeh and Matplotlib), an R environment (with the tidyverse (external site) packages), and a Julia environment. All of these are accessible through a Jupyter notebook server.

This Docker image ships with a startup script that accepts a number of runtime options. Most of these are specific to running the container with Docker; here we focus on how to run the container with Singularity.
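
As a rough sketch of what this looks like, the commands below pull the image and start JupyterLab directly with singularity exec, bypassing the Docker-specific startup options. The image tag, the resulting .sif filename, the port number, and the assumption that the image's jupyter command is on the container's PATH are all illustrative rather than prescriptive.

Code Block
languagebash
themeEmacs
titleStarting JupyterLab with Singularity (illustrative sketch)
# Pull the Docker image once and convert it to a Singularity image file.
# The "latest" tag and the resulting .sif filename are illustrative.
singularity pull docker://jupyter/datascience-notebook:latest

# Start JupyterLab directly, bypassing the Docker-specific startup script.
# This assumes the image's Conda environment (and the jupyter command) is on PATH.
singularity exec datascience-notebook_latest.sif \
    jupyter lab --no-browser --ip=0.0.0.0 --port=8888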

The datascience-notebook image has a default user, jovyan, and it assumes that it can write to /home/jovyan. When you run a Docker container via Singularity, you run as your Pawsey username inside the container, so you won't be able to write to /home/jovyan. Instead, you can bind mount a directory on Pawsey's filesystems into the container at /home/jovyan. This allows the Jupyter server to save notebooks and write checkpoint files, and those files will persist on Pawsey's filesystems after the container has stopped.
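
The sketch below shows one way to set up such a bind mount, reusing the image file pulled above. The working directory under $MYSCRATCH and its name are illustrative assumptions; use any path you can write to on Pawsey's filesystems.

Code Block
languagebash
themeEmacs
titleBind mounting a working directory at /home/jovyan (illustrative sketch)
# Create a directory on the scratch filesystem to back /home/jovyan.
# The $MYSCRATCH variable and directory name are illustrative; use any writable path.
mkdir -p ${MYSCRATCH}/jupyter-dir

# Bind mount that directory into the container at /home/jovyan so the Jupyter
# server can save notebooks and checkpoint files that persist after the
# container stops.
singularity exec -B ${MYSCRATCH}/jupyter-dir:/home/jovyan \
    datascience-notebook_latest.sif \
    jupyter lab --no-browser --ip=0.0.0.0 --port=8888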

...

Code Block
languagepy
themeEmacs
titleListing 3. Simple GPU-enabled Python code snippet
collapsetrue
# key GPU libraries
from numba import cuda
import numpy as np

# define a simple element-wise addition kernel
@cuda.jit
def add_kernel(x, y, out):
    idx = cuda.grid(1)              # global thread index
    if idx < out.size:              # guard against out-of-range threads
        out[idx] = x[idx] + y[idx]

n = 4096
x = np.arange(n).astype(np.int32)  # [0 ... 4095] on the host
y = np.ones_like(x)                # [1 ... 1] on the host
out = np.zeros_like(x)

# copy the input and output arrays to the device
d_x = cuda.to_device(x)
d_y = cuda.to_device(y)
d_out = cuda.to_device(out)

# launch the kernel: 32 blocks x 128 threads = 4096 threads, one per element
threads_per_block = 128
blocks_per_grid = 32
add_kernel[blocks_per_grid, threads_per_block](d_x, d_y, d_out)
cuda.synchronize()

# copy the result back to the host and print it
print(d_out.copy_to_host())  # should be [1 ... 4096]


External links

  • DockerHub
  • For information about runtime options supported by the startup script in the Jupyter image, see Common Features in the Jupyter Docker Stacks documentation
  • The Rocker Project ("Docker Containers for the R Environment")