Create a Kubernetes Cluster
These instructions will help you create a Kubernetes cluster on a set of Nimbus instances. This procedure takes approximately two hours of elapsed time.
Procedure
Before beginning, ensure that you have a running Ubuntu instance on Nimbus, and that you can connect to it.
1. Install Juju on your seed instance
1.1. Run this command to install Juju. It will take approximately 15 minutes to complete.
sudo snap install juju --classic
1.2. When processing completes, list the clouds Juju has preconfigured. Note that Nimbus is not listed:
juju clouds --all
1.3. Create a working directory in your home directory:
mkdir ~/juju
1.4. Add configuration for a Nimbus cloud.
- Further details can be found at https://juju.is/docs/openstack-cloud
cat > ~/juju/nimbus-cloud.yaml <<EOF
clouds:
  nimbus:
    type: openstack
    auth-types: [userpass]
    endpoint: https://nimbus.pawsey.org.au:5000/v3
    regions:
      RegionOne:
        endpoint: https://nimbus.pawsey.org.au:5000/v3
EOF
juju add-cloud --client nimbus ~/juju/nimbus-cloud.yaml
juju clouds --all
- There should now be a 'nimbus' entry in the listing.
1.5. Add your credentials for the Nimbus cloud:
cat > ~/juju/credentials.yaml <<EOF
credentials:
  nimbus:
    CHANGE_THIS_USERNAME-nimbus:
      auth-type: userpass
      password: 'CHANGE_THIS_PASSWORD'
      project-domain-name: pawsey
      tenant-name: CHANGE_THIS_PROJECT
      user-domain-name: pawsey
      username: CHANGE_THIS_USERNAME
EOF
# store your Nimbus username, project name, and password in these variables
OS_USERNAME=your_username        # replace with your Nimbus username
OS_PROJECT_NAME=your_project     # replace with your Nimbus project name
read -s OS_PASSWORD              # type your password; input is not echoed
# then substitute them into the file
# (note: a password containing '/' or '&' would need escaping for sed)
sed -i'' -e "s/CHANGE_THIS_PROJECT/$OS_PROJECT_NAME/" ~/juju/credentials.yaml
sed -i'' -e "s/CHANGE_THIS_USERNAME/$OS_USERNAME/" ~/juju/credentials.yaml
sed -i'' -e "s/CHANGE_THIS_PASSWORD/$OS_PASSWORD/" ~/juju/credentials.yaml
unset OS_PASSWORD
juju add-credential --client nimbus -f ~/juju/credentials.yaml
rm ~/juju/credentials.yaml
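If you are unsure of the sed syntax, the placeholder-and-sed pattern used above can be rehearsed on a scratch file first. This sketch uses made-up dummy values (alice, demo-project) and never touches the real credentials file.

```shell
# Rehearse the CHANGE_THIS placeholder substitution on a throwaway file,
# using dummy values only -- no real credentials involved.
tmp=$(mktemp)
cat > "$tmp" <<EOF
tenant-name: CHANGE_THIS_PROJECT
username: CHANGE_THIS_USERNAME
EOF
OS_USERNAME=alice
OS_PROJECT_NAME=demo-project
sed -i'' -e "s/CHANGE_THIS_PROJECT/$OS_PROJECT_NAME/" "$tmp"
sed -i'' -e "s/CHANGE_THIS_USERNAME/$OS_USERNAME/" "$tmp"
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```

Once the output shows both placeholders replaced, run the same sed commands against ~/juju/credentials.yaml as shown above.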
2. Prepare an image for use by Juju
2.1. Run these commands to configure the metadata for an image. In this case we have given the IMAGE_ID for the Pawsey Ubuntu 18.04 image, "Pawsey - Ubuntu 18.04 - 2022-05" (current as of May 2022).
- Further details can be found at https://juju.is/docs/cloud-image-metadata
IMAGE_ID=97c4562f-1087-4f2c-abc6-a7b02fb9f9b9
OS_SERIES=bionic
REGION=RegionOne
OS_AUTH_URL=https://nimbus.pawsey.org.au:5000/v3
mkdir ~/juju/simplestreams
juju metadata generate-image -d ~/juju/simplestreams -i $IMAGE_ID -s $OS_SERIES -r $REGION -u $OS_AUTH_URL
If you want to instead use the Pawsey Ubuntu 20.04 image "Pawsey - Ubuntu 20.04 - 2022-05", replace the first two lines above with:
IMAGE_ID=67bab16e-453b-46a8-a262-c0796fa35d85
OS_SERIES=focal
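If you switch between the two images often, the image ID and series can be selected in one place. `pawsey_image` below is a hypothetical convenience helper (not part of Juju or the Pawsey tooling); it simply maps a release name to the two image IDs listed above.

```shell
# Hypothetical helper: map an Ubuntu release to the matching Pawsey image ID
# and Juju series, using the two images listed above.
pawsey_image() {
  case "$1" in
    18.04) echo "IMAGE_ID=97c4562f-1087-4f2c-abc6-a7b02fb9f9b9 OS_SERIES=bionic" ;;
    20.04) echo "IMAGE_ID=67bab16e-453b-46a8-a262-c0796fa35d85 OS_SERIES=focal" ;;
    *)     echo "unknown release: $1" >&2; return 1 ;;
  esac
}

# e.g. load the 20.04 settings into the current shell:
eval "$(pawsey_image 20.04)"
echo "$OS_SERIES"
```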
3. Use Juju to bootstrap the controller instance
During this step, you will need the network ID of your private network, which should be connected via your router to the Public external network. You can check this on the Network Topology page. To bootstrap the controller instance:
3.1. Go to the Networks page.
3.2. Click on your network, which will be named something like projectname-network.
3.3. Select and copy the ID (not Project ID), ready to paste into the NETID=... command below.
3.4. Back on the seed instance, run the following commands. The final bootstrap command will take approximately 20 minutes to complete. While it is running, you can check https://nimbus.pawsey.org.au/project/instances/ to see a new controller instance being created.
NETID=YOUR_NET_ID   # paste your network ID here in place of YOUR_NET_ID
PUBEXT=dfb2cfd9-b746-410d-ab4b-f2e7d5bafacf
juju bootstrap nimbus nimbus-k8s-controller \
  --constraints "arch=amd64" \
  --metadata-source ~/juju/simplestreams \
  --model-default network=$NETID \
  --model-default external-network=$PUBEXT \
  --model-default use-floating-ip=true \
  --bootstrap-constraints "instance-type=n3.1c4r"
If you are using an Ubuntu 20.04 image, you must also include the flag "--bootstrap-series=focal" in the above "juju bootstrap" command.
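Because a mistyped network ID only fails late in the slow bootstrap, it can be worth checking the pasted value first. This is a hypothetical sanity check, not part of the official procedure: it only verifies that NETID looks like an OpenStack UUID (the example value below is made up).

```shell
# Hypothetical pre-flight check: does the pasted NETID look like a UUID?
looks_like_uuid() {
  printf '%s\n' "$1" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'
}

NETID=12345678-1234-1234-1234-123456789abc   # example value only
if looks_like_uuid "$NETID"; then
  echo "NETID format looks OK"
else
  echo "NETID does not look like a UUID" >&2
fi
```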
4. Create a new model for the Kubernetes deployment
Models are Juju's concept of a workspace, so it is a good idea to create one for each deployment, to encapsulate them. This is true even with only one deployment. Refer to https://juju.is/docs/models for more information.
4.1. Create the model, then switch to it to ensure you are using it:
juju add-model nimbus-k8s-model
juju switch nimbus-k8s-model
From here onward, you can revert to this point in the instructions by deleting the model and recreating it:
juju destroy-model -y nimbus-k8s-model
Then go back and start again from step 4.
5. Use Juju to deploy Kubernetes, install kubectl
5.1. Install Kubernetes.
- Installing Kubernetes takes approximately one hour.
- If you are familiar with the screen program, this would be a good time to use it.
- Run the juju status command to view installation progress.
- Once everything is listed as active and started, the installation is ready to use.
- Note: This approach installs Kubernetes with default machine constraints. For further information, including increasing those constraints, refer to https://jaas.ai/canonical-kubernetes
juju status
wget https://api.jujucharms.com/charmstore/v5/bundle/charmed-kubernetes-541/archive/bundle.yaml
sed -i'' -e "s/focal/bionic/" bundle.yaml   # skip this sed if you are using a focal (20.04) image
juju deploy ./bundle.yaml
watch -c 'juju status --color'   # Ctrl-C to stop the 'watch' command
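Rather than watching the status output by eye, the "everything active" check can be scripted. The parsing below is demonstrated on a canned status fragment (hypothetical sample unit lines); in practice you would pipe the unit lines of juju status through the same awk.

```shell
# Sketch: count units whose workload status is not yet "active".
# sample_status is a made-up fragment standing in for real `juju status` output.
sample_status='kubernetes-master/0   active   idle
kubernetes-worker/0   active   idle
etcd/0                waiting  idle'
pending=$(printf '%s\n' "$sample_status" | awk '$2 != "active" {n++} END {print n+0}')
echo "units not yet active: $pending"
```

A loop that sleeps while `pending` is non-zero would give an unattended wait-for-ready step.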
5.2. While waiting, use another window to install kubectl, the Kubernetes command-line client. This will take approximately 10 minutes.
sudo snap install kubectl --classic
6. Configure the kubectl Kubernetes client
6.1. Use the following commands to install and configure the client:
mkdir -p ~/.kube
juju scp kubernetes-master/0:config ~/.kube/config
kubectl get all
kubectl cluster-info
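If kubectl reports connection errors after this step, a quick structural check of the copied config can narrow things down. This sketch greps a canned minimal kubeconfig (hypothetical content) for the top-level fields a valid file carries; point the same loop at ~/.kube/config on the seed instance instead.

```shell
# Check a kubeconfig for the top-level sections kubectl expects.
# Demonstrated on a canned minimal file; replace "$cfg" with ~/.kube/config.
cfg=$(mktemp)
cat > "$cfg" <<EOF
apiVersion: v1
kind: Config
clusters: []
users: []
contexts: []
EOF
missing=0
for field in clusters users contexts; do
  grep -q "^$field:" "$cfg" || { echo "missing: $field"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "kubeconfig structure looks OK"
rm -f "$cfg"
```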
7. (Optional) Run an autoscaling stress test on Kubernetes
7.1. Use the commands below to run a stress test.
- These steps are taken from https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
7.2. Launch a web server deployment and create an autoscaler:
kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
kubectl get hpa
7.3. In a separate window, generate load on the deployment:
kubectl run -it --rm load-generator --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # while true; do wget -q -O- http://php-apache; done
OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!...
7.4. In the first window again, observe the scaling as it stabilises:
while true; do kubectl get hpa; sleep 30; done
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%     1         10        1          62s
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   250%/50%   1         10        4          92s
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   69%/50%    1         10        5          2m3s
# Ctrl-C to stop the while loop
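The replica counts above come from the autoscaler's documented rule, roughly desired = ceil(currentReplicas × currentUtilisation / targetUtilisation) (see the walkthrough linked above). A quick shell check of a 250%/50% reading taken at 1 replica; note that scale-up is also rate-limited, so the counts you observe need not match exactly at any instant:

```shell
# Integer-arithmetic version of ceil(current * utilisation / target).
current=1; utilisation=250; target=50
desired=$(( (current * utilisation + target - 1) / target ))
echo "desired replicas: $desired"
```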
7.5. Stop and delete the pod and autoscaler:
kubectl delete hpa php-apache
kubectl delete -f https://k8s.io/examples/application/php-apache.yaml