
In this section, we're going to assume you have created a cluster of instances on your private network, but only one of those instances has a public floating IP address. This means that all of the instances can connect happily between each other on the private network (via their 192.168.x.x addresses), but you can only connect to one of them directly (via its 146.118.x.x address). Ideally you want a way to access and manage that cluster as a whole. In this case, we have a test cluster of 5 instances:


Instance Name     Private Address   Public Address
test-instance-1   192.168.1.52      146.118.113.9
test-instance-2   192.168.1.56      -
test-instance-3   192.168.1.58      -
test-instance-4   192.168.1.53      -
test-instance-5   192.168.1.62      -


Of course, there is nothing stopping you from assigning a public IP address to every instance in your cluster; however, you may prefer to keep external exposure to a minimum, especially if the bulk of the cluster's work will be done on the private network.


SSH Access


All 5 instances should have your SSH key loaded onto them; however, test-instance-1 is the only one you will be able to SSH to directly from your desktop:

Code Block
phi216@shinobu-kf:~$ ssh ubuntu@146.118.113.9


In theory, you should be able to SSH from test-instance-1 to any of the other instances. However, as the instances are passwordless by default, you will need to use your SSH key to connect to them. There are a couple of different ways this can be done.


Copy SSH Key

The most direct approach is to manually copy your SSH key (both the private and public components) from your desktop to test-instance-1. That way you can use your credentials directly when SSHing to any other instance in the cluster:

Code Block
ubuntu@test-instance-1:~$ ls -l .ssh/id_rsa*
-rw------- 1 ubuntu ubuntu 1766 May 11 01:22 .ssh/id_rsa
-rw-r--r-- 1 ubuntu ubuntu  398 May 11 01:22 .ssh/id_rsa.pub

ubuntu@test-instance-1:~$ ssh 192.168.1.56
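
If the key isn't on test-instance-1 yet, a minimal sketch of copying it over with scp (assuming your key pair is in the default location, ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub) might look like the following. Note that ssh will refuse to use a private key that is group- or world-readable, so tighten its permissions after copying:

Code Block
# Run from your desktop; adjust the paths if your key lives elsewhere
phi216@shinobu-kf:~$ scp ~/.ssh/id_rsa ~/.ssh/id_rsa.pub ubuntu@146.118.113.9:.ssh/
# Then on test-instance-1, ensure the private key is readable only by you
ubuntu@test-instance-1:~$ chmod 600 ~/.ssh/id_rsa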

The main drawback with this approach is security: your private SSH key ends up stored on more machines than strictly necessary.


SSH Agent Forwarding

If you have a Linux or Mac based desktop, you can instead forward your SSH credentials from your desktop through your SSH connection to test-instance-1. This is done using ssh-agent. The exact process may vary depending on your specific operating system; the steps outlined below are for setting this up under Ubuntu 16.04.


1) If it isn't already started, run ssh-agent on your desktop:

Code Block
phi216@shinobu-kf:~$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-1UA8zzylL6RV/agent.2236; export SSH_AUTH_SOCK;
SSH_AGENT_PID=2237; export SSH_AGENT_PID;
echo Agent pid 2237;

phi216@shinobu-kf:~$ ps -ef | grep ssh-agent | grep -v grep
phi216    2237     1  0 10:06 ?        00:00:00 ssh-agent
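
Alternatively, you can have your shell apply those export lines in one step by evaluating the agent's output (the pid shown will differ on your system):

Code Block
phi216@shinobu-kf:~$ eval "$(ssh-agent)"
Agent pid 2237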


2) You can use ssh-add to check that ssh-agent is running correctly. If it reports not being able to connect to an authentication agent, the most likely cause is that the environment variable SSH_AUTH_SOCK has not been exported correctly:

Code Block
phi216@shinobu-kf:~$ ssh-add -l
Could not open a connection to your authentication agent.

phi216@shinobu-kf:~$ echo $SSH_AUTH_SOCK


To correct this, manually copy and run the first line of output from when you first ran ssh-agent (without the semi-colon at the end) to set up SSH_AUTH_SOCK:

Code Block
phi216@shinobu-kf:~$ SSH_AUTH_SOCK=/tmp/ssh-1UA8zzylL6RV/agent.2236; export SSH_AUTH_SOCK

phi216@shinobu-kf:~$ echo $SSH_AUTH_SOCK
/tmp/ssh-1UA8zzylL6RV/agent.2236


It is also worth confirming that the temporary socket this points to actually exists:

Code Block
phi216@shinobu-kf:~$ ls -l /tmp/ssh-1UA8zzylL6RV/agent.2236
srw------- 1 phi216 phi216 0 May 11 10:06 /tmp/ssh-1UA8zzylL6RV/agent.2236


3) With ssh-agent up and running correctly, you can now add your SSH key. By default, the list of cached SSH keys should be empty:

Code Block
phi216@shinobu-kf:~$ ssh-add -l
The agent has no identities.


Add your SSH key (in this example, id_rsa):

Code Block
phi216@shinobu-kf:~$ ssh-add ~/.ssh/id_rsa
Enter passphrase for /home/phi216/.ssh/id_rsa:
Identity added: /home/phi216/.ssh/id_rsa (/home/phi216/.ssh/id_rsa)

phi216@shinobu-kf:~$ ssh-add -l
2048 SHA256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx /home/phi216/.ssh/id_rsa (RSA)


4) If ForwardAgent hasn't already been enabled for SSH on your desktop (if it has been enabled system-wide, it will normally be set in /etc/ssh/ssh_config), you will need to enable it locally for your account in ~/.ssh/config (create the file if it doesn't already exist). While you could enable it for all systems you SSH to, ideally you should restrict it to test-instance-1:

Code Block
phi216@shinobu-kf:~$ cat ~/.ssh/config
Host 146.118.113.9
  ForwardAgent yes
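
As a variation, you can also give the instance a friendly alias and default user in the same file (the alias name test-cluster here is just an illustrative example), so that a plain "ssh test-cluster" connects with forwarding enabled:

Code Block
Host test-cluster
  HostName 146.118.113.9
  User ubuntu
  ForwardAgent yes

Equivalently, for a one-off session you can pass the -A flag (ssh -A ubuntu@146.118.113.9) instead of setting ForwardAgent in the config file.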


Also ensure that this config file isn't world-writable:

Code Block
phi216@shinobu-kf:~$ ls -l ~/.ssh/config
-rw-r--r-- 1 phi216 phi216 38 May 10 17:05 /home/phi216/.ssh/config
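
If the permissions are looser than this, you can tighten them yourself:

Code Block
phi216@shinobu-kf:~$ chmod 644 ~/.ssh/config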


5) Now when you SSH to test-instance-1, not only will your SSH key's passphrase be cached (if it has one), but you will also be able to SSH from test-instance-1 to any other instance in your private cluster as if your SSH credentials were stored locally on that instance:

Code Block
phi216@shinobu-kf:~$ ssh ubuntu@146.118.113.9
---8<---
ubuntu@test-instance-1:~$ ssh 192.168.1.56


Hosts File


It is probably also worth adding all of the instances in your private cluster to /etc/hosts on test-instance-1, so that connecting to them is simpler. Just add the entries to the end of your hosts file:

Code Block
ubuntu@test-instance-1:~$ grep 192.168 /etc/hosts
192.168.1.52 test-instance-1
192.168.1.56 test-instance-2
192.168.1.58 test-instance-3
192.168.1.53 test-instance-4
192.168.1.62 test-instance-5
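
One way to append all of the entries in a single step (this needs root, hence sudo tee rather than a plain shell redirect) is:

Code Block
ubuntu@test-instance-1:~$ sudo tee -a /etc/hosts > /dev/null <<'EOF'
192.168.1.52 test-instance-1
192.168.1.56 test-instance-2
192.168.1.58 test-instance-3
192.168.1.53 test-instance-4
192.168.1.62 test-instance-5
EOF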


You can test this easily:

Code Block
ubuntu@test-instance-1:~$ for i in {2..5}; do ssh test-instance-$i 'ip a | grep 192.168'; done
    inet 192.168.1.56/24 brd 192.168.1.255 scope global ens3
    inet 192.168.1.58/24 brd 192.168.1.255 scope global ens3
    inet 192.168.1.53/24 brd 192.168.1.255 scope global ens3
    inet 192.168.1.62/24 brd 192.168.1.255 scope global ens3


Using pdsh


There are a number of tools for running commands across all instances in the cluster in parallel. One of the simplest is pdsh, which you only need to install on test-instance-1:

Code Block
ubuntu@test-instance-1:~$ sudo apt-get install pdsh
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  genders libgenders0 libltdl7
Suggested packages:
  rdist
The following NEW packages will be installed:
  genders libgenders0 libltdl7 pdsh
0 upgraded, 4 newly installed, 0 to remove and 37 not upgraded.
Need to get 207 kB of archives.
After this operation, 586 kB of additional disk space will be used.


Once installed, you should configure pdsh to use SSH as its default rcmd method. This is done by creating the file /etc/pdsh/rcmd_default (this will need to be done as root), containing only "ssh":

Code Block
ubuntu@test-instance-1:~$ cat /etc/pdsh/rcmd_default
ssh
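
One way to create this file in a single step is shown below (tee echoes what it writes, hence the "ssh" on the following line):

Code Block
ubuntu@test-instance-1:~$ echo ssh | sudo tee /etc/pdsh/rcmd_default
ssh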


You should also create a genders file (again as root) that contains the names of all of the other instances in the cluster:

Code Block
ubuntu@test-instance-1:~$ cat /etc/genders
test-instance-2
test-instance-3
test-instance-4
test-instance-5
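
Again, this can be generated in one step:

Code Block
ubuntu@test-instance-1:~$ for i in {2..5}; do echo test-instance-$i; done | sudo tee /etc/genders
test-instance-2
test-instance-3
test-instance-4
test-instance-5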


Now you can use "pdsh -a" to run a command across all instances simultaneously:

Code Block
ubuntu@test-instance-1:~$ pdsh -a 'ip a | grep 192.168'
test-instance-4:     inet 192.168.1.53/24 brd 192.168.1.255 scope global ens3
test-instance-2:     inet 192.168.1.56/24 brd 192.168.1.255 scope global ens3
test-instance-3:     inet 192.168.1.58/24 brd 192.168.1.255 scope global ens3
test-instance-5:     inet 192.168.1.62/24 brd 192.168.1.255 scope global ens3
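
If you only want to target a subset of the instances, pdsh also accepts an explicit host list via -w, including hostlist ranges:

Code Block
ubuntu@test-instance-1:~$ pdsh -w test-instance-[2-3] hostname
test-instance-2: test-instance-2
test-instance-3: test-instance-3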



Advanced topics: see the child pages under Advanced Topics & Troubleshooting.