...
In this section, we're going to assume you have created a cluster of instances on your private network, but only one of those instances has a public floating IP address. This means that all of those instances can connect happily to each other on the private network (via their 192.168.x.x addresses), but you can only connect to one of them directly (via a 146.118.x.x address). Ideally, you want a way to access and manage that cluster as a whole. In this case, we have a test cluster of 5 instances:
Instance Name | Private Address | Public Address |
---|---|---|
test-instance-1 | 192.168.1.52 | 146.118.113.9 |
test-instance-2 | 192.168.1.56 | |
test-instance-3 | 192.168.1.58 | |
test-instance-4 | 192.168.1.53 | |
test-instance-5 | 192.168.1.62 | |
Of course, there is nothing stopping you from assigning a public IP address to every instance in your cluster; however, you may prefer to keep external exposure to a minimum, especially if the bulk of the cluster's work will be done on the private network.
SSH Access
All 5 instances should have your SSH key loaded onto them; however, test-instance-1 is the only one you can SSH to directly from your desktop:
...
In theory, you should be able to SSH from test-instance-1 to any of the other instances. However, as the instances are passwordless by default, you will need to use your SSH key to connect to them. There are a couple of different ways this can be done.
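If your key isn't available on test-instance-1 yet, such an attempt will typically fail with OpenSSH's public-key error (output shown here is illustrative):

```
ubuntu@test-instance-1:~$ ssh 192.168.1.56
Permission denied (publickey).
```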
Copy SSH Key
The most direct approach is to manually copy your SSH key (both the private and public components) from your desktop to test-instance-1. That way you can use your credentials directly when SSHing to any other instance in the cluster:
```
ubuntu@test-instance-1:~$ ls -l .ssh/id_rsa*
-rw-rw-r-- 1 ubuntu ubuntu 1766 May 11 01:22 .ssh/id_rsa
-rw-r--r-- 1 ubuntu ubuntu  398 May 11 01:22 .ssh/id_rsa.pub
ubuntu@test-instance-1:~$ ssh 192.168.1.56
```
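The copy step itself might look like the following, run from your desktop. This is a minimal sketch, assuming your key pair lives at the default ~/.ssh/id_rsa path shown above:

```
# Copy both key components from your desktop to test-instance-1
phi216@shinobu-kf:~$ scp ~/.ssh/id_rsa ~/.ssh/id_rsa.pub ubuntu@146.118.113.9:.ssh/

# On the instance, make sure the private key is not group/world readable,
# otherwise ssh will refuse to use it
ubuntu@test-instance-1:~$ chmod 600 ~/.ssh/id_rsa
```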
The main drawback with this approach is that, from a security perspective, you may not want your SSH key stored on more machines than necessary.
SSH Agent Forwarding
If you have a Linux or Mac based desktop, you can instead forward your SSH credentials from your desktop through your SSH connection to test-instance-1. This is done using ssh-agent. The exact process may vary depending on your operating system; the steps outlined below are for Ubuntu 16.04.
...
4) If ForwardAgent hasn't already been enabled for SSH on your desktop (normally it would be found in /etc/ssh/ssh_config if it is), you will need to enable it locally for your account in ~/.ssh/config (create the file if it doesn't already exist). While you can enable it for all systems you SSH to, ideally you should restrict it to test-instance-1:
```
phi216@shinobu-kf:~$ cat ~/.ssh/config
Host 146.118.113.9
    ForwardAgent yes
```
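To confirm forwarding is working, you can ask the agent to list its identities. Run on test-instance-1, this should show the key held by the agent on your desktop (a sketch; the fingerprint is elided):

```
ubuntu@test-instance-1:~$ ssh-add -l
2048 SHA256:... /home/phi216/.ssh/id_rsa (RSA)
```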
...
5) Now when you SSH to test-instance-1, not only will ssh-agent cache the passphrase for your SSH key (if it has one), but you will also be able to SSH from test-instance-1 to any other instance in your private cluster as if your SSH credentials were stored locally on that instance:
```
phi216@shinobu-kf:~$ ssh ubuntu@146.118.113.9
---8<---
ubuntu@test-instance-1:~$ ssh 192.168.1.56
```
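If you'd rather not change your SSH configuration, the same effect can be had per-connection with ssh's -A option, which enables agent forwarding for that session only:

```
phi216@shinobu-kf:~$ ssh -A ubuntu@146.118.113.9
```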
Hosts File
It is probably also worth adding all of the instances in your private cluster to /etc/hosts on test-instance-1. That way, connecting to them will be simpler. Just add the entries to the end of your hosts file:
```
ubuntu@test-instance-1:~$ grep 192.168 /etc/hosts
192.168.1.52    test-instance-1
192.168.1.56    test-instance-2
192.168.1.58    test-instance-3
192.168.1.53    test-instance-4
192.168.1.62    test-instance-5
```
You can test this easily:
```
ubuntu@test-instance-1:~$ for i in {2..5}; do ssh test-instance-$i 'ip a | grep 192.168'; done
inet 192.168.1.56/24 brd 192.168.1.255 scope global ens3
inet 192.168.1.58/24 brd 192.168.1.255 scope global ens3
inet 192.168.1.53/24 brd 192.168.1.255 scope global ens3
inet 192.168.1.62/24 brd 192.168.1.255 scope global ens3
```
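Note that the first SSH connection to each instance will prompt you to accept its host key, which interrupts loops like the one above. If you trust the private network, one way to pre-populate known_hosts is with ssh-keyscan, relying on the hosts entries you just added (a sketch):

```
ubuntu@test-instance-1:~$ ssh-keyscan test-instance-{2..5} >> ~/.ssh/known_hosts
```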
Using pdsh
When running commands across all instances in the cluster, there are a number of tools that allow you to run commands in parallel. One of the simplest is pdsh, which you only need to install on test-instance-1:
```
ubuntu@test-instance-1:~$ sudo apt-get install pdsh
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  genders libgenders0 libltdl7
Suggested packages:
  rdist
The following NEW packages will be installed:
  genders libgenders0 libltdl7 pdsh
0 upgraded, 4 newly installed, 0 to remove and 37 not upgraded.
Need to get 207 kB of archives.
After this operation, 586 kB of additional disk space will be used.
```
...
Once installed, you should configure pdsh to use SSH by default for rcmd. This is done by creating the file /etc/pdsh/rcmd_default (as root), containing only the word "ssh":
```
ubuntu@test-instance-1:~$ cat /etc/pdsh/rcmd_default
ssh
```
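One way to create the file is with the sudo tee idiom, which avoids the shell redirect running as your unprivileged user:

```
ubuntu@test-instance-1:~$ echo "ssh" | sudo tee /etc/pdsh/rcmd_default
ssh
```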
...
You should also create a genders file (again as root) containing the names of all of the other instances in the cluster:
```
ubuntu@test-instance-1:~$ cat /etc/genders
test-instance-2
test-instance-3
test-instance-4
test-instance-5
```
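In its simplest form each line is just a hostname, as above; genders entries can also carry attributes, which lets you target subsets of the cluster with "pdsh -g" rather than hitting every host. A hypothetical split between compute and storage nodes might look like:

```
test-instance-2 compute
test-instance-3 compute
test-instance-4 storage
test-instance-5 storage
```

With that in place, pdsh -g compute 'uptime' would run the command only on the compute nodes.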
Now you can run "pdsh -a" to run a specific command across all instances simultaneously:
```
ubuntu@test-instance-1:~$ pdsh -a 'ip a | grep 192.168'
test-instance-4: inet 192.168.1.53/24 brd 192.168.1.255 scope global ens3
test-instance-2: inet 192.168.1.56/24 brd 192.168.1.255 scope global ens3
test-instance-3: inet 192.168.1.58/24 brd 192.168.1.255 scope global ens3
test-instance-5: inet 192.168.1.62/24 brd 192.168.1.255 scope global ens3
```
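Because pdsh runs in parallel, output from different instances arrives interleaved, as you can see above. The dshbak utility (shipped with pdsh) can group output per host, and the -w option lets you target an explicit host list without touching /etc/genders (a sketch):

```
# Group output by host rather than interleaving it
ubuntu@test-instance-1:~$ pdsh -a uptime | dshbak -c

# Target an explicit range of hosts instead of using the genders file
ubuntu@test-instance-1:~$ pdsh -w test-instance-[2-5] uptime
```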
...