Easy Raspberry Pi Cluster Setup with Cloudmesh from macOS

Set up many SD Cards directly from macOS that are preconfigured to create Pi clusters.
By Gregor von Laszewski (laszewski@gmail.com, laszewski.github.io), Richard Otten, Anthony Orlowski, Adam … | 2021.02.23

In this tutorial, we explain how to easily set up a cluster of Pis by burning preconfigured SD Cards from a Mac. We assume you use an SD Card reader/writer that is plugged into your Mac.

Learning Objectives

Topics covered

1. Introduction

Over time we have seen many efforts to create clusters using Pis as a platform. There are many reasons for this: you have full control over the Pis, the platform is inexpensive, and it is highly usable, which provides an enormous benefit when learning about cluster computing in general.

There are different methods of how to set up a cluster. These include setups that are known under the terms headless, network booting, and booting from SD Cards. Each of the methods has its advantages and disadvantages. However, the last method is most familiar to users in the Pi community who come from single Pis. While reviewing the many efforts that describe a cluster setup, we found that most of them contain many complex steps that require a significant amount of time, as they are executed individually on each of the Pis. Even starting is non-trivial, as a network needs to be set up to access them.

Despite the much-improved Pi Imager and the availability of PiBakery, the process is still involved. So we started asking:

Is it possible to develop a tool that is specifically targeted at burning SD Cards for the Pis in a cluster one at a time, so we can just plug the cards in and, with minimal effort, start a cluster that simply works?

You are in luck: we have spent some time developing such a tool and present it as part of PiPlanet1. No more spending hours upon hours replicating steps or working through complex DevOps tutorials; instead, get a cluster set up easily with just a few commands.

For this, we developed cms burn, a program that you can execute either on a “manager” Pi or on a Linux or macOS computer to burn cards for your cluster.

We have set up on GitHub a comprehensive package that can be installed easily; we hope that it is useful to you. All of this is discussed in detail in the cloudmesh-pi-burn README2. There you can also find detailed instructions on how to burn directly from a Mac or Linux computer. To showcase how easy it is to use, we demonstrate here the setup of a cluster with five nodes.

2. Requirements

For a list of possible part choices, please see the parts list3.

3. The Goal

We will be creating the following setup using 5 Raspberry Pis (you need a minimum of 2, but our method also works for larger numbers of Pis). Consequently, you will also need 5 SD cards, one for each of the 5 Pis. You will also want a network switch (managed or unmanaged) with 5 ethernet cables (one for each Pi).

Figure 1 shows our network configuration. Of the five Raspberry Pis, one is dedicated as a manager and four as workers. The manager uses WiFi to reach your home network, allowing you to set the cluster up anywhere in your house or dorm (other configurations are discussed in the README).

We use an unmanaged network switch, where the manager and workers can communicate locally with each other, and the manager provides internet access to the workers via a bridge that we configure for you.

Figure 1: Pi Cluster setup with bridge network

4. Set up the Cloudmesh burn program on your Mac

On your Mac do the following. First set up a Python venv:

user@mac $ python3 -m venv ~/ENV3
user@mac $ source ~/ENV3/bin/activate

Next, install the cloudmesh cluster generation tools and start the burn process:

(ENV3) user@mac $ pip install cloudmesh-pi-cluster
(ENV3) user@mac $ cms help
(ENV3) user@mac $ cms burn info 
(ENV3) user@mac $ cms burn raspberry "red,red0[1-4]" --device=/path/to/dev -f

Fill out the passwords and plug in the cards as requested.
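The pattern "red,red0[1-4]" used above is a host range expression that expands to the manager red and the workers red01 through red04. The following is an illustrative sketch (not the actual cloudmesh parameter expansion code) of how such a pattern expands into individual hostnames:

```python
# Hypothetical sketch of expanding a host pattern such as "red,red0[1-4]".
# The real expansion is implemented inside cloudmesh; this only illustrates
# the idea behind the notation.
import re

def expand_hosts(pattern):
    """Expand entries like 'red0[1-4]' into red01, red02, red03, red04."""
    hosts = []
    for part in pattern.split(","):
        match = re.fullmatch(r"(.*)\[(\d+)-(\d+)\]", part)
        if match:
            prefix, lo, hi = match.groups()
            width = len(lo)  # preserve any zero padding in the range bounds
            hosts.extend(f"{prefix}{i:0{width}d}"
                         for i in range(int(lo), int(hi) + 1))
        else:
            hosts.append(part)
    return hosts

print(expand_hosts("red,red0[1-4]"))
# ['red', 'red01', 'red02', 'red03', 'red04']
```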

5. Start your Cluster and Configure it

After the burn is completed, plug the cards into your Pis and switch them on. On your Mac, execute the ssh command to log into your manager, which we called red. Worker nodes have a number in their names.

(ENV3) user@mac $ ssh pi@red.local  

This will take a while, as the file system on the SD Card needs to be set up and configurations such as country, ssh, and WiFi need to be activated.

Once you are on the manager, install the cloudmesh cluster software on it as well (we could have done this automatically, but decided to leave this part of the process up to you to give you maximum flexibility).

pi@red:~ $ curl -Ls http://cloudmesh.github.io/get/pi | sh -

... after lots of log messages, you will see ...

#################################################
# Install Completed                             #
#################################################
Time to update and upgrade: 339 s
Time to install the venv:   22 s
Time to install cloudmesh:  185 s
Time for total install:     546 s
Time to install: 546 s
#################################################
Please activate with    source ~/ENV3/bin/activate    

Now just reboot with

pi@red:~ $ sudo reboot

6. Burn Verification and Post-Process Steps

After you boot, we recommend waiting 2-3 minutes for the boot process to complete.

6.1 Setting up a Proxy Jump with cms host

While we are waiting for the Pis to boot, we can set up a proxy jump on our laptop/desktop by adding entries to the ssh config file. This will make it easier to access our workers. Use the following command to set this up:

(ENV3) user@mac $ cms host config proxy pi@red.local "red0[1-4]"

It will make the appropriate modifications to the ssh config file.
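For illustration, the entries added to the ssh config file typically look along these lines (the exact output of cms host config proxy may differ; the hostnames and user match the cluster built in this tutorial):

```
Host red
    HostName red.local
    User pi

Host red01 red02 red03 red04
    User pi
    ProxyJump red
```

With entries like these in place, a plain `ssh red01` from the laptop is transparently tunneled through the manager, which is why the workers do not need their own WiFi connection.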

6.2 Verifying Manager and Worker Access

First verify that you can reach the manager (red).

(ENV3) user@mac $ ssh red
...
pi@red:~ $ exit

We can use a simple cms command to verify the connection to our Pis. For this purpose, we use our built-in temperature command, which reads the temperature values from each of the Pis.

(ENV3) user@mac $ cms pi temp "red,red0[1-4]"
pi temp red,red0[1-4]
+--------+--------+-------+----------------------------+
| host   |    cpu |   gpu | date                       |
|--------+--------+-------+----------------------------|
| red    | 47.712 |  47.2 | 2021-03-27 19:52:56.674668 |
| red01  | 37.485 |  37.4 | 2021-03-27 19:52:57.333300 |
| red02  | 38.946 |  38.9 | 2021-03-27 19:52:57.303389 |
| red03  | 38.946 |  39.4 | 2021-03-27 19:52:57.440690 |
| red04  | 38.936 |  39.4 | 2021-03-27 19:52:57.550690 |
+--------+--------+-------+----------------------------+

By receiving this information from our devices we have confirmed our access.
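As background on where such numbers come from: the Raspberry Pi kernel exposes the SoC temperature in millidegrees Celsius under /sys/class/thermal. The sketch below (an illustration under that assumption, not the cms implementation) shows how a reading like the ones in the table can be obtained on a single Pi:

```python
# Illustrative sketch: reading the CPU temperature on a Raspberry Pi.
# The kernel reports millidegrees Celsius as a plain integer string.
def parse_millideg(raw: str) -> float:
    """Convert a /sys/class/thermal reading (millidegrees C) to degrees C."""
    return int(raw.strip()) / 1000.0

def read_cpu_temp(path="/sys/class/thermal/thermal_zone0/temp") -> float:
    """Read the current SoC temperature in degrees Celsius (Pi only)."""
    with open(path) as f:
        return parse_millideg(f.read())

print(parse_millideg("47712"))  # 47.712
```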

6.3 Gather and Scatter Authorized Keys

Each of the nodes only has our laptop’s ssh key in its respective authorized_keys file. We can use cms commands to gather all keys in our cluster and then distribute them so that the nodes can ssh into each other.

We first create ssh-keys for all the nodes in our cluster.

(ENV3) user@mac $ cms host key create "red,red0[1-4]"
host key create red,red0[1-4]
+-------+---------+--------------------------------------------------+
| host  | success | stdout                                           |
+-------+---------+--------------------------------------------------+
| red   | True    | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC99RR79UTQ |
|       |         | JznOPRN/FI6MqNsZ5Eirqs7JXA4UYfnMx8LVaD/ZcI6Sajr0 |
|       |         | 2nw2ucec8OMxtdzPrpWX5B+Hxz3GZWNKCbh2yHhMDIrf/Ohn |
|       |         | QGJrBx1mtAbqy4gd7qljEsL0FummdYmPdtHFBy7t2zkVp0J1 |
|       |         | V5YiLEbbrmX9RXQF1bJvHb4VNOOcwq47qX9h561q8qBxgQLz |
|       |         | F3iHmrMxmL8oma1RFVgZmjhnKMoXF+t13uZrf2R5/hVO4K6T |
|       |         | +PENSnjW7OX6aiIT8Ty1ga74FhXr9F5t14cofpN6QwuF2SqM |
|       |         | CgpVGfRSGMrLI/2pefszU2b5eeICWYePdopkslML+f+n     |
|       |         | pi@red                                           |
| red01 | True    | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRN/rGGF+e |
|       |         | dZ9S2IWX4P26F7T2H+nsbw7CfeJ6df9oX/npYuM9BPDzcEP7 |
|       |         | +2jNIZtZVehJj5zHiodjAspCxjg+mByQZye1jioa3MBmgO3j |
|       |         | VwxCOPA7x0Wc2Dm9/QOg1kMMMlnNT2NHs+SeQMDXoUkpaLCQ |
|       |         | 108VQxaTclJ67USC1e/9B7OtjhsVm5kGA7Eamyd5PgtANT7G |
|       |         | jHERXSnGzFnbqqsZEBvCLWLSbaSq3adWR1bVvDblu11nyitE |
|       |         | x7YKZA9RK0A7rTBzrZa70SfG65dSbpqQFxmQYSrgiwVSBokk |
|       |         | 0vvk5l7NhBDrrrxWYWd9dm/SrDrInjcuDsCsjOVuccx7     |
|       |         | pi@red01                                         |
... # Omitted some output for brevity
+-------+---------+--------------------------------------------------+

We can subsequently gather these keys into a file.

(ENV3) user@mac $ cms host key gather "red,red0[1-4]" ~/.ssh/cluster_red_keys

We then scatter them to the authorized_keys files of our nodes.

(ENV3) user@mac$ cms host key scatter "red,red0[1-4]" ~/.ssh/cluster_red_keys
host key scatter red,red0[1-4] /Users/richie/.ssh/cluster_red_keys
+-------+---------+--------+
| host  | success | stdout |
+-------+---------+--------+
| red   | True    |        |
| red01 | True    |        |
| red02 | True    |        |
| red03 | True    |        |
| red04 | True    |        |
+-------+---------+--------+

All nodes should now have ssh access to each other.
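Conceptually, the gather/scatter pair above works as follows (an illustrative sketch, not the cloudmesh implementation): gather collects each node's public key into one list, and scatter appends that combined list to every node's authorized_keys.

```python
# Illustrative sketch of the gather/scatter idea for ssh keys.
# Hosts and keys are modeled as plain dictionaries instead of ssh sessions.

def gather(keys_by_host):
    """Collect the public key of every host into one combined list."""
    return [keys_by_host[host] for host in sorted(keys_by_host)]

def scatter(all_keys, authorized_by_host):
    """Append the gathered keys to every host's authorized_keys list."""
    for host, authorized in authorized_by_host.items():
        for key in all_keys:
            if key not in authorized:  # avoid duplicate entries
                authorized.append(key)
    return authorized_by_host

# Example: two nodes, each initially trusting only the laptop's key.
keys = {"red": "ssh-rsa AAA... pi@red", "red01": "ssh-rsa BBB... pi@red01"}
authorized = {"red": ["laptop-key"], "red01": ["laptop-key"]}
scatter(gather(keys), authorized)
# afterwards, every node's list contains every node's key
```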

7. More Information

As we use ssh keys to authenticate between manager and workers, you can directly log into the workers from the manager.

More details are provided on our web pages at

Other cloudmesh components are discussed in the cloudmesh manual4.

Acknowledgement

We would like to thank the following community members for testing the recent versions: Venkata Sai Dhakshesh Kolli, Rama Asuri, Adam Ratzman. Previous versions of the software obtained code contributions from Sub Raizada, Jonathan Branam, Fugang Wang, Anand Sriramulu, Akshay Kowshik.

References

  1. PiPlanet Web Site, https://piplanet.org ↩︎
  2. Cloudmesh pi burn README, https://github.com/cloudmesh/cloudmesh-pi-burn/blob/main/README.md ↩︎
  3. Parts for building clusters, https://cloudmesh.github.io/pi/docs/hardware/parts/ ↩︎
  4. Cloudmesh Manual, https://cloudmesh.github.io/cloudmesh-manual/ ↩︎