Run Kubernetes on a Raspberry Pi with k3s

Create your own three-node Kubernetes cluster with these easy-to-follow instructions.

For a long time, I've been interested in building a Kubernetes cluster out of a stack of inexpensive Raspberry Pis. Following along with various tutorials on the web, I was able to get Kubernetes installed and working in a three-Pi cluster. However, the RAM and CPU requirements on the master node overwhelmed my Pi, causing poor performance during various Kubernetes tasks and making an in-place upgrade of Kubernetes impossible.

As a result, I was very excited to see the k3s project. K3s is billed as a lightweight Kubernetes for use in resource-constrained environments. It is also optimized for ARM processors. This makes running a Raspberry Pi-based Kubernetes cluster much more feasible. In fact, we are going to create one in this article.

Materials needed

To create the Kubernetes cluster described in this article, we are going to need:

  • At least one Raspberry Pi (with SD card and power adapter)
  • Ethernet cables
  • A switch or router to connect all our Pis together

We will be installing k3s from the internet, so the Pis will need to be able to reach the internet through the router.

An overview of our cluster

For this cluster, we are going to use three Raspberry Pis. The first we'll name kmaster and assign a static IP of 192.168.0.50 (since our local network is 192.168.0.0/24). The first worker node (the second Pi), we'll name knode1 and assign an IP of 192.168.0.51. The final worker node we'll name knode2 and assign an IP of 192.168.0.52.

Obviously, if you have a different network layout, you may use any network/IPs you have available. Just substitute your own values anywhere IPs are used in this article.

So that we don't have to keep referring to each node by IP, let's add their host names to our /etc/hosts file on our PC.

echo -e "192.168.0.50\tkmaster" | sudo tee -a /etc/hosts
echo -e "192.168.0.51\tknode1" | sudo tee -a /etc/hosts
echo -e "192.168.0.52\tknode2" | sudo tee -a /etc/hosts

Installing the master node

Now we're ready to install the master node. The first step is to install the latest Raspbian image. I am not going to explain that here, but I have a detailed article on how to do this if you need it. So please go install Raspbian, enable the SSH server, set the hostname to kmaster, and assign a static IP of 192.168.0.50.
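
As a sketch of the static IP step: Raspbian handles this through dhcpcd, so you can append a stanza like the following to /etc/dhcpcd.conf on the master. This assumes the wired interface is eth0 and your router/DNS lives at 192.168.0.1; adjust both for your own network.

# append to /etc/dhcpcd.conf on kmaster (adjust router/DNS for your network)
interface eth0
static ip_address=192.168.0.50/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1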

Now that Raspbian is installed on the master node, let's boot our master Pi and ssh into it:

ssh pi@kmaster

Now we're ready to install k3s. On the master Pi, run:

curl -sfL https://get.k3s.io | sh -

When the command finishes, we already have a single-node cluster set up and running! Let's check it out. Still on the Pi, run:

sudo kubectl get nodes

You should see something similar to:

NAME     STATUS   ROLES    AGE    VERSION
kmaster  Ready    master   2m13s  v1.14.3-k3s.1
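
If kubectl seems sluggish or unresponsive, you can also confirm that the k3s service itself came up; the installer registers it as a systemd unit named k3s:

sudo systemctl status k3s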

Extracting the join token

We want to add a couple of worker nodes. When installing k3s on those nodes we will need a join token. The join token exists on the master node's filesystem. Let's copy that and save it somewhere we can get to it later:

sudo cat /var/lib/rancher/k3s/server/node-token
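
If you'd rather not copy it by hand, you can read it from your PC over ssh and stash it in a local file. This assumes the pi user can run sudo without a password, which is the Raspbian default:

ssh pi@kmaster sudo cat /var/lib/rancher/k3s/server/node-token > node-token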

Installing the worker nodes

Grab some SD cards for the two worker nodes and install Raspbian on each. For one, set the hostname to knode1 and assign an IP of 192.168.0.51. For the other, set the hostname to knode2 and assign an IP of 192.168.0.52. Now, let's install k3s.
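
Before we do, one aside: if you prefer setting the hostname from a shell instead of through raspi-config, something like this should work on each worker (you may also want to update the 127.0.1.1 line in the Pi's own /etc/hosts to match):

sudo hostnamectl set-hostname knode1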

Boot your first worker node and ssh into it:

ssh pi@knode1

On the Pi, we'll install k3s as before, but we will give the installer extra parameters to let it know that we are installing a worker node and that we'd like to join the existing cluster:

curl -sfL https://get.k3s.io | K3S_URL=https://192.168.0.50:6443 \
K3S_TOKEN=join_token_we_copied_earlier sh -

Replace join_token_we_copied_earlier with the token from the "Extracting the join token" section. Repeat these steps for knode2.
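
On the workers, k3s runs as an agent rather than a server, so if you want to confirm from the node itself that the join went through, check the agent's systemd unit:

sudo systemctl status k3s-agent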

Access the cluster from our PC

It'd be annoying to have to ssh to the master node to run kubectl anytime we wanted to inspect or modify our cluster. So, we want to put kubectl on our PC. But first, let's get the configuration information we need from our master node. Ssh into kmaster and run:

sudo cat /etc/rancher/k3s/k3s.yaml

Copy this configuration information and return to your PC. Make a directory for the config:

mkdir ~/.kube

Save the copied configuration as ~/.kube/config. Now edit the file and change the line:

server: https://localhost:6443

to be:

server: https://kmaster:6443
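
If you'd like to skip the manual copy-and-edit, the same result can be achieved in one shot from the PC (again assuming passwordless sudo for the pi user):

ssh pi@kmaster sudo cat /etc/rancher/k3s/k3s.yaml | sed 's/localhost/kmaster/' > ~/.kube/config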

For security purposes, limit the file's read/write permissions to just yourself:

chmod 600 ~/.kube/config

Now let's install kubectl on our PC (if you don't already have it). The Kubernetes site has instructions for doing this for various platforms. Since I'm running Linux Mint, an Ubuntu derivative, I'll show the Ubuntu instructions here:

sudo apt update && sudo apt install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt update && sudo apt install kubectl

If you're not familiar, the above commands add a Debian repository for Kubernetes, grab its GPG key for security, and then update the list of packages and install kubectl. Now, we'll get notifications of any updates for kubectl through the standard software update mechanism.
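
As a quick sanity check that the client installed correctly, you can print its version without contacting any cluster:

kubectl version --client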

Now we can check out our cluster from our PC! Run:

kubectl get nodes

You should see something like:

NAME     STATUS  ROLES   AGE   VERSION
kmaster  Ready   master  12m   v1.14.3-k3s.1
knode1   Ready   worker  103s  v1.14.3-k3s.1
knode2   Ready   worker  103s  v1.14.3-k3s.1

Congratulations! You have a working 3-node Kubernetes cluster!

The k3s bonus

If you run kubectl get pods --all-namespaces, you will see some extra pods for Traefik. Traefik is a reverse proxy and load balancer that we can use to direct traffic into our cluster from a single entry point. Kubernetes allows for this but doesn't provide such a service directly. Having Traefik installed by default is a nice touch by Rancher Labs. This makes a default k3s install fully complete and immediately usable!
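
For example, to list those pods (k3s places Traefik in the kube-system namespace):

kubectl get pods --all-namespaces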

We're going to explore using Traefik through Kubernetes ingress rules and deploy all kinds of goodies to our cluster in future articles. Stay tuned!

Lee is a Christ-follower, husband, and father first, software engineer by trade, and a tinkerer/maker at heart.

10 Comments

K3s is a nice thing; I have it on a couple of small VPSes. Now I need to find out how to integrate it with Let's Encrypt - do you know of such a solution?

Yes! Keep an eye out here for my article on the subject coming out March 12th!

Hey

Thank you for this post. This is amazing.

Great information about Kubernetes.

So far I have yet to successfully connect a worker node with the master. Tried a while back with Pi 2s and now using Pi 4s. No luck with either one although I was able to resolve an initial problem on the Pi 4s that had had me stumped on the Pi 2s.

I wish I could figure out why! Any idea where I can go to start a conversation with some knowledgeable people?

P.S. Thanks for letting me know that it's really possible! All of the leads that I have found so far were to install older versions of k3s. :(

@sgtrock, that seems odd. Send me a message on twitter (@elcarpie) if you have a twitter account, or reply with specifics in the comments of the YouTube video and I'll see if I can help.

In reply to sgtrock

I tried to run this on 3× Rock64 1GB, but in HA mode with embedded DB (experimental).
Unfortunately that took 30% CPU and 50% RAM so I had to quickly bail.

Maybe someday.

I might have missed it ... can you indicate what rev of Pi you used for the initial setup? And what node roles need more power? Do I assume the worker nodes are the ones that could use more RAM/CPU power?

Thanks!

Initially I was using Raspberry Pi 3Bs with 1GB of RAM. The worker nodes were fine. The master node was the one that suffered. I was using a standard bare metal Kubernetes install on those. I had things "working" but deploys and sometimes responses to kubectl would be incredibly slow and sometimes the deploys wouldn't work at all. I have a screenshot of where I ssh'd in to the master node. It had an uptime of 23 days, 913M of 969M memory used, and a load average of 117.08 92.24 73.08. :o I think the RAM issue was probably the source of the problems. A Raspberry Pi 4 setup with 4G RAM might be ok, but I haven't tried it. k3s has been working great on the 3Bs however.

In reply to JamesF

I am pleased to report that I have been attempting to find a way to use some spare RPi-3Bs to build my first cluster. For most of a week I have followed one rabbit hole after another without success. This thread fixed my problem. I followed the instructions step-by-step and used my x86 laptop with Raspbian for remote access through ssh.

My rig contains one RPi-3B+ as master booting from a WD 314GB PiDrive running Buster Standard, plus four RPi-3B clients booting from Samsung EVO 32 GB SD cards running Buster Lite. I took my time, paid special attention to the manual edits, corrected typos, and even took a lunch break! But still, I completed these steps in just a few hours. Why didn't your article come up last TUESDAY!! :)

All good now. Thank you SO MUCH. Now I can get on with learning instead of installing every variant of the unsuccessful and incomplete steps found at other sites.

OH, did I mention...THANK YOU?

I'll have more of that RPi-in-the-Sky, Please.
PiRexTech

In reply to carpie

This is by far the fastest and easiest option I have tried.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.