This blog post will take you through installing and configuring a high-availability, five-node K3s Kubernetes cluster with three servers (masters) and two agents (workers). The setup also has an Nginx load balancer in front of the three servers to provide high availability.

Here’s the architecture overview of the high availability K3s Kubernetes cluster.

K3s HA Architecture · Image source: https://docs.k3s.io/architecture

Install Nginx

I use Proxmox to virtualize the containers and virtual machines in my lab. For this particular setup, I used an LXC container for the Nginx load balancer.

The first part, the creation of the container, is pretty straightforward. I allocated 1 CPU, 1024 MiB of RAM and 8 GiB of disk space for my container. I also used the Ubuntu Server 20.04 template.
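
If you prefer the CLI to the Proxmox web UI, the same container can be created with pct. This is just an illustrative sketch: the VMID, hostname, storage name and template file below are hypothetical placeholders, so adjust them to your environment.

# Create the LXC container (VMID 200 is arbitrary; template and storage names are placeholders)
pct create 200 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz \
    --hostname nginx-lb --cores 1 --memory 1024 --rootfs local-lvm:8 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
# Start the container
pct start 200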

After the container is set up and you’ve SSH’d into it, run the standard updates.

sudo apt update && sudo apt upgrade

Next, install Nginx.

sudo apt-get install nginx -y

Once installed, start the Nginx service and enable it to start at system reboot.

sudo systemctl start nginx
sudo systemctl enable nginx

Verify the status of the Nginx service using the following command.

sudo systemctl status nginx

Set up an Nginx Load Balancer

Move the default Nginx configuration files aside and create a new load balancer configuration file.

sudo mv /etc/nginx/sites-enabled/default /etc/nginx/sites-enabled/default.old
sudo mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.old
sudo nano /etc/nginx/nginx.conf

Add the following config lines.

# Loading the ngx_stream_module module lets Nginx handle TCP/UDP traffic in addition to HTTP
# Comment this line out if you are running Nginx in Docker, where the module is already built in
load_module /usr/lib/nginx/modules/ngx_stream_module.so;
events {}

stream {
  upstream k3s_servers {
    server <IP1>:6443; # Change to the IP of the K3s first master VM
    server <IP2>:6443; # Change to the IP of the K3s second master VM
    server <IP3>:6443; # Change to the IP of the K3s third master VM
  }

  server {
    listen 6443;
    proxy_pass k3s_servers;
  }
}
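
One optional tweak: open-source Nginx doesn’t do active health checks for stream proxies, but the upstream server entries accept passive health-check parameters so a failing server is temporarily taken out of rotation. A sketch with example values (tune max_fails and fail_timeout to your taste):

upstream k3s_servers {
  # After three failed connection attempts, skip this server for 10 seconds
  server <IP1>:6443 max_fails=3 fail_timeout=10s;
  server <IP2>:6443 max_fails=3 fail_timeout=10s;
  server <IP3>:6443 max_fails=3 fail_timeout=10s;
}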

Verify the Nginx configuration for syntax errors.

sudo nginx -t

If the configuration is correct, the output should look like this.

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Restart the Nginx service to apply the changes.

sudo systemctl restart nginx

Run the following command to display all TCP sockets currently listening and the process that uses each socket. Check that Nginx is listening on port 6443.

sudo ss -antpl

Here’s an example output of the ss -antpl command.

State            Recv-Q           Send-Q                     Local Address:Port                     Peer Address:Port          Process
LISTEN           0                4096                       127.0.0.53%lo:53                            0.0.0.0:*              users:(("systemd-resolve",pid=93,fd=13))
LISTEN           0                100                            127.0.0.1:25                            0.0.0.0:*              users:(("master",pid=276,fd=13))
LISTEN           0                511                              0.0.0.0:6443                          0.0.0.0:*              users:(("nginx",pid=470,fd=5),("nginx",pid=469,fd=5))
LISTEN           0                100                                [::1]:25                               [::]:*              users:(("master",pid=276,fd=14))
LISTEN           0                4096                                   *:22                                  *:*              users:(("systemd",pid=1,fd=55))

Install K3s on servers

A high-availability K3s installation requires a cluster datastore. In my setup, I’m using embedded etcd. There are other options, like MySQL, MariaDB and PostgreSQL; more information is available at https://docs.k3s.io/datastore.
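
For comparison, an external datastore is selected with the --datastore-endpoint flag instead of --cluster-init. A minimal sketch with a hypothetical MySQL instance (the credentials, hostname and database name are placeholders):

curl -sfL https://get.k3s.io | sh -s - server \
    --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/k3s"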

Remember to use an odd number of server nodes so that etcd can maintain quorum. More information can be found in the official documentation: https://docs.k3s.io/datastore/ha-embedded.

In my lab, I used three VMs for the server nodes, giving each 2 CPUs, 2 GiB of RAM and 32 GiB of disk in my Proxmox environment.

You’ll need to come up with a SECRET token and substitute it in the command below. You also need to replace load_balancer_ip_or_hostname with your Nginx IP address.

Run the following command on the first server. It will finish in 10-15 seconds.

curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server \
    --node-taint CriticalAddonsOnly=true:NoExecute \
    --cluster-init \
    --tls-san=load_balancer_ip_or_hostname
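
The install script also registers K3s as a systemd service named k3s on server nodes, so before checking the nodes you can confirm the service came up cleanly.

sudo systemctl status k3s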

Verify setup by running the following command.

sudo k3s kubectl get nodes

The output should look like this. Notice that in addition to the control-plane and master roles, there’s an etcd role.

NAME      STATUS   ROLES                       AGE   VERSION
cube-01   Ready    control-plane,etcd,master   42s   v1.27.6+k3s1

Get the token from the first server. You’ll use it as SECRET to join the second and third servers to the cluster.

sudo cat /var/lib/rancher/k3s/server/node-token

After launching the first server, join the second and third servers to the cluster using the shared secret.

Don’t forget to replace SECRET with the token from the first node (see above), <ip or hostname of server1> with your first server’s address, and load_balancer_ip_or_hostname with your Nginx IP address.

curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server \
    --node-taint CriticalAddonsOnly=true:NoExecute \
    --server https://<ip or hostname of server1>:6443 \
    --tls-san=load_balancer_ip_or_hostname

Verify that the second node joined the cluster.

sudo k3s kubectl get node

You should see output similar to what I got in my lab.

NAME      STATUS   ROLES                       AGE   VERSION
cube-01   Ready    control-plane,etcd,master   11m   v1.27.6+k3s1
cube-02   Ready    control-plane,etcd,master   12s   v1.27.6+k3s1

Repeat on the third server the step you ran on the second server. Use the command as-is, i.e. do not change any parameters.

Verify that the third node joined the cluster; you should see three nodes as a result.

NAME      STATUS   ROLES                       AGE   VERSION
cube-01   Ready    control-plane,etcd,master   32m   v1.27.6+k3s1
cube-02   Ready    control-plane,etcd,master   21m   v1.27.6+k3s1
cube-03   Ready    control-plane,etcd,master   38s   v1.27.6+k3s1
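
This is also a good moment to sanity-check the load balancer itself. By default, Kubernetes grants unauthenticated clients access to a few endpoints such as /version, so a request through the Nginx load balancer should return the API server’s version JSON. The -k flag is needed because your machine doesn’t trust the cluster CA yet; replace the placeholder with your Nginx address.

curl -k https://<load_balancer_ip_or_hostname>:6443/version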

Install K3s on agents

Get a token from any of the three servers in the cluster. The token will be the same on all servers.

sudo cat /var/lib/rancher/k3s/server/node-token

Run the following command on the first agent VM, replacing SECRET with the token acquired from the server and <load_balancer_ip_or_hostname> with your Nginx IP address.

curl -sfL https://get.k3s.io | K3S_URL=https://<load_balancer_ip_or_hostname>:6443 K3S_TOKEN=SECRET sh -
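
On agents, the install script registers the systemd unit as k3s-agent rather than k3s, so the service can be checked with:

sudo systemctl status k3s-agent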

Verify that the first agent joined the cluster by running sudo k3s kubectl get nodes on one of the servers.

NAME       STATUS   ROLES                       AGE   VERSION
cube-01    Ready    control-plane,etcd,master   57m   v1.27.6+k3s1
cube-02    Ready    control-plane,etcd,master   45m   v1.27.6+k3s1
cube-03    Ready    control-plane,etcd,master   25m   v1.27.6+k3s1
drone-01   Ready    <none>                      16s   v1.27.6+k3s1

Repeat the same command used on the first agent on any other agents you want to join the cluster.

Verify that the agents joined the cluster.

NAME       STATUS   ROLES                       AGE     VERSION
cube-01    Ready    control-plane,etcd,master   61m     v1.27.6+k3s1
cube-02    Ready    control-plane,etcd,master   50m     v1.27.6+k3s1
cube-03    Ready    control-plane,etcd,master   29m     v1.27.6+k3s1
drone-01   Ready    <none>                      4m22s   v1.27.6+k3s1
drone-02   Ready    <none>                      5s      v1.27.6+k3s1
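
As a side note, the <none> under ROLES is cosmetic. If you’d like the agents to display a worker role instead, you can label them from any server node; the node names below are from my lab, so substitute your own.

sudo k3s kubectl label node drone-01 node-role.kubernetes.io/worker=worker
sudo k3s kubectl label node drone-02 node-role.kubernetes.io/worker=worker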

Configuring Cluster Access

I usually use my regular development computer to do all the admin and coding on the Kubernetes cluster. Here are the configuration steps to access the newly installed K3s cluster locally without SSH’ing into the cluster nodes.

Install kubectl

First, download the latest kubectl binary on your local machine (the command below fetches the Linux amd64 build).

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
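
Optionally, you can validate the binary against its published checksum before installing; sha256sum should report "kubectl: OK".

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check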

Next, install the downloaded binary.

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

Verify the installation.

kubectl version --client

Copy Cluster Configuration

Copy the contents of the /etc/rancher/k3s/k3s.yaml file to the clipboard. You can find it on any of the three server nodes we created above.

Check if you have a ~/.kube/ folder on the local machine, and create it if required. After that, create the ~/.kube/config file and paste the contents from the clipboard into the new file.
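
Alternatively, you can pull the file in one step over SSH, assuming your user on the server can run sudo without a password prompt; the user and IP below are placeholders.

mkdir -p ~/.kube
ssh <user>@<server1_ip> 'sudo cat /etc/rancher/k3s/k3s.yaml' > ~/.kube/config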

nano ~/.kube/config

In the configuration file, update the server value with your load balancer IP. Save and close the file.
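
For reference, the relevant part of the file looks roughly like this. By default, k3s.yaml points at https://127.0.0.1:6443, and only the server line needs to change; the IP below is a placeholder.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <unchanged>
    server: https://<load_balancer_ip>:6443
  name: default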

To ensure that kubectl is working correctly from your local machine, run the following command.

kubectl get nodes

The output should be similar to the one I got in my lab.

NAME       STATUS   ROLES                       AGE   VERSION
cube-01    Ready    control-plane,etcd,master   89m   v1.27.6+k3s1
cube-02    Ready    control-plane,etcd,master   77m   v1.27.6+k3s1
cube-03    Ready    control-plane,etcd,master   57m   v1.27.6+k3s1
drone-01   Ready    <none>                      32m   v1.27.6+k3s1
drone-02   Ready    <none>                      28m   v1.27.6+k3s1

In addition, I like to assign an alias to kubectl, so it’s less typing for me.

alias k=kubectl

Verify the result by running the following command. You should receive the same output mentioned above.

k get nodes

Optional: Enable kubectl auto-completion

I’m using bash, but you can find the configuration for zsh, fish and other shells on the official page at https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#optional-kubectl-configurations-and-plugins.

First, install bash-completion.

sudo apt-get install bash-completion

Verify that it’s installed. It should print the function’s definition.

type _init_completion

Next, enable it system-wide.

kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
sudo chmod a+r /etc/bash_completion.d/kubectl

Since I’m using an alias for kubectl, I also executed the following commands to extend auto-completion to the alias.

echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc

After that, you must restart your shell to enable auto-completion. Alternatively, you can run the following command:

source ~/.bashrc

Conclusion

Setting up a high-availability K3s Kubernetes cluster in your home lab is a straightforward process that can be completed quickly.