This blog post will take you through installing and configuring a five-node high-availability K3s Kubernetes cluster with three server (control-plane) nodes and two agent (worker) nodes. The setup also places an Nginx load balancer in front of the three servers to provide high availability.
Here’s the architecture overview of the high availability K3s Kubernetes cluster.
Install Nginx
I use Proxmox to virtualize the containers and virtual machines in my lab. For this particular setup, I used an LXC container for the Nginx load balancer.
The first part, the creation of the container, is pretty straightforward. I allocated 1 CPU, 1024 MiB of RAM and 8 GiB of disk space for my container, and used the Ubuntu Server 20.04 template.
After the setup, SSH into the container and run standard updates.
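On Ubuntu this is the usual pair of commands:

```shell
sudo apt update && sudo apt upgrade -y
```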
Next, install Nginx.
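Nginx is available straight from the Ubuntu repositories:

```shell
sudo apt install -y nginx
```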
Once installed, start the Nginx service and enable it to start at system reboot.
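With systemd, that's two commands:

```shell
sudo systemctl start nginx
sudo systemctl enable nginx
```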
Verify the status of the Nginx service using the following command.
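For example:

```shell
sudo systemctl status nginx
```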
Set up an Nginx Load Balancer
Rename the Nginx default configuration file and create a new load balancer configuration file.
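Something along these lines works (any editor will do; I'm showing `nano`):

```shell
sudo mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
sudo nano /etc/nginx/nginx.conf
```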
Add the following config lines.
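A minimal TCP (stream) load-balancer configuration, modeled on the example in the K3s docs. The `10.10.10.x` addresses are placeholders for your three server IPs:

```nginx
# On Ubuntu, the stream module ships as a dynamic module; keep this include
include /etc/nginx/modules-enabled/*.conf;

events {}

stream {
    # The three K3s server nodes
    upstream k3s_servers {
        server 10.10.10.51:6443;
        server 10.10.10.52:6443;
        server 10.10.10.53:6443;
    }

    # Forward Kubernetes API traffic to the servers
    server {
        listen 6443;
        proxy_pass k3s_servers;
    }
}
```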
Verify the Nginx for any syntax error.
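Nginx has a built-in config check:

```shell
sudo nginx -t
```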
If the configuration is correct, the output should be like this.
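On a valid configuration, `nginx -t` reports:

```
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```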
Restart the Nginx service to apply the changes.
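Again via systemd:

```shell
sudo systemctl restart nginx
```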
Run the following command to display all TCP sockets currently listening and the process that uses each socket. Check that Nginx is listening on port 6443.
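`ss` needs root to show process names:

```shell
sudo ss -antpl
```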
Here’s an example output of the `ss -antpl` command.
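Trimmed to the relevant line (the PID and file descriptor will differ on your system):

```
State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port  Process
LISTEN  0       511     0.0.0.0:6443        0.0.0.0:*          users:(("nginx",pid=613,fd=5))
```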
Install K3s on servers
A high-availability K3s installation requires a cluster datastore. In my setup, I'm using embedded etcd; other options include MySQL, MariaDB and PostgreSQL. More information at https://docs.k3s.io/datastore.
You should remember to have an odd number of server nodes for etcd to maintain quorum. More information can be found in the official documentation: https://docs.k3s.io/datastore/ha-embedded.
In my lab, I used three VMs for the server nodes, giving each 2 CPUs, 2 GiB of RAM and a 32 GiB disk in my Proxmox environment.
You'll need to create a `SECRET` token and substitute it in the command below. You also need to replace `load_balancer_ip_or_hostname` with your Nginx IP address.
Run the following command on the first server. It will finish in 10-15 seconds.
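Per the K3s HA embedded etcd documentation, the first server is bootstrapped with `--cluster-init`; `--tls-san` adds the load balancer address to the API server certificate:

```shell
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server \
    --cluster-init \
    --tls-san=load_balancer_ip_or_hostname
```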
Verify setup by running the following command.
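K3s bundles `kubectl`, so on the server itself:

```shell
sudo k3s kubectl get nodes
```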
The output should look like this. Notice that in addition to the `control-plane` and `master` roles, there's an `etcd` role.
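For example (the hostname, age and version shown here are illustrative and will differ in your lab):

```
NAME           STATUS   ROLES                       AGE   VERSION
k3s-server-1   Ready    control-plane,etcd,master   30s   v1.27.6+k3s1
```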
Get the token from the first server. You'll be using it as `SECRET` to join the second server to the cluster.
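K3s stores the join token at a well-known path:

```shell
sudo cat /var/lib/rancher/k3s/server/node-token
```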
After launching the first server, join the second and third servers to the cluster using the shared secret. Don't forget to replace `SECRET` with the token from the first node (see above), and to replace `ip_or_hostname_of_server1` and `load_balancer_ip_or_hostname` with your values.
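This follows the additional-server command from the K3s HA docs; `ip_or_hostname_of_server1` and `load_balancer_ip_or_hostname` are placeholders for your own values:

```shell
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server \
    --server https://ip_or_hostname_of_server1:6443 \
    --tls-san=load_balancer_ip_or_hostname
```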
Verify that the second node joined the cluster.
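On any of the servers:

```shell
sudo k3s kubectl get nodes
```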
You should see output similar to what I got in my lab.
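Something like this (hostnames, ages and versions are illustrative):

```
NAME           STATUS   ROLES                       AGE   VERSION
k3s-server-1   Ready    control-plane,etcd,master   5m    v1.27.6+k3s1
k3s-server-2   Ready    control-plane,etcd,master   40s   v1.27.6+k3s1
```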
Repeat the step you did on the second server, but on your third server. Use the command as-is, i.e. do not change any parameters.
Verify that the third node joined the cluster; you should see three nodes as a result.
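For example (again with illustrative hostnames, ages and versions):

```
NAME           STATUS   ROLES                       AGE   VERSION
k3s-server-1   Ready    control-plane,etcd,master   8m    v1.27.6+k3s1
k3s-server-2   Ready    control-plane,etcd,master   3m    v1.27.6+k3s1
k3s-server-3   Ready    control-plane,etcd,master   35s   v1.27.6+k3s1
```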
Install K3s on agents
Get a token from any of the three servers in the cluster. The token will be the same on all servers.
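Same path as before:

```shell
sudo cat /var/lib/rancher/k3s/server/node-token
```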
Run the following command on the first agent VM, replacing `SECRET` with the token acquired from the server.
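This is the documented K3s agent install pattern; note that `K3S_URL` points at the load balancer, not at an individual server:

```shell
curl -sfL https://get.k3s.io | \
    K3S_URL=https://load_balancer_ip_or_hostname:6443 K3S_TOKEN=SECRET sh -
```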
Verify that the first agent joined the cluster.
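From one of the server nodes:

```shell
sudo k3s kubectl get nodes
```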
Repeat the same command used on the first agent on any other agents you want to join the cluster.
Verify that agents joined the cluster.
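With both agents joined, you should see all five nodes; agents show up with no role by default (hostnames, ages and versions are illustrative):

```
NAME           STATUS   ROLES                       AGE   VERSION
k3s-server-1   Ready    control-plane,etcd,master   15m   v1.27.6+k3s1
k3s-server-2   Ready    control-plane,etcd,master   10m   v1.27.6+k3s1
k3s-server-3   Ready    control-plane,etcd,master   7m    v1.27.6+k3s1
k3s-agent-1    Ready    <none>                      2m    v1.27.6+k3s1
k3s-agent-2    Ready    <none>                      50s   v1.27.6+k3s1
```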
Configuring Cluster Access
I usually use my regular development computer to do all the admin and coding on the Kubernetes cluster. Here are the configuration steps to access a newly installed K3s cluster locally without ssh’ing to cluster nodes.
Install kubectl
First, you need to download the `kubectl` binary to your local machine.
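From the official Kubernetes install instructions (this fetches the latest stable release for Linux x86-64):

```shell
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
```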
Next, install `kubectl`.
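Also straight from the Kubernetes docs:

```shell
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```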
Verify the installation.
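A quick client-only check:

```shell
kubectl version --client
```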
Copy Cluster Configuration
Copy the contents of the `/etc/rancher/k3s/k3s.yaml` file to the clipboard. You can find it on any of the three server nodes we created above.
Check if you have a `~/.kube/` folder on the local machine and create it if required. After that, create the `~/.kube/config` file and paste the contents from the clipboard into the new file.
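For example (use whichever editor you prefer; `nano` here is just my choice):

```shell
mkdir -p ~/.kube
nano ~/.kube/config
```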
In the configuration file, update `server` with your load balancer IP. Save and close the file.
To ensure that `kubectl` is working correctly from your local machine, you can check by running the following command.
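This time without `sudo` or the `k3s` prefix, since it runs against the load balancer from your own machine:

```shell
kubectl get nodes
```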
The output should be similar to the one I got in my lab.
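The same five-node listing as before (hostnames, ages and versions are illustrative):

```
NAME           STATUS   ROLES                       AGE   VERSION
k3s-server-1   Ready    control-plane,etcd,master   20m   v1.27.6+k3s1
k3s-server-2   Ready    control-plane,etcd,master   15m   v1.27.6+k3s1
k3s-server-3   Ready    control-plane,etcd,master   12m   v1.27.6+k3s1
k3s-agent-1    Ready    <none>                      7m    v1.27.6+k3s1
k3s-agent-2    Ready    <none>                      5m    v1.27.6+k3s1
```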
In addition, I like to assign an alias to `kubectl`, so it's less typing for me.
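One common way to do this in `bash` (I'm assuming `~/.bashrc` as the target file):

```shell
# Persist the alias, then reload the shell configuration
echo 'alias k=kubectl' >> ~/.bashrc
source ~/.bashrc
```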
Verify the result by running the following command. You should receive the same output mentioned above.
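Now two letters instead of seven:

```shell
k get nodes
```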
Optional: Enable kubectl auto-completion
I'm using `bash`, but you can check `zsh`, `fish` and other shells' configurations at the official page: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#optional-kubectl-configurations-and-plugins.
First, install `bash-completion`.
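On Ubuntu:

```shell
sudo apt install -y bash-completion
```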
Verify that it's installed. The command should print the completion configuration that was loaded.
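The Kubernetes docs suggest checking for the completion entry point:

```shell
type _init_completion
```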
Next, enable it system-wide.
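Per the official instructions, dropping the completion script into `/etc/bash_completion.d` enables it for all users:

```shell
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
sudo chmod a+r /etc/bash_completion.d/kubectl
```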
Since I'm using an alias for `kubectl`, I also executed the following command to extend auto-completion to the alias.
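Also from the Kubernetes docs; this hooks the `k` alias into kubectl's completion function:

```shell
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
```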
After that, you must restart your shell to enable auto-completion. Alternatively, you can run the following command:
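Reload the configuration in the current shell:

```shell
source ~/.bashrc
```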
Conclusion
Setting up a high-availability K3s Kubernetes cluster in your home lab is a straightforward process that can be completed in an afternoon.