k3s - Kubernetes in 10 minutes - An Introduction


Have you ever been in a situation where you needed a quickly accessible Kubernetes cluster? Maybe because you are a Data Engineer who wants to quickly deploy an application to test the latest in dashboarding? Or because you want to quickly run a machine learning pipeline?

While Kubernetes is an extremely powerful platform, it can be quite complex and challenging to get started with. This is where k3s comes in. k3s is a lightweight Kubernetes distribution that is designed to be easy to install and operate. It is perfect for running Kubernetes on edge devices and small clusters. In this blog post, we will give you a quick introduction to k3s and show you how to get started with it.

Why k3s

K3s is one of the more lightweight Kubernetes distributions out there, created by the well-known Kubernetes experts at Rancher. As this guide will show, running Kubernetes with k3s is a real blast - and on top of the low complexity, k3s is CNCF-certified and marked as production-ready! To summarize some of the more prominent arguments:

  • k3s is CNCF-certified
  • k3s is a CNCF Sandbox project (the first k8s distribution to reach this status, by the way)
  • Several million downloads
  • 21k GitHub stars
  • Designed for production workloads (according to Rancher)
  • No external dependencies - it can run on any Linux box
  • You can use storage backends other than etcd (such as SQLite)

General Architecture

k3s comprises two architectural parts - the server and the agent. The server process runs the Kubernetes API server, controller manager, scheduler and the storage backend (SQLite by default).

The agent runs the kubelet, flannel, containerd, load balancers, network policy controllers and the kube-proxy.

The server furthermore uses a handy reverse tunnel proxy - allowing for one-way communication between k3s server and k3s agent. This helps to reduce network complexity as you do not need to open firewall ports on the agent-side!

k3s agent and server overview (source: https://www.suse.com/c/rancher_blog/introduction-to-k3s)

Deployment Options

k3s comes with 4 different deployment options:

  1. Single node: With this method, k3s gets installed on a single server, with the agent process and the default storage - SQLite - embedded. This method is highly optimized for edge devices as well as quick installations of development clusters, short-lived Data Science clusters or CI integration clusters.

    k3s single node deployment

  2. Single server, multiple agents: Here we still only have one server, but we can add multiple agents - which are nothing less than workers - to the cluster. The agents connect to the server using a token issued by the server during startup. This method is still very lightweight and suitable for edge setups, slightly more advanced Data Science workloads and CI integrations.

    k3s single server multiple agents deployment

  3. High availability setup with external storage backend: This is a true high-availability setup - having multiple servers and multiple agents - one or more of them can fail and the cluster keeps operating. This setup uses an external database like PostgreSQL for storage. Why not SQLite anymore? Because with multiple servers, we need a database server that manages concurrent access.

    The big advantage of this method compared to the next one is that the leader election/HA setup is not based on a quorum algorithm. So we can have any number of server nodes!

    k3s high available multi-server deployment with external database

  4. High availability setup with embedded storage: This installation again utilizes multiple servers in an HA setup - however with etcd as embedded storage, running as part of each server. The enormous advantage: this is really easy to set up and provides high availability. The big disadvantage? The leader election is quorum-based, so we need at least three servers to maintain quorum.

    Because of the simple installation process and the very reduced complexity, this setup is best suited for more elaborate edge installations with multiple server nodes.

    k3s high available multi-server deployment with embedded storage
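As a rough sketch, the deployment options above map to install commands like the following. The server addresses, the token and the database connection string are placeholders you would substitute with your own values:

```shell
# Option 2: join an additional agent (worker) to an existing server.
# The join token is issued by the server during startup and stored at:
#   /var/lib/rancher/k3s/server/node-token
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<server-ip>:6443 \
  K3S_TOKEN=<token-from-server> sh -

# Option 3: HA server backed by an external database
# (the PostgreSQL connection string is a placeholder).
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="postgres://user:pass@db-host:5432/k3s"

# Option 4: HA with embedded etcd - initialize the first server...
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# ...then join the additional servers to it.
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://<first-server-ip>:6443
```

Note how the same install script drives all four topologies - only the flags and environment variables change.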

Installation of Single Node Deployment

As the premise of this guide is to set up a cluster for quick Data Analytics tasks or development workflows, let's walk through how to quickly install k3s in the single-node configuration on one of our servers.

Minimum requirements

The requirements for installing k3s are rather low:

  • If you are using Raspbian Buster, follow these steps to switch to legacy iptables.
  • If you are using Alpine Linux, follow these steps for additional setup.
  • If you are using (Red Hat/CentOS) Enterprise Linux, follow these steps for additional setup.
  • Hardware:
    • RAM: 512MB
    • CPU: Pentium 4, 1.5GHz...

Installation options

The easiest method to bootstrap a single-node k3s installation is to use the Rancher-maintained install script - found at the k3s web page. This script can be used to install k3s on systemd- or openrc-based systems. For other options, please visit: https://rancher.com/docs/k3s/latest/en/installation/

IMPORTANT: As always when running scripts from the internet - make sure to vet the script before running.

On your server where you want to run k3s, run

curl -sfL https://get.k3s.io | sh -

Surprisingly - that's it. k3s is already installed and

  • configured to automatically restart after node reboots
  • with kubectl, crictl, ctr, k3s-killall.sh and k3s-uninstall.sh utilities installed
  • with a kubeconfig file written to /etc/rancher/k3s/k3s.yaml
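Before touching any kubeconfig files, you can already sanity-check the installation. A small sketch - `k3s kubectl` works because k3s bundles its own kubectl:

```shell
# Check that the k3s systemd service is running and enabled
# (this is what restarts the cluster after a node reboot).
sudo systemctl status k3s

# k3s bundles its own kubectl; until your kubeconfig is set up,
# you can talk to the cluster through the k3s binary directly.
sudo k3s kubectl get nodes

# If you ever want to get rid of k3s again, the installer placed:
#   k3s-killall.sh    - stops all k3s processes and containers
#   k3s-uninstall.sh  - removes k3s from the machine
```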

To connect to the cluster, we need to copy the k3s.yaml kubernetes context information to our kubeconfig file.

If you don't have a file $HOME/.kube/config, simply copy the rancher-file, set permissions right and rename it accordingly:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
sudo chown $USER $HOME/.kube/config
sudo chmod 600 $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config # This tells kubectl which config file to use

Also add the line export KUBECONFIG=$HOME/.kube/config to your ~/.bashrc or ~/.zshrc file to have it persisted across session restarts.

Alternative: If you already have a kubectl context configuration file ($HOME/.kube/config), you need to merge the two config files. This can be done by some bash magic:

First, create a copy of the k3s.yaml file in your $HOME/.kube folder:

sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config_k3s
sudo chown $USER $HOME/.kube/config_k3s

Then, open the file and rename the cluster, context and user entries so they do not clash with your existing ones (here we use k3s_cluster, k3s_context and k3s_user):

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <secret>
    server: https://127.0.0.1:6443
  name: k3s_cluster
contexts:
- context:
    cluster: k3s_cluster
    user: k3s_user
  name: k3s_context
current-context: k3s_context
kind: Config
preferences: {}
users:
- name: k3s_user
  user:
    client-certificate-data: <secret>
    client-key-data: <secret>

Finally, merge the two files into the final $HOME/.kube/config

# Make a copy of your existing config
cp $HOME/.kube/config $HOME/.kube/config.bak

# Merge the two config files together into a new config file
KUBECONFIG=$HOME/.kube/config:$HOME/.kube/config_k3s kubectl config view --flatten > /tmp/config

# Replace your old config with the new merged config
mv /tmp/config $HOME/.kube/config

# (optional) Delete the backup once you confirm everything worked ok
rm $HOME/.kube/config.bak

# Set the new k3s_context as your current context
kubectl config use-context k3s_context

# Verify that the context is installed
kubectl config get-contexts

The output should look like:

CURRENT   NAME          CLUSTER       AUTHINFO   NAMESPACE
          default       default       default
*         k3s_context   k3s_cluster   k3s_user

Congratulations: You are ready to access the cluster

To access the cluster, use kubectl:

kubectl get pods --all-namespaces

# result
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-7b7dc8d6f5-jzkdx   1/1     Running     0          81m
kube-system   coredns-b96499967-9rwwm                   1/1     Running     0          81m
kube-system   helm-install-traefik-crd-hczz4            0/1     Completed   0          81m
kube-system   metrics-server-668d979685-dx6c6           1/1     Running     0          81m
kube-system   helm-install-traefik-46tgr                0/1     Completed   2          81m
kube-system   svclb-traefik-aa7136cb-td8kt              2/2     Running     0          80m
kube-system   traefik-7cd4fcff68-w8hdr                  1/1     Running     0          80m

Access your cluster remotely

Up until now we have only accessed our cluster from the server it was installed on. To connect to the cluster remotely, simply create a kubeconfig file on your local machine - similar to what you did on the server.

  1. Create your local $HOME/.kube/config file - with the same strategy as outlined above and as you did on the server. Simply copy the /etc/rancher/k3s/k3s.yaml file from your server to your local machine.

  2. Open the $HOME/.kube/config file and change the server address to your remote cluster IP address (or DNS name).

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <secret>
        server: https://<your-server-ip-or-dns>:6443
      name: k3s_cluster
  3. Run

    kubectl config use-context k3s_context
  4. Now you can use kubectl on your local machine to remotely connect to your cluster

Conclusion / TLDR

We have seen that we can install k3s with some simple steps. This allows us to bootstrap a simple kubernetes cluster for various workloads:

  • Running ad hoc machine learning pipelines
  • Testing out that shiny new tool which comes conveniently bundled as a Kubernetes application
  • Deploying Kubernetes on a small edge network
  • Running Kubernetes on our local developer machines for testing purposes - without having to worry about virtual machines

The steps to install and run k3s are:

# Installs and runs k3s
curl -sfL https://get.k3s.io | sh -

# Creates a kubeconfig context to connect to the cluster
# Use the alternative method described above if you already have such a context file
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
sudo chown $USER $HOME/.kube/config
sudo chmod 600 $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config # This tells kubectl which config file to use
kubectl get pods --all-namespaces

