Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and operation of application containers across clusters of hosts.
Alright, let's dive in.
Enabling IPv4 forwarding and allowing iptables to inspect bridged traffic.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
#sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
#Apply sysctl params without reboot
sudo sysctl --system
Ensure that the br_netfilter and overlay modules are loaded:
lsmod | grep br_netfilter
lsmod | grep overlay
Verify that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl config:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
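If everything is configured correctly, each parameter should be reported with a value of 1, along the lines of:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1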
Installing a Container Runtime (containerd)
I chose containerd as my container runtime, but you can choose a different one if you want from here.
Set up Docker’s apt repository
#Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
#Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
Install containerd
sudo apt-get install containerd.io
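As a quick optional sanity check, you can confirm containerd is installed and its service is running:
containerd --version
sudo systemctl status containerd --no-pager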
cgroup Drivers
There are two cgroup drivers available
- cgroupfs
- systemd
Identify your system's init system (which determines the recommended cgroup driver) with this command:
ps -p 1
If the output shows systemd, you have to configure the systemd cgroup driver for containerd; if it doesn't, that's fine, because kubelet uses cgroupfs as its default.
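For reference, on a systemd-based host the output of ps -p 1 looks roughly like this (the TIME value will differ):
    PID TTY          TIME CMD
      1 ?        00:00:03 systemd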
Identify the cgroup version
stat -fc %T /sys/fs/cgroup/
For cgroup v2, the output is cgroup2fs
For cgroup v1, the output is tmpfs
If you use cgroup v2, you have to use the systemd cgroup driver.
Configuring the systemd cgroup driver with containerd
Edit /etc/containerd/config.toml, delete all existing lines, and add the following:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
Save it and restart containerd
sudo systemctl restart containerd
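You can optionally confirm the setting took effect by dumping the effective configuration (containerd config dump is a standard containerd subcommand):
sudo containerd config dump | grep SystemdCgroup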
Installing kubeadm, kubelet and kubectl
These instructions are for Kubernetes 1.29
Set up the Kubernetes apt repo
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
Download the public signing key for the Kubernetes package repositories
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
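You also need to add the Kubernetes apt repository itself; this is the corresponding step from the official v1.29 install instructions:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list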
Update the apt package index, install kubelet, kubeadm and kubectl, and pin their versions:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
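If you want to double-check the installed versions before continuing:
kubeadm version
kubelet --version
kubectl version --client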
Initializing your control-plane node
sudo kubeadm init --pod-network-cidr=<pod-cidr> --apiserver-advertise-address=<control-plane-ip>
--apiserver-advertise-address is the IP address the API server on your control-plane node will advertise, and --pod-network-cidr is usually set to 10.244.0.0/16 because that makes installing a pod network add-on easier later.
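For example, assuming a control-plane node with the IP address 192.168.1.10 (a placeholder, substitute your own), the command would look like this:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.10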
Then, if the initialization is successful, you should see something like the output below:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
To make kubectl work for your regular user, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
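You can then verify that kubectl can reach the cluster:
kubectl cluster-info
kubectl get nodes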
Installing a pod network add-on (flannel)
I am choosing flannel as my pod network add-on since it doesn't require any extra configuration; you can choose a different one from here.
You can simply install it using the kubectl apply command below:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
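After a minute or so you can check that the flannel pods came up; in recent manifests they run in the kube-flannel namespace (older releases used kube-system):
kubectl get pods -n kube-flannel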
If you are setting up a single-machine Kubernetes cluster, run the command below to allow your cluster to schedule pods on the control-plane node, because by default it won't:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
But if you're setting up a multi-node cluster, you have to SSH into your worker node and run the kubeadm join command you got in the success message on the control-plane node.
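If you no longer have the original join command, you can generate a fresh one on the control-plane node:
kubeadm token create --print-join-command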
If the join is successful, you should see something like this:
[preflight] Running pre-flight checks
… (log output of join workflow) …
Node join complete:
Certificate signing request sent to control-plane and response
received. Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on control-plane to see this machine join.
Then you can check on the control-plane node with the kubectl get nodes command. Ensure all of your pods are running with the kubectl get pods -A command. And voila, you now have a working Kubernetes cluster.
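On a two-node cluster, the output of kubectl get nodes should look something like this (node names, ages, and the exact patch version are placeholders):
NAME            STATUS   ROLES           AGE   VERSION
control-plane   Ready    control-plane   10m   v1.29.x
worker-1        Ready    <none>          2m    v1.29.x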
The steps shown above create a Kubernetes cluster with a simple configuration; feel free to visit the official Kubernetes documentation for more advanced configurations.
Troubleshooting
kube-system pods keep crashing / "sandbox changed" error
After running the kubeadm init command, I saw the kube-system pods repeatedly crashing with a "sandbox changed" error.
So I fixed it by changing the sandbox image version and setting SystemdCgroup to true (as we did earlier).
First reset containerd to default configurations
#run as root
containerd config default > /etc/containerd/config.toml
Then locate sandbox_image and SystemdCgroup and update their values as shown below:
sandbox_image = "registry.k8s.io/pause:3.9"
SystemdCgroup = true
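If you'd rather not edit the file by hand, a rough sed sketch like this should make both changes (assuming the default generated config; back up the file first):
sudo cp /etc/containerd/config.toml /etc/containerd/config.toml.bak
sudo sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml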
Then restart containerd for the changes to take effect
sudo systemctl restart containerd.service
And your pods should be running; if not, try rebooting the machine, which should solve it.
Click here for more info on this error.
We value your input. Share your thoughts or ask questions by leaving a comment.