Kubernetes on Raspberry Pi installation book

Inspired by https://v1-28.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Prerequisites

  • Get 3 Raspberry Pi boards with at least 4 GB of RAM each
  • Install Ubuntu Server 24.04 (noble) using Raspberry Pi Imager (with SSH access enabled)

Knowledge

  • Basic Linux commands (apt, cat, wget, sed, grep and so on)
  • Connecting to a host via SSH

Methodology

Common manipulations

  1. Connect to the host over SSH using your user credentials
  2. Get root privileges
sudo su -
  3. Get the EEPROM configuration (optional)
rpi-eeprom-update
  4. Update packages
apt update && apt upgrade -y
  5. Install packages
apt-get -y install socat conntrack ipset linux-raspi

Prepare host for Kubernetes hosting

  1. Update the cgroup memory driver settings in the firmware boot files
  • Read the content of /boot/firmware/cmdline.txt using the following command:

    cat /boot/firmware/cmdline.txt
  2. If necessary, add the following parameters at the end of the file using the following command:
sed -i '$ s/$/ logo.nologo systemd.unified_cgroup_hierarchy=1 cgroup_no_v1=all cgroup_enable=cpu cgroup_enable=cpuset cgroup_cpu=1 cgroup_cpuset=1 cgroup_enable=memory cgroup_memory=1 loglevel=3 vt.global_cursor_default=0/' /boot/firmware/cmdline.txt
  3. Reboot the system to apply the change to the firmware boot file
systemctl reboot
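After the reboot, one way to confirm the cgroup changes took effect (an optional verification, not part of the original guide) is:

# Should list "memory" among the enabled cgroup v2 controllers
cat /sys/fs/cgroup/cgroup.controllers
# Should print "cgroup2fs" when the unified hierarchy is in use
stat -fc %T /sys/fs/cgroup/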
  4. Load the overlay and br_netfilter kernel modules by typing the following command:
modprobe overlay && modprobe br_netfilter
  5. Verify that the previous command executed correctly by using these commands:
lsmod | grep overlay
lsmod | grep br_netfilter
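Note that modprobe alone does not persist across reboots; a common way to load these modules at boot (as in the upstream kubeadm container-runtime documentation) is:

cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF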
  6. Add the custom sysctl configuration for bridged traffic and IP forwarding
cat <<EOF | tee /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
EOF
  7. Reload the system configuration
service procps force-reload
  8. Verify that net.bridge.bridge-nf-call-iptables and net.ipv4.ip_forward are correctly applied by checking the system configuration:
sysctl --system
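A more targeted check (optional, querying only the two keys set above) is:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# Both values should be printed as 1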

Install Containerd

containerd documentation: https://github.com/containerd/containerd/blob/main/docs/getting-started.md

  1. Install containerd and runc using the apt package manager
apt install containerd -y
  2. Generate the default containerd configuration
mkdir -p /etc/containerd/
touch /etc/containerd/config.toml
containerd config default > /etc/containerd/config.toml
  3. Update the default configuration to enable the systemd cgroup driver

To use the systemd cgroup driver in /etc/containerd/config.toml with runc, set

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true # <-- Line to modify
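To make this change non-interactively, a sed one-liner works, assuming the generated default config contains SystemdCgroup = false:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml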

N.B.:

If the cgroup driver is incorrect, pods on that node may end up stuck in CrashLoopBackOff.

Source : https://stackoverflow.com/questions/75935431/kube-proxy-and-kube-flannel-crashloopbackoff

  4. Restart the containerd service
systemctl restart containerd
  5. Post-install verification
ctr -v
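A slightly stronger check (optional) is to confirm the daemon itself is up and answering:

ctr version
systemctl is-active containerd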

Install kubeadm, kubectl and kubelet

Source : https://kubernetes.io/blog/2023/08/15/pkgs-k8s-io-introduction/

  1. Install the dependency packages for kubeadm, kubectl and kubelet
apt-get install -y apt-transport-https ca-certificates curl gpg
  2. Download the public signing key for the Kubernetes repository
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
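If the command above fails because /etc/apt/keyrings does not exist (possible on older releases), create it first and re-run the download:

mkdir -p -m 755 /etc/apt/keyrings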
  3. Add the Kubernetes repository to the apt sources
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
  4. Update and upgrade the package list
apt-get update && apt upgrade -y
  5. Install kubelet, kubeadm and kubectl packages
apt-get install -y kubelet kubeadm kubectl
  6. Hold kubelet, kubeadm and kubectl at their current versions to avoid an unintended major version upgrade that could break the cluster
apt-mark hold kubelet kubeadm kubectl
Package name     Description
kubeadm          Installs the /usr/bin/kubeadm CLI tool and the kubelet drop-in file for the kubelet.
kubelet          Installs the /usr/bin/kubelet binary.
kubectl          Installs the /usr/bin/kubectl binary.
cri-tools        Installs the /usr/bin/crictl binary from the cri-tools git repository.
kubernetes-cni   Installs the /opt/cni/bin binaries from the plugins git repository.
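To confirm the pinned versions actually installed (an optional verification, not in the original steps):

kubeadm version
kubectl version --client
kubelet --version
apt-mark showhold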

Bootstrap cluster control plane (flannel option)

  1. Init K8S Control Plane
kubeadm init --pod-network-cidr=10.244.0.0/16 --skip-phases=addon/kube-proxy --v=5

All pods must be in Running status except the coredns pods, which stay Pending until a CNI plugin is installed

Source : K8S Troubleshooting Documentation
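Before running kubectl against the new cluster, point it at the admin kubeconfig; kubeadm init prints equivalent instructions at the end of its output:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config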

  2. Download the Flannel YAML configuration file
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
  3. Update the Flannel configuration by adding the following environment variables to the DaemonSet in kube-flannel.yml (adjust the values to your control plane address and port):
- name: KUBERNETES_SERVICE_HOST
  value: '10.0.0.10'
- name: KUBERNETES_SERVICE_PORT
  value: '6443'
  4. Install the Flannel CNI
kubectl apply -f kube-flannel.yml
  5. Enable the kube-proxy addon
kubeadm init phase addon kube-proxy
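At this point a quick sanity check (optional) is to confirm the node is Ready and the flannel, kube-proxy and coredns pods are Running:

kubectl get nodes
kubectl get pods -n kube-flannel
kubectl get pods -n kube-system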

Add worker nodes

  1. Run the following command on each worker node:
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
  • To get the token:
kubeadm token list

or if the token has expired

kubeadm token create
  • To get the hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
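Alternatively, a single command on the control plane prints a ready-to-use join command with a fresh token and the CA hash:

kubeadm token create --print-join-command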

Clean up

Remove node

kubeadm reset && rm -r $HOME/.kube && iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
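kubeadm reset does not remove CNI configuration; if you also want to clean it up (as kubeadm's own reset output suggests), the following can be added:

rm -rf /etc/cni/net.d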

CTR containers clean up

ctr -n k8s.io c rm <containers_ids>
ctr -n k8s.io i rm $(ctr -n k8s.io i ls -q | grep etcd) && ctr -n k8s.io i rm $(ctr -n k8s.io i ls -q | grep sha) && ctr -n k8s.io i rm $(ctr -n k8s.io i ls -q | grep core) && ctr -n k8s.io i rm $(ctr -n k8s.io i ls -q | grep kube) && ctr -n k8s.io i rm $(ctr -n k8s.io i ls -q | grep pause) && ctr -n k8s.io i rm $(ctr -n k8s.io i ls -q | grep flannel)

Appendix

Kubernetes Documentation on Shell Autocompletion

CNI Configuration Tool

CNI Tool GitHub Repo
CNI Tool Website

Controlling your cluster from machines other than the control-plane node

kubectl --kubeconfig ./admin.conf get nodes

Proxying API Server to localhost

kubectl --kubeconfig ./admin.conf proxy
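With the proxy running, the API server is reachable on localhost; for example (default proxy port 8001):

curl http://localhost:8001/api/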

Useful commands

Logs of all core-dns pods

for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name);  do kubectl logs --namespace=kube-system $p; done

Add "worker" role to a newly joined worker node

kubectl label node <name_of_node> node-role.kubernetes.io/worker=worker

Common directories to know

Name                              Path
Kubernetes Default Manifest Path  /etc/kubernetes/

Troubleshooting

Debugging DNS resolution

https://v1-28.docs.kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/

kubectl get svc --namespace=kube-system
kubectl get endpoints kube-dns --namespace=kube-system -o wide
kubectl get endpointslices.discovery.k8s.io
kubectl get svc -o yaml

CoreDNS pods are Running but not Ready

Add an iptables rule to allow connections to the API server port (6443) from the 10.0.0.0/8 range

iptables -A INPUT -p tcp -m tcp --dport 6443 -s 10.0.0.0/8 -m state --state NEW -j ACCEPT

Source : https://forum.linuxfoundation.org/discussion/863356/coredns-pods-are-running-but-not-in-ready-state
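After adding the rule, restarting the CoreDNS deployment forces its readiness probes to be retried (an optional follow-up step, not from the source):

kubectl -n kube-system rollout restart deployment coredns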