Inspired by https://v1-28.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
- Get 3 Raspberry Pi boards with at least 4 GB of RAM
- Install Ubuntu Server 24.04 (noble) using Raspberry Pi Imager (with SSH access enabled)
- Basic Linux commands (apt, cat, wget, sed, grep, and so on)
- Connect to a host via SSH
- Connect to the host via SSH using your user credentials
- Get root privileges
sudo su -
- Get the EEPROM configuration (optional)
rpi-eeprom-update
- Update packages
apt update && apt upgrade -y
- Install packages
apt-get -y install socat conntrack ipset linux-raspi
- socat : Socat Package Page
- conntrack : Conntrack Package Page
- ipset : IPSet Package Page
- linux-raspi : Linux Raspi Package Page
- Enable the cgroup memory driver in the firmware boot file
- Read the content of /boot/firmware/cmdline.txt using the following command:
cat /boot/firmware/cmdline.txt
- If necessary, append the following parameters to the end of the line using this command:
sed -i '$ s/$/ logo.nologo systemd.unified_cgroup_hierarchy=1 cgroup_no_v1=all cgroup_enable=cpu cgroup_enable=cpuset cgroup_cpu=1 cgroup_cpuset=1 cgroup_enable=memory cgroup_memory=1 loglevel=3 vt.global_cursor_default=0/' /boot/firmware/cmdline.txt
- Reboot the system to apply the changes to the firmware boot file
systemctl reboot
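- After the reboot, confirm the memory controller is active under the unified cgroup v2 hierarchy (a quick check based on the parameters set above):
cat /sys/fs/cgroup/cgroup.controllers
# output should include "memory" among the listed controllers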
- Load the overlay and br_netfilter kernel modules with the following command:
modprobe overlay && modprobe br_netfilter
- Verify that the previous command executed correctly using these commands:
lsmod | grep overlay
lsmod | grep br_netfilter
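- modprobe does not persist across reboots; to load these modules automatically at boot, the usual approach is a modules-load.d entry:
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF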
- Add the sysctl settings required by Kubernetes networking (written to /etc/sysctl.d/k8s.conf so the existing /etc/sysctl.conf is not overwritten)
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
- Reload system configuration
service procps force-reload
- Verify that net.bridge.bridge-nf-call-iptables and net.ipv4.ip_forward are correctly applied by checking the system configuration:
sysctl --system
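- The individual keys can also be queried directly; both should print 1:
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward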
containerd documentation: https://github.com/containerd/containerd/blob/main/docs/getting-started.md
- Install containerd and runc using apt package manager
apt install containerd -y
- Generate containerd's default configuration
mkdir -p /etc/containerd/
containerd config default > /etc/containerd/config.toml
- Update the default configuration to enable the systemd cgroup driver
To use the systemd cgroup driver in /etc/containerd/config.toml with runc, set
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true # <-- Line to modify
N.B.:
If the cgroup driver is incorrect, pods on that node may end up stuck in CrashLoopBackOff.
Source: https://stackoverflow.com/questions/75935431/kube-proxy-and-kube-flannel-crashloopbackoff
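- As a shortcut, a single sed call can flip the flag, assuming the default configuration generated above (which contains SystemdCgroup = false):
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml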
- Restart containerd service
systemctl restart containerd
- Post-install verification
ctr -v
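- ctr version goes one step further and prints both client and server versions, confirming the containerd daemon answers on its socket:
ctr version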
Source: https://kubernetes.io/blog/2023/08/15/pkgs-k8s-io-introduction/
- Install the dependency packages for kubeadm, kubectl and kubelet.
apt-get install -y apt-transport-https ca-certificates curl gpg
- apt-transport-https package documentation
- ca-certificates package documentation
- curl package documentation
- gpg package documentation
- Download the public signing key for the Kubernetes repository (create /etc/apt/keyrings first if it does not already exist)
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
- Add k8s repository to apt sources repositories
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
- Update the package list and upgrade installed packages
apt-get update && apt-get upgrade -y
- Install kubelet, kubeadm and kubectl packages
apt-get install -y kubelet kubeadm kubectl
- Hold kubelet, kubeadm and kubectl at their current versions, to prevent an unintended upgrade from breaking the cluster
apt-mark hold kubelet kubeadm kubectl
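- To double-check the holds and the installed version:
apt-mark showhold
kubeadm version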
Package name | Description |
---|---|
kubeadm | Installs the /usr/bin/kubeadm CLI tool and the kubelet drop-in file for the kubelet. |
kubelet | Installs the /usr/bin/kubelet binary. |
kubectl | Installs the /usr/bin/kubectl binary. |
cri-tools | Installs the /usr/bin/crictl binary from the cri-tools git repository. |
kubernetes-cni | Installs the /opt/cni/bin binaries from the plugins git repository. |
- Initialize the K8S control plane
kubeadm init --pod-network-cidr=10.244.0.0/16 --skip-phases=addon/kube-proxy --v=5
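- Once init completes, kubeadm prints instructions for configuring kubectl access; the standard steps for a regular user are:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config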
All pods must be in Running status except the coredns pods, which remain Pending until a CNI plugin is installed
Source: K8S Troubleshooting Documentation
- Download Flannel YAML configuration file
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
- Update the Flannel configuration by adding the following env entries to the DaemonSet container in kube-flannel.yml. Since kube-proxy was skipped at init, Flannel must be pointed directly at the API server; 10.0.0.10 here is this cluster's control plane address, so adjust it to your network:
- name: KUBERNETES_SERVICE_HOST
value: '10.0.0.10'
- name: KUBERNETES_SERVICE_PORT
value: '6443'
- Install Flannel CNI
kubectl apply -f kube-flannel.yml
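- Verify the Flannel pods come up (recent manifests deploy them into the kube-flannel namespace):
kubectl get pods -n kube-flannel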
- Enable the kube-proxy addon (skipped during init)
kubeadm init phase addon kube-proxy
- On each worker node, run the following command:
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
- To get the token:
kubeadm token list
or, if the token has expired:
kubeadm token create
- To get the hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
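- Alternatively, a single command run on the control plane prints a ready-to-use join command, token included:
kubeadm token create --print-join-command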
- Reset the cluster and flush the iptables rules
kubeadm reset && rm -r $HOME/.kube && iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
- Remove leftover containers (replace <containers_ids> with the IDs to delete)
ctr -n k8s.io c rm <containers_ids>
- Remove the cached Kubernetes images
for pattern in etcd sha core kube pause flannel; do ctr -n k8s.io i rm $(ctr -n k8s.io i ls -q | grep $pattern); done
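- With recent cri-tools, the same cleanup can be done via CRI, assuming crictl is configured to talk to the containerd socket:
crictl rmi --prune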
Kubernetes Documentation on Shell Autocompletion
CNI Tool GitHub Repo | CNI Tool Website
kubectl --kubeconfig ./admin.conf get nodes
kubectl --kubeconfig ./admin.conf proxy
Logs of all core-dns pods
for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
Add "worker" role to a newly joined worker node
kubectl label node <name_of_node> node-role.kubernetes.io/worker=worker
Name | Path |
---|---|
Kubernetes Default Manifest Path | /etc/kubernetes/ |
https://v1-28.docs.kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
kubectl get svc --namespace=kube-system
kubectl get endpoints kube-dns --namespace=kube-system -o wide
kubectl get endpointslices.discovery.k8s.io
kubectl get svc -o yaml
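- To actively test resolution, the linked documentation deploys a dnsutils pod and runs a lookup from it:
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -i -t dnsutils -- nslookup kubernetes.default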
Add an iptables rule to allow connections to the API server port (6443) from the 10.0.0.0/8 range
iptables -A INPUT -p tcp -m tcp --dport 6443 -s 10.0.0.0/8 -m state --state NEW -j ACCEPT
Source: https://forum.linuxfoundation.org/discussion/863356/coredns-pods-are-running-but-not-in-ready-state
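- Note that rules added with iptables do not survive a reboot; on Ubuntu they can be saved with the iptables-persistent package:
apt-get install -y iptables-persistent
netfilter-persistent save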