- Ansible installed on your local machine.
- SSH access to the target machines where you want to install k3s.
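A quick sanity check for both prerequisites; the user and host below are placeholders for your own target machines, not names from this repository:
# Confirm Ansible is installed locally
ansible --version
# Confirm SSH access to a target machine (placeholder user/host)
ssh <user>@<target-host> 'echo ok'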
- Copy private and public key
cp /path/to/my/key/ansible ~/.ssh/ansible
cp /path/to/my/key/ansible.pub ~/.ssh/ansible.pub
- Change permissions on the files
sudo chmod 600 ~/.ssh/ansible
sudo chmod 600 ~/.ssh/ansible.pub
- Make the ssh-agent use the copied key
ssh-add ~/.ssh/ansible
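If no ssh-agent is running in the current shell, ssh-add will fail; a minimal way to start one first and then load the key:
# Start an ssh-agent in this shell, then add the key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/ansible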
- Create a VM from the template and boot it up.
- Log in to the VM and set the hostname.
hostnamectl set-hostname <hostname>
- Reboot.
- Set a static IP for the VM in the DHCP server.
- Reboot.
- Set this hostname in the Ansible inventory hosts.ini file (an example sketch follows below).
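A minimal sketch of what the hosts.ini inventory might look like; the group names and hostnames are illustrative assumptions, not the repository's actual layout:
# illustrative hosts; replace with your own
[master]
k3s-master-01

[worker]
k3s-worker-01
k3s-worker-02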
- Clone this repository to your local machine:
git clone https://github.com/x-real-ip/infrastructure.git
- Modify the inventory file inventory.ini to specify the target machines where you want to install k3s. Ensure you have appropriate SSH access and privileges.
- Run one of the following Ansible playbooks to initialize the k3s master and optional worker nodes:
- Bare Installation:
Execute the k3s_install_cluster_bare.yaml playbook to install a clean cluster without additional deployments:
sudo ansible-playbook --ask-vault-pass -kK k3s_install_cluster_bare.yaml
- Minimal Installation with Additional Deployments:
Execute the k3s_install_cluster_minimal.yaml playbook to install the bare minimum plus additional deployments:
sudo ansible-playbook --ask-vault-pass -kK k3s_install_cluster_minimal.yaml
Flush the DNS cache; needed when hostnames from previous machines are reused (optional).
resolvectl flush-caches
Install kubectl on your local machine (see the Kubernetes documentation on installing kubectl on Linux).
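One common way to do this on x86_64 Linux, following the upstream Kubernetes instructions (adjust the architecture if needed):
# Download the latest stable kubectl release and install it to /usr/local/bin
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl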
Install kubeseal locally to use Bitnami Sealed Secrets in k8s.
sudo wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.19.1/kubeseal-0.19.1-linux-amd64.tar.gz
sudo tar xzvf kubeseal-0.19.1-linux-amd64.tar.gz
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
Drain and terminate all pods gracefully on the node while marking the node as unschedulable
kubectl drain --ignore-daemonsets --delete-emptydir-data <nodename>
Make the node unschedulable
kubectl cordon <nodename>
Make the node schedulable
kubectl uncordon <nodename>
Convert to BASE64
echo -n '<value>' | base64
Decode a secret with config file data
kubectl get secret <secret_name> -o jsonpath='{.data}' -n <namespace>
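To decode a single entry from that output, select the key and pipe it through base64 -d; <key> is whichever entry name exists in the secret's data map:
kubectl get secret <secret_name> -n <namespace> -o jsonpath='{.data.<key>}' | base64 -d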
Create secret from file
kubectl create secret generic <secret name> --from-file=<secret filelocation> --dry-run=client --output=yaml > secrets.yaml
Restart the pods of a deployment
kubectl rollout restart deployment <deployment name> -n <namespace>
Change PV reclaim policy
kubectl patch pv <pv-name> -p "{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}"
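To verify the new reclaim policy afterwards:
kubectl get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'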
Shell into pod
kubectl exec -it <pod_name> -- /bin/bash
Copy to or from pod
kubectl cp <namespace>/<pod>:/tmp/foo /tmp/bar
Reuse PV in PVC
- Remove the claimRef in the PV; this will set the PV status from Released to Available.
kubectl patch pv <pv_name> -p '{"spec":{"claimRef": null}}'
- Add volumeName in the PVC:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-iscsi-home-assistant-data
  namespace: home-automation
  labels:
    app: home-assistant
spec:
  storageClassName: freenas-iscsi-csi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: pvc-iscsi-home-assistant-data
Raw mode
echo -n foo | kubeseal --cert "./sealed-secret-tls-2.crt" --raw --scope cluster-wide
Create TLS (unencrypted) secret
kubectl create secret tls cloudflare-tls --key origin-ca.pk --cert origin-ca.crt --dry-run=client -o yaml > cloudflare-tls.yaml
Encrypt secret with custom public certificate.
kubeseal --cert "./sealed-secret-tls-2.crt" --format=yaml < secret.yaml > sealed-secret.yaml
Add a sealed value to an existing sealed secret file
echo -n <mypassword_value> | kubectl create secret generic <secretname> --dry-run=client --from-file=<password_key>=/dev/stdin -o json | kubeseal --cert ./sealed-secret-tls-2.crt -o yaml \
-n democratic-csi --merge-into <secret>.yaml
Raw sealed secret
strict scope (default):
echo -n foo | kubeseal --raw --from-file=/dev/stdin --namespace bar --name mysecret
AgBChHUWLMx...
namespace-wide scope:
echo -n foo | kubeseal --cert ./sealed-secret-tls-2.crt --raw --from-file=/dev/stdin --namespace bar --scope namespace-wide
AgAbbFNkM54...
cluster-wide scope:
echo -n foo | kubeseal --cert ./sealed-secret-tls-2.crt --raw --from-file=/dev/stdin --scope cluster-wide
AgAjLKpIYV+...
Include the sealedsecrets.bitnami.com/namespace-wide annotation in the SealedSecret:
metadata:
  annotations:
    sealedsecrets.bitnami.com/namespace-wide: "true"
Include the sealedsecrets.bitnami.com/cluster-wide annotation in the SealedSecret:
metadata:
  annotations:
    sealedsecrets.bitnami.com/cluster-wide: "true"
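For reference, a sketch of where a raw encrypted value and such an annotation end up in a SealedSecret manifest; the names mirror the mysecret/bar/foo examples above and are placeholders:
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: mysecret
  namespace: bar
  annotations:
    sealedsecrets.bitnami.com/namespace-wide: "true"
spec:
  encryptedData:
    foo: AgAbbFNkM54...   # raw value produced by kubeseal --raw
  template:
    metadata:
      name: mysecret
      namespace: bar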
Show node labels
kubectl get no -o json | jq .items[].metadata.labels
rsync exact copy
sudo rsync -axHAWXS --numeric-ids --info=progress2 /mnt/sourcePart/ /mnt/destPart
Discover targets on the iSCSI server
sudo iscsiadm --mode discovery -t sendtargets --portal storage-server-lagg.lan.stamx.nl
Log in to the iSCSI target (attach disk)
sudo iscsiadm --mode node --targetname iqn.2005-10.org.freenas.ctl:<disk-name> --portal storage-server-lagg.lan.stamx.nl --login
Log out of the iSCSI target (detach disk)
sudo iscsiadm --mode node --targetname iqn.2005-10.org.freenas.ctl:<disk-name> --portal storage-server-lagg.lan.stamx.nl -u
- Make sure that the container that uses the volume has stopped.
- SSH into one of the nodes in the cluster and start discovery
sudo iscsiadm -m discovery -t st -p truenas-master.lan.stamx.nl && \
read -p "Enter the disk name: " DISKNAME && \
export DISKNAME
- Login to target
sudo iscsiadm --mode node --targetname iqn.2005-10.org.freenas.ctl:${DISKNAME} --portal truenas-master.lan.stamx.nl --login && \
sleep 5 && \
lsblk && \
read -p "Enter the device ('sda' for example): " DEVICENAME && \
export DEVICENAME
- Create a local mount point & mount to replay logfile
sudo mkdir -vp /mnt/data-0 && sudo mount /dev/${DEVICENAME} /mnt/data-0/
- Unmount the device
sudo umount /mnt/data-0/
- Run check / ncheck
sudo xfs_repair -n /dev/${DEVICENAME}; sudo xfs_ncheck /dev/${DEVICENAME}
echo "If filesystem corruption was corrected due to replay of the logfile, the xfs_ncheck should produce a list of nodes and pathnames, instead of the errorlog."
- If needed run xfs repair
sudo xfs_repair /dev/${DEVICENAME}
- Logout from target
sudo iscsiadm --mode node --targetname iqn.2005-10.org.freenas.ctl:${DISKNAME} --portal storage-server-lagg.lan.stamx.nl --logout
echo "Volumes are now ready to be mounted as PVCs."
Rename a ZFS zvol
zfs rename r01_1tb/k8s/{current zvol name} r01_1tb/k8s/{new zvol name}
Grow the root partition, LVM volume, and XFS filesystem after enlarging the VM disk
sudo dnf install cloud-utils-growpart
sudo growpart /dev/sda 2
sudo pvresize /dev/sda2
sudo lvextend -l +100%FREE /dev/mapper/rl-root
sudo xfs_growfs /
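Afterwards, confirm the root filesystem picked up the extra space:
# Verify the new partition and filesystem sizes
lsblk
df -h /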
- Download the minimal .xz-compressed image file from https://fi.mirror.armbian.de/archive/odroidc4/archive/
- Write the .xz-compressed image with a tool such as USBImager or balenaEtcher.
- Insert the SD/MMC and boot
- Log in via SSH as user root (default password: 1234).
- Run Ansible playbook for Odroid
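The exact playbook name depends on this repository's layout; as a purely hypothetical placeholder, the invocation follows the same pattern as the cluster playbooks above:
# <odroid_playbook>.yaml is a placeholder; use the actual Odroid playbook from this repo
sudo ansible-playbook --ask-vault-pass -kK <odroid_playbook>.yaml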