
Kubernetes upgrade

Sam Leeflang edited this page Apr 23, 2024 · 1 revision

Kubernetes upgrades can be done through Terraform by bumping the version number, for example `cluster_version` from `1.29` to `1.30`. However, we have hit an issue with the EBS CSI controller: for some reason, Terraform destroys the EBS controller IAM role and policy and only recreates them later. Because the role and policy are missing at the moment the cluster updates, the upgrade fails. To circumvent this issue, we can manually create the role and policy, using the role and policy from another environment as a blueprint:

1. Trigger Terraform with an `apply` and confirm with `yes`. For test, this will remove `dissco-k8s-test-ebs-csi-controller-sa-irsa` and `dissco-k8s-test-aws-ebs-csi-driver-irsa`.
2. Recreate the policy and role manually, based on, for example, the ones for Acceptance. The policy can be taken over exactly as-is; only change the name to the test name. For the role, the OIDC provider needs to be changed; it can be taken over from another test role.
3. Once these are created, the upgrade of Kubernetes will succeed, but Terraform will fail as it tries to recreate the roles.
4. When Terraform has had a full run and Kubernetes has been upgraded, remove the manually created role and policy and trigger Terraform again. Terraform will now create the EBS controller role and policy itself, and you are done.

Congratulations on the upgrade!
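As a rough illustration of the pieces involved, the sketch below shows the version bump and the shape of the IRSA role that Terraform destroys and recreates. This is a sketch only: the module layout, account ID, region, and OIDC provider ID are placeholders, not the actual DiSSCo configuration. When creating the role manually as a workaround, mirror a trust policy like this one, substituting the OIDC provider of the cluster being upgraded (copy it from another role in the same environment).

```hcl
# Bumping the cluster version (assumed module layout, for illustration):
module "eks" {
  # ...existing configuration...
  cluster_version = "1.30" # was "1.29"
}

# Shape of the IRSA role for the EBS CSI controller. <ACCOUNT_ID>, <REGION>,
# and <OIDC_ID> are placeholders; take the real values from an existing role
# in the same environment.
resource "aws_iam_role" "ebs_csi_controller" {
  name = "dissco-k8s-test-aws-ebs-csi-driver-irsa" # match the environment's naming

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRoleWithWebIdentity"
      Principal = {
        Federated = "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
      }
      Condition = {
        StringEquals = {
          # Service account name assumed to be the EBS CSI driver default.
          "oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub" = "system:serviceaccount:kube-system:ebs-csi-controller-sa"
        }
      }
    }]
  })
}
```

The OIDC provider ARN and the `:sub` condition key are the parts that differ per environment, which is why the role (unlike the policy) cannot be copied over unchanged.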
