[bug] Upgrades not working as expected #932
Comments
Confirmed with the k0smotron team that there is a related issue regarding status reporting: k0sproject/k0smotron#911. While trying to reproduce the issue on the AWS provider, I discovered that the CAPI controller is failing on:
We've recently faced similar preflight check errors on EKS deployments (#907), and there is a workaround: add
After I applied the workaround, the upgrade proceeded and completed successfully. I'll check whether this workaround applies to other providers and whether we can fix it in the cluster templates.
Describe the bug
After initiating the upgrade, only one node is updated. The expected outcome is that all nodes are upgraded.
To Reproduce
1. Set up and run the demos from https://github.com/k0rdent/demos
2. Run the steps for demo 1, then the steps for demo 2.
3. Observe that the output shows only one node upgraded:
```
KUBECONFIG="kubeconfigs/k0rdent-aws-test1.kubeconfig" PATH=$PATH:./bin kubectl get node -w
NAME                               STATUS   ROLES           AGE   VERSION
k0rdent-aws-test1-cp-0             Ready    control-plane   13h   v1.31.3+k0s
k0rdent-aws-test1-md-vrsp4-twb48   Ready    <none>          13h   v1.31.2+k0s
k0rdent-aws-test1-md-vrsp4-z76tx   Ready    <none>          13h   v1.31.2+k0s
```
Expected behavior
I would expect all nodes to be running v1.31.3+k0s
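As a quick check of that expectation, the `kubectl get node` output above can be parsed to confirm whether every node reports the same version. This is a hypothetical helper (`versions_match` is not part of the demos or of k0rdent), sketched purely to make the pass/fail condition concrete:

```python
# Hypothetical helper: return True only if every node in `kubectl get node`
# output reports the same version (the expected post-upgrade state).
def versions_match(kubectl_output: str) -> bool:
    rows = kubectl_output.strip().splitlines()[1:]  # drop the header row
    versions = {row.split()[-1] for row in rows if row.strip()}
    return len(versions) == 1

# Output captured in this report: one node on v1.31.3+k0s, two on v1.31.2+k0s.
report = """\
NAME                               STATUS   ROLES           AGE   VERSION
k0rdent-aws-test1-cp-0             Ready    control-plane   13h   v1.31.3+k0s
k0rdent-aws-test1-md-vrsp4-twb48   Ready    <none>          13h   v1.31.2+k0s
k0rdent-aws-test1-md-vrsp4-z76tx   Ready    <none>          13h   v1.31.2+k0s
"""
print(versions_match(report))  # False: the upgrade did not reach all nodes
```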
Additional context
Seen across all providers (e.g. AWS, OpenStack), so the issue must be somewhere shared, maybe in k0smotron?