v1.4.0-alpha.0 - 2021-11-29
- [BREAKING] GCP: Default operating system for control plane instances is now Ubuntu 20.04 (#1576)
  - Make sure to bind `control_plane_image_family` to the image you're currently using or Terraform might recreate all your control plane instances
- [BREAKING] Azure: Default VM type is changed to `Standard_F2` (#1528)
  - Make sure to bind `control_plane_vm_size` and `worker_vm_size` to the VM size you're currently using or Terraform might recreate all your instances
- Add CCM/CSI migration support for clusters with static worker nodes (#1544)
- Add CCM/CSI migration support for the Azure clusters (#1610)
- Automatically create cloud-config Secret for all providers if external cloud controller manager (`.cloudProvider.external`) is enabled (#1575)
- Add support for Cilium CNI (#1560, #1629)
- Add support for additional Subject Alternative Names (SANs) for the Kubernetes API server (#1599, #1603, #1606)
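  - A minimal sketch of adding extra SANs in a KubeOneCluster manifest (the `apiEndpoint.alternativeNames` field name is an assumption here; verify with `kubeone config print --full`):

    ```yaml
    # Fragment of a KubeOneCluster manifest (assumed field name, illustrative values only)
    apiEndpoint:
      host: "203.0.113.10"        # load balancer in front of the control plane
      port: 6443
      alternativeNames:           # additional SANs for the kube-apiserver serving certificate
        - "kubeapi.example.com"
        - "10.0.0.100"
    ```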
- Add a new `MachineAnnotations` field in the API used to define annotations in `MachineDeployment.Spec.Template.Spec.Annotations` (#1601)
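  - A rough sketch of how this field might be used for a dynamic worker group (the placement under `providerSpec.machineAnnotations` is an assumption for illustration, not confirmed by this changelog; check `kubeone config print --full` for the authoritative schema):

    ```yaml
    # Fragment of a KubeOneCluster manifest (assumed field placement, illustrative values only)
    dynamicWorkers:
      - name: pool1
        replicas: 3
        providerSpec:
          machineAnnotations:                    # ends up in MachineDeployment.Spec.Template.Spec.Annotations
            example.com/owner: "platform-team"   # hypothetical annotation
    ```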
- Add a new `--create-machine-deployments` flag to the `kubeone apply` command used to control whether KubeOne should create initial MachineDeployment objects when provisioning the cluster (default is `true`) (#1617)
- Integrate the AWS CCM addon with KubeOne (#1585)
  - The AWS CCM is now deployed if the external cloud provider (`.cloudProvider.external`) is enabled
  - This option cannot be enabled for existing AWS clusters running the in-tree cloud provider; instead, those clusters must go through the CCM/CSI migration process
- Add the AWS EBS CSI driver addon (#1597)
  - Automatically deploy the AWS EBS CSI driver addon if external cloud controller manager (`.cloudProvider.external`) is enabled
  - Add default StorageClass for AWS EBS CSI driver to the `default-storage-class` embedded addon
- Integrate the Azure CCM addon with KubeOne (#1561, #1579)
  - The Azure CCM is now deployed if the external cloud provider (`.cloudProvider.external`) is enabled
  - This option cannot be enabled for existing Azure clusters running the in-tree cloud provider; instead, those clusters must go through the CCM/CSI migration process
- Add the AzureFile CSI driver addon (#1575, #1579)
  - Automatically deploy the AzureFile CSI driver addon if external cloud controller manager (`.cloudProvider.external`) is enabled
  - Add default StorageClass for AzureFile CSI driver to the `default-storage-class` embedded addon
- Add the AzureDisk CSI driver addon (#1577)
  - Automatically deploy the AzureDisk CSI driver addon if external cloud controller manager (`.cloudProvider.external`) is enabled
  - Add default StorageClass for AzureDisk CSI driver to the `default-storage-class` embedded addon
- Add a deprecation warning for PodSecurityPolicies (#1595)
- Validate the cluster name to ensure it's a correct DNS subdomain (RFC 1123) (#1641, #1646, #1648)
- Create MachineDeployments only for newly-provisioned clusters (#1627)
- Show warning about LBs on CCM migration for OpenStack clusters (#1627)
- Change default Kubernetes version in the example configuration to v1.22.3 (#1605)
- Force drain nodes to remove standalone pods (#1627)
- Check for minor version when choosing kubeadm API version (#1627)
- Provide `--cluster-name` flag to the OpenStack external CCM (read PR description for more details) (#1619)
- Enable ip_tables related kernel modules and disable `nm-cloud-setup` tool on AWS for RHEL machines (#1607)
- Properly pass machine-controllers args (#1594)
- This fixes the issue causing machine-controller and machine-controller-webhook deployments to run with incorrect flags
- If you created your cluster with KubeOne 1.2 or older, and already upgraded to KubeOne 1.3, we recommend running kubeone apply again with KubeOne 1.3.2 or newer to properly reconcile machine-controller deployments
- Fix `yum versionlock delete containerd.io` error (#1600)
- Ensure containerd/docker are upgraded automatically when running `kubeone apply` (#1589)
- Edit SELinux config file only if file exists (#1532)
- Add new "required" addons template function (#1618)
- Replace critical-pod annotation with priorityClassName (#1627)
- Default image in the cluster-autoscaler addon and allow the image to be overridden using addon parameters (#1552)
- Minor improvements to OpenStack CCM and CSI addons. OpenStack CSI controller can now be scheduled on control plane nodes (#1531)
- Deploy default StorageClass for GCP clusters if the `default-storage-class` addon is enabled (#1638)
- [BREAKING] GCP: Default operating system for control plane instances is now Ubuntu 20.04 (#1576)
  - Make sure to bind `control_plane_image_family` to the image you're currently using or Terraform might recreate all your control plane instances
- [BREAKING] Azure: Default VM type is changed to `Standard_F2` (#1528)
  - Make sure to bind `control_plane_vm_size` and `worker_vm_size` to the VM size you're currently using or Terraform might recreate all your instances
- OpenStack: Open NodePorts by default (#1530)
- AWS: Open NodePorts by default (#1535)
- GCE: Open NodePorts by default (#1529)
- Hetzner: Create Firewall by default (#1533)
- Azure: Open NodePorts by default (#1528)
- Fix keepalived script in Terraform configs for vSphere to assume yes when updating repos (#1537)
- Add additional Availability Set used for worker nodes to Terraform configs for Azure (#1556)
- Make sure to check the production recommendations for Azure clusters for more information about how this additional availability set is used
- Update machine-controller to v1.37.0 (#1647)
- machine-controller is now using Ubuntu 20.04 instead of 18.04 by default for all newly-created Machines on AWS, Azure, DO, GCE, Hetzner, Openstack and Equinix Metal
- Update Hetzner Cloud Controller Manager to v1.12.0 (#1583)
- Update Go to 1.17.1 (#1534, #1541, #1542, #1545)
- Remove the PodPresets feature (#1593)
- If you're still using this feature, make sure to migrate away before upgrading to this KubeOne release
- Remove Ansible examples (#1633)
v1.3.2 - 2021-11-18
- Create MachineDeployments only for newly-provisioned clusters (#1628)
- Show warning about LBs on CCM migration for OpenStack clusters (#1628)
- Force drain nodes to remove standalone pods (#1628)
- Check for minor version when choosing kubeadm API version (#1628)
- Provide --cluster-name flag to the OpenStack external CCM (read PR description for more details) (#1632)
- Enable ip_tables related kernel modules and disable nm-cloud-setup tool on AWS for RHEL machines (#1616)
- Properly pass machine-controllers args (#1596)
- This fixes the issue causing machine-controller and machine-controller-webhook deployments to run with incorrect flags
- If you created your cluster with KubeOne 1.2 or older, and already upgraded to KubeOne 1.3, we recommend running kubeone apply again to properly reconcile machine-controller deployments
- Edit SELinux config file only if file exists (#1592)
- Fix `yum versionlock delete containerd.io` error (#1602)
- Ensure containerd/docker are upgraded automatically when running `kubeone apply` (#1590)
- Add new "required" addons template function (#1624)
- Replace critical-pod annotation with priorityClassName (#1628)
- Update Hetzner Cloud Controller Manager to v1.12.0 (#1592)
- Default image in the cluster-autoscaler addon and allow the image to be overridden using addon parameters (#1553)
- Minor improvements to OpenStack CCM and CSI addons. OpenStack CSI controller can now be scheduled on control plane nodes (#1536)
- OpenStack: Open NodePorts by default (#1592)
- GCE: Open NodePorts by default (#1592)
- Azure: Open NodePorts by default (#1592)
- Azure: Default VM type is changed to Standard_F2 (#1592)
- Add additional Availability Set used for worker nodes to Terraform configs for Azure (#1562)
- Make sure to check the production recommendations for Azure clusters for more information about how this additional availability set is used
- Fix keepalived script in Terraform configs for vSphere to assume yes when updating repos (#1538)
- Remove Ansible examples (#1634)
v1.3.1 - unreleased
The v1.3.1 release was never published due to an issue with the release process. Please check the v1.3.2 release instead.
v1.3.0 - 2021-09-15
Check out the Upgrading from 1.2 to 1.3 tutorial for more details about the breaking changes and how to mitigate them.
- Increase the minimum Kubernetes version to v1.19.0. If you have Kubernetes clusters running v1.18 or older, you need to use an older KubeOne release to upgrade them to v1.19, and then upgrade to KubeOne 1.3.
- Increase the minimum Terraform version to 1.0.0.
- Remove support for Debian and RHEL 7 clusters. If you have Debian clusters, we recommend migrating to another operating system, for example Ubuntu. If you have RHEL 7 clusters, you should consider migrating to RHEL 8 which is supported.
- Automatically deploy CSI plugins for Hetzner, OpenStack, and vSphere clusters using external cloud provider. If you already have the CSI plugin deployed, you need to make sure that your CSI plugin deployment is compatible with the KubeOne CSI plugin addon.
- The `kubeone reset` command requires an explicit confirmation like the `apply` command starting with this release. The command can be automatically approved by using the `--auto-approve` flag.
- KubeOne Addons can now be organized into subdirectories. It currently remains possible to put addons in the root of the addons directory, however, this option is considered deprecated as of this release. We highly recommend that all users reorganize their addons into subdirectories, where each subdirectory holds the YAML manifests related to one addon.
- We're deprecating support for CentOS 8 because it's reaching End-of-Life (EOL) on December 31, 2021. CentOS 7 remains supported by KubeOne for now.
- It's currently not possible to provision or upgrade to Kubernetes 1.22 for clusters running on vSphere. This is because vSphere CCM and CSI don't support Kubernetes 1.22. We'll introduce Kubernetes 1.22 support for vSphere as soon as new CCM and CSI releases with support for Kubernetes 1.22 are out.
- Newly-provisioned Kubernetes 1.22 clusters or clusters upgraded from Kubernetes 1.21 to 1.22 using KubeOne 1.3.0-alpha.1 use a metrics-server version incompatible with Kubernetes 1.22. This might cause issues with deleting Namespaces that manifest as the Namespace being stuck in the Terminating state. This can be fixed by upgrading KubeOne to v1.3.0-rc.0 or newer and running `kubeone apply`.
- The new Addons API requires the addons directory path (`.addons.path`) to be provided and the directory must exist (it can be empty), even if only embedded addons are used. If the path is not provided, it defaults to `./addons`.
- Implement the Addons API used to manage addons deployed by KubeOne. The new Addons API can be used to deploy the addons embedded in the KubeOne binary. Currently available addons are: `backups-restic`, `cluster-autoscaler`, `default-storage-class`, and `unattended-upgrades` (#1462, #1486)
  - More information about the new API can be found in the Addons documentation or by running `kubeone config print --full`.
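  - A minimal sketch of enabling embedded addons through the Addons API (illustrative; treat the field names as a sketch and confirm them with `kubeone config print --full`):

    ```yaml
    # Fragment of a KubeOneCluster manifest
    addons:
      enable: true
      # The directory must exist even if only embedded addons are used; defaults to ./addons
      path: "./addons"
      addons:
        - name: default-storage-class
        - name: unattended-upgrades
    ```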
- Add support for specifying a custom Root CA bundle (#1316)
- Add new kube-proxy configuration API (#1420)
- This API allows users to switch kube-proxy to IPVS mode, and configure IPVS properties such as strict ARP and scheduler
- The default kube-proxy mode remains iptables
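  - A rough sketch of switching kube-proxy to IPVS mode (the `clusterNetwork.kubeProxy` field names are assumptions for illustration; confirm with `kubeone config print --full`):

    ```yaml
    # Fragment of a KubeOneCluster manifest (assumed field names, illustrative values only)
    clusterNetwork:
      kubeProxy:
        ipvs:
          scheduler: "rr"      # IPVS scheduler, e.g. round-robin
          strictArp: true      # strict ARP, often needed by bare-metal load balancers
    ```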
- Docker to containerd automated migration (#1362)
- Check out the Migrating to containerd document for more details about this feature, including how to use it.
- Add containerd support for Flatcar clusters (#1340)
- Add support for Kubernetes 1.22 (#1447, #1456)
- Add Cinder CSI plugin. The plugin is deployed by default for OpenStack clusters using the external cloud provider (#1465)
- Check out the Attention Needed section of the changelog for more information.
- Add vSphere CSI plugin. The CSI plugin is deployed automatically if `.cloudProvider.csiConfig` is provided and `.cloudProvider.external` is enabled (#1484)
  - More information about the CSI plugin configuration can be found in the vSphere CSI docs
  - Check out the Attention Needed section of the changelog for more information.
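  - A rough sketch of wiring the two fields together (the INI payload below is purely illustrative; the real contents are described in the vSphere CSI docs):

    ```yaml
    # Fragment of a KubeOneCluster manifest (illustrative values only)
    cloudProvider:
      vsphere: {}
      external: true
      csiConfig: |
        [Global]
        cluster-id = "example-cluster"

        [VirtualCenter "vcenter.example.com"]
        user = "administrator@vsphere.local"
        password = "changeme"
        datacenters = "dc-1"
    ```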
- Add Hetzner CSI plugin (#1418)
- Check out the Attention Needed section of the changelog for more information.
- Implement the CCM/CSI migration for OpenStack and vSphere (#1468, #1469, #1472, #1482, #1487, #1494)
- Check out the CCM/CSI migration document for more details about this feature, including how to use it.
- Add support for Encryption Providers (#1241, #1320)
- Check out the Enabling Kubernetes Encryption Providers document for more details about this feature, including how to use it.
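  - A minimal sketch of enabling the feature (assuming the `features.encryptionProviders` field name; the Enabling Kubernetes Encryption Providers document is the authoritative reference):

    ```yaml
    # Fragment of a KubeOneCluster manifest (assumed field name)
    features:
      encryptionProviders:
        enable: true
    ```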
- Add a new `kubeone config images list` subcommand to list images used by KubeOne and kubeadm (#1334)
- Automatically renew Kubernetes certificates when running `kubeone apply` if they're supposed to expire in less than 90 days (#1300)
- Add support for running Kubernetes clusters on Amazon Linux 2 (#1339)
- Use the kubeadm v1beta3 API for all Kubernetes 1.22+ clusters (#1457)
- Implement a mechanism for embedding YAML addons into KubeOne binary. The embedded addons can be enabled or overridden using the Addons API (#1387)
- Support organizing addons into subdirectories (#1364)
- Add a new optional embedded addon `default-storage-class` used to deploy default StorageClass for AWS, Azure, GCP, OpenStack, vSphere, or Hetzner clusters (#1488)
- Add a new KubeOne addon for handling unattended upgrades of the operating system (#1291)
- Increase the minimum Kubernetes version to v1.19.0. If you have Kubernetes clusters running v1.18 or older, you need to use an older KubeOne release to upgrade them to v1.19, and then upgrade to KubeOne 1.3.
- The `kubeone reset` command requires an explicit confirmation like the `apply` command starting with this release. The command can be automatically approved by using the `--auto-approve` flag
- Improve the `kubeone reset` output to include more information about the target cluster (#1474)
- Make `kubeone apply` skip already provisioned static worker nodes (#1485)
- Extend restart API server script to handle failing `crictl logs` due to missing symlink. This fixes the issue with `kubeone apply` failing to restart the API server containers when provisioning or upgrading the cluster (#1448)
- Fix subsequent apply failures if CABundle is enabled (#1404)
- Fix NPE when migrating to containerd (#1499)
- Fix adding second container to the machine-controller-webhook Deployment (#1433)
- Fix missing ClusterRole rule for cluster-autoscaler (#1331)
- Fix missing confirmation for reset (#1251)
- Fix kubeone reset error when trying to list Machines (#1416)
- Ignore preexisting static manifests kubeadm preflight error (#1335)
- Upgrade Terraform to 1.0.0. The minimum Terraform version as of this KubeOne release is v1.0.0. (#1368, #1376)
- Use latest available (wildcard) docker and containerd version (#1358)
- Update machine-controller to v1.35.2 (#1489)
- Update metrics-server to v0.5.0. This fixes support for Kubernetes 1.22 clusters (#1483)
- The metrics-server now uses serving certificates signed by the Kubernetes CA instead of the self-signed certificates.
- OpenStack CCM version now depends on the Kubernetes version (#1465)
- vSphere CCM (CPI) version now depends on the Kubernetes version (#1489)
- Kubernetes 1.22+ clusters are currently unsupported on vSphere (see Known Issues for more details)
- Update Hetzner CCM to v1.9.1 (#1428)
  - Add `HCLOUD_LOAD_BALANCERS_USE_PRIVATE_IP=true` to the environment if the network is configured
- Update Hetzner CSI driver to v1.6.0 (#1491)
- Update DigitalOcean CCM to v0.1.33 (#1429)
- Upgrade machine-controller addon apiextensions to v1 API (#1423)
- Update Go to 1.16.7 (#1441)
- Replace the Canal CNI Go template with an embedded addon (#1405)
- Replace the WeaveNet Go template with an embedded addon (#1407)
- Replace the NodeLocalDNS template with an addon (#1392)
- Replace the metrics-server CCM Go template with an embedded addon (#1411)
- Replace the machine-controller Go template with an embedded addon (#1412)
- Replace the DigitalOcean CCM Go template with an embedded addon (#1396)
- Replace the Hetzner CCM Go template with an embedded addon (#1397)
- Replace the Packet CCM Go template with an embedded addon (#1401)
- Replace the OpenStack CCM Go template with an embedded addon (#1402)
- Replace the vSphere CCM Go template with an embedded addon (#1410)
- Upgrade calico-vxlan CNI plugin addon to v3.19.1 (#1403)
- Inherit the firmware settings from the template VM in the Terraform configs for vSphere (#1445)
- Remove CSIMigration and CSIMigrationComplete fields from the API (#1473)
- Those two fields were non-functional since they were added, so this change shouldn't affect users.
  - If you have either of those two fields set in the KubeOneCluster manifest, make sure to remove them, otherwise the validation will fail.
- Remove CNI patching (#1386)
v1.3.0-rc.0 - 2021-09-06
- [BREAKING/ACTION REQUIRED] Increase the minimum Kubernetes version to v1.19.0 (#1466)
- If you have Kubernetes clusters running v1.18 or older, you need to use an older KubeOne release to upgrade them to v1.19, and then upgrade to KubeOne 1.3.
- Check out the Compatibility guide for more information about supported Kubernetes versions for each KubeOne release.
- [BREAKING/ACTION REQUIRED] Add support for CSI plugins for clusters using external cloud provider (i.e. `.cloudProvider.external` is enabled)
  - The Cinder CSI plugin is deployed by default for OpenStack clusters (#1465)
  - The Hetzner CSI plugin is deployed by default for Hetzner clusters
  - The vSphere CSI plugin is deployed by default if the CSI plugin configuration is provided via newly-added `cloudProvider.csiConfig` field
    - More information about the CSI plugin configuration can be found in the vSphere CSI docs: https://vsphere-csi-driver.sigs.k8s.io/driver-deployment/installation.html#create_csi_vsphereconf
    - Note: the vSphere CSI plugin requires vSphere version 6.7U3.
  - The default StorageClass is not deployed by default. It can be deployed via the new Addons API by enabling the `default-storage-class` addon, or manually.
  - ACTION REQUIRED: If you already have the CSI plugin deployed, you need to make sure that your CSI plugin deployment is compatible with the KubeOne CSI plugin addon.
  - You can find the CSI addons in the `addons` directory: https://github.com/kubermatic/kubeone/tree/master/addons
  - If your CSI plugin deployment is incompatible with the KubeOne CSI addon, you can resolve it in one of the following ways:
    - Delete your CSI deployment and let KubeOne install the CSI driver for you. Note: you won't be able to mount volumes until the CSI driver is installed again.
    - Override the appropriate CSI addon with your deployment manifest. This way, KubeOne will install the CSI plugin using your manifests. To do this, you need to:
      - Enable addons in the KubeOneCluster manifest (`.addons.enable`) and provide the path to the addons directory (`.addons.path`, for example: `./addons`)
      - Create a subdirectory in the addons directory named the same as the CSI addon used by KubeOne, for example `./addons/csi-openstack-cinder` or `./addons/csi-vsphere` (see https://github.com/kubermatic/kubeone/tree/master/addons for addon names)
      - Put your CSI deployment manifests in the newly created subdirectory
- It's currently not possible to provision or upgrade to Kubernetes 1.22 for clusters running on vSphere. This is because vSphere CCM and CSI don't support Kubernetes 1.22. We'll introduce Kubernetes 1.22 support for vSphere as soon as new CCM and CSI releases with support for Kubernetes 1.22 are out.
- Clusters provisioned with Kubernetes 1.22 or upgraded from 1.21 to 1.22 using KubeOne 1.3.0-alpha.1 use a metrics-server version incompatible with Kubernetes 1.22. This might cause issues with deleting Namespaces that manifest as the Namespace being stuck in the Terminating state. This can be fixed by upgrading the metrics-server by running `kubeone apply`.
- The new Addons API requires the addons directory path (`.addons.path`) to be provided and the directory must exist (it can be empty), even if only embedded addons are used. If the path is not provided, it defaults to `./addons`.
- Implement the Addons API used to manage addons deployed by KubeOne (#1462, #1486)
- The new Addons API can be used to deploy the addons embedded in the KubeOne binary.
- Currently available addons are: `backups-restic`, `default-storage-class`, and `unattended-upgrades`.
- More information about the new API can be found by running `kubeone config print --full`.
- [BREAKING/ACTION REQUIRED] Add support for the Cinder CSI plugin (#1465)
- The plugin is deployed by default for OpenStack clusters using the external cloud provider.
- Check out the Attention Needed section of the changelog for more information.
- Add support for the vSphere CSI plugin (#1484)
  - Deploying the CSI plugin requires providing the CSI configuration using a newly added `.cloudProvider.csiConfig` field
    - More information about the CSI plugin configuration can be found in the vSphere CSI docs: https://vsphere-csi-driver.sigs.k8s.io/driver-deployment/installation.html#create_csi_vsphereconf
  - The CSI plugin is deployed automatically if `.cloudProvider.csiConfig` is provided and `.cloudProvider.external` is enabled
  - Check out the Attention Needed section of the changelog for more information.
- Implement the CCM/CSI migration for OpenStack and vSphere (#1468, #1469, #1472, #1482, #1487, #1494)
  - The CCM/CSI migration is used to migrate clusters running in-tree cloud provider (i.e. with `.cloudProvider.external` set to `false`) to the external CCM (cloud-controller-manager) and CSI plugin.
  - The migration is implemented with the `kubeone migrate to-ccm-csi` command.
  - The CCM/CSI migration for vSphere is currently experimental and not tested.
  - More information about how the CCM/CSI migration works can be found by running `kubeone migrate to-ccm-csi --help`.
- Add a new optional embedded addon `default-storage-class` used to deploy default StorageClass for AWS, Azure, GCP, OpenStack, vSphere, or Hetzner clusters (#1488)
- [BREAKING/ACTION REQUIRED] Increase the minimum Kubernetes version to v1.19.0 (#1466)
- If you have Kubernetes clusters running v1.18 or older, you need to use an older KubeOne release to upgrade them to v1.19, and then upgrade to KubeOne 1.3.
- Check out the Compatibility guide for more information about supported Kubernetes versions for each KubeOne release.
- Improve the `kubeone reset` output to include more information about the target cluster (#1474)
- Make `kubeone apply` skip already provisioned static worker nodes (#1485)
- Fix NPE when migrating to containerd (#1499)
- OpenStack CCM version now depends on the Kubernetes version (#1465)
- Kubernetes 1.19 clusters use OpenStack CCM v1.19.2
- Kubernetes 1.20 clusters use OpenStack CCM v1.20.2
- Kubernetes 1.21 clusters use OpenStack CCM v1.21.0
- Kubernetes 1.22+ clusters use OpenStack CCM v1.22.0
- vSphere CCM (CPI) version now depends on the Kubernetes version (#1489)
- Kubernetes 1.19 clusters use vSphere CPI v1.19.0
- Kubernetes 1.20 clusters use vSphere CPI v1.20.0
- Kubernetes 1.21 clusters use vSphere CPI v1.21.0
- Kubernetes 1.22+ clusters are currently unsupported on vSphere (see Known Issues for more details)
- Update metrics-server to v0.5.0 (#1483)
- This fixes support for Kubernetes 1.22 clusters.
- The metrics-server now uses serving certificates signed by the Kubernetes CA instead of the self-signed certificates.
- Update machine-controller to v1.35.2 (#1489)
- Update Hetzner CSI driver to v1.6.0 (#1491)
- Remove CSIMigration and CSIMigrationComplete fields from the API (#1473)
- Those two fields were non-functional since they were added, so this change shouldn't affect users.
  - If you have either of those two fields set in the KubeOneCluster manifest, make sure to remove them, otherwise the validation will fail.
v1.3.0-alpha.1 - 2021-08-18
- Clusters provisioned with Kubernetes 1.22 or upgraded from 1.21 to 1.22 using KubeOne 1.3.0-alpha.1 use a metrics-server version incompatible with Kubernetes 1.22. This might cause issues with deleting Namespaces that manifest as the Namespace being stuck in the Terminating state. This can be fixed by upgrading to KubeOne 1.3.0-rc.0 and running `kubeone apply`.
- Add support for Kubernetes 1.22 (#1447, #1456)
- Add support for the kubeadm v1beta3 API. The kubeadm v1beta3 API is used for all Kubernetes 1.22+ clusters. (#1457)
- Fix adding second container to the machine-controller-webhook Deployment (#1433)
- Extend restart API server script to handle failing `crictl logs` due to missing symlink. This fixes the issue with `kubeone apply` failing to restart the API server containers when provisioning or upgrading the cluster (#1448)
- Update Go to 1.16.7 (#1441)
- Update machine-controller to v1.35.1 (#1440)
- Update Hetzner CCM to v1.9.1 (#1428)
  - Add `HCLOUD_LOAD_BALANCERS_USE_PRIVATE_IP=true` to the environment if the network is configured
- Update DigitalOcean CCM to v0.1.33 (#1429)
- Inherit the firmware settings from the template VM in the Terraform configs for vSphere (#1445)
v1.3.0-alpha.0 - 2021-07-21
- [BREAKING/ACTION REQUIRED] The `kubeone reset` command requires an explicit confirmation like the `apply` command starting with this release
  - Running the `reset` command requires typing `yes` to confirm the intention to unprovision/reset the cluster
  - The command can be automatically approved by using the `--auto-approve` flag
- [BREAKING/ACTION REQUIRED] Upgrade Terraform to 1.0.0. The minimum Terraform version as of this KubeOne release is v1.0.0. (#1368)
- [BREAKING/ACTION REQUIRED] Use AdmissionRegistration v1 API for machine-controller-webhook. The minimum supported Kubernetes version is now 1.16. (#1290)
- Since AdmissionRegistration v1 was introduced in Kubernetes 1.16, the minimum Kubernetes version that can be managed by KubeOne is now 1.16. If you have Kubernetes clusters running 1.15 or older, please use an older KubeOne release to upgrade those clusters
- KubeOne Addons can now be organized into subdirectories. It currently remains possible to put addons in the root of the addons directory, however, this option is considered deprecated as of this release. We highly recommend that all users reorganize their addons into subdirectories, where each subdirectory holds the YAML manifests related to one addon.
- Add new kube-proxy configuration API (#1420)
- This API allows users to switch kube-proxy to IPVS mode, and configure IPVS properties such as strict ARP and scheduler
- The default kube-proxy mode remains iptables
- Add support for Encryption Providers (#1241, #1320)
- Add support for specifying a custom Root CA bundle (#1316)
- Docker to containerd automated migration (#1362)
- Automatically renew Kubernetes certificates when running `kubeone apply` if they're supposed to expire in less than 90 days (#1300)
- Ignore preexisting static manifests kubeadm preflight error (#1335)
- Add a new `kubeone config images list` subcommand to list images used by KubeOne and kubeadm (#1334)
- Add containerd support for Flatcar clusters (#1340)
- Add support for running Kubernetes clusters on Amazon Linux 2 (#1339)
- Implement a mechanism for embedding YAML addons into KubeOne binary (#1387)
- Support organizing addons into subdirectories (#1364)
- Add a new KubeOne addon for handling unattended upgrades of the operating system (#1291)
- Add a new KubeOne addon for deploying the Hetzner CSI plugin (#1418)
- [BREAKING/ACTION REQUIRED] The `kubeone reset` command requires an explicit confirmation like the `apply` command starting with this release
  - Running the `reset` command requires typing `yes` to confirm the intention to unprovision/reset the cluster
  - The command can be automatically approved by using the `--auto-approve` flag
- Fix missing ClusterRole rule for cluster autoscaler (#1331)
- Fix missing confirmation for reset (#1251)
- Remove CNI patching (#1386)
- Fix subsequent apply failures if CABundle is enabled (#1404)
- Fix kubeone reset error when trying to list Machines (#1416)
- [BREAKING/ACTION REQUIRED] Upgrade Terraform to 1.0.0. The minimum Terraform version as of this KubeOne release is v1.0.0. (#1368, #1376)
- Use latest available (wildcard) docker and containerd version (#1358)
- Upgrade machine-controller to v1.33.0 (#1391)
- Upgrade machine-controller addon apiextensions to v1 API (#1423)
- Upgrade calico-vxlan CNI plugin addon to v3.19.1 (#1403)
- Update Go to 1.16.1 (#1267)
- Replace the Canal CNI Go template with an embedded addon (#1405)
- Replace the WeaveNet Go template with an embedded addon (#1407)
- Replace the NodeLocalDNS template with an addon (#1392)
- Replace the metrics-server CCM Go template with an embedded addon (#1411)
- Replace the machine-controller Go template with an embedded addon (#1412)
- Replace the DigitalOcean CCM Go template with an embedded addon (#1396)
- Replace the Hetzner CCM Go template with an embedded addon (#1397)
- Replace the Packet CCM Go template with an embedded addon (#1401)
- Replace the OpenStack CCM Go template with an embedded addon (#1402)
- Replace the vSphere CCM Go template with an embedded addon (#1410)
v1.2.3 - 2021-06-14
- Pass the `-node-external-cloud-provider` flag to the machine-controller-webhook. This fixes the issue with the worker nodes not using the external CCM on the clusters with the external CCM enabled (#1380)
- Disable `repo_gpgcheck` for the Kubernetes yum repository. This fixes the cluster provisioning and upgrading failures for CentOS/RHEL caused by yum failing to install Kubernetes packages (#1304)
v1.2.2 - 2021-06-11
- Fix AWS config for terraform 0.15 (#1372)
- AWS terraform config now works under terraform 0.15+ (including 1.0)
- Update machine-controller to v1.30.0 (#1370)
  - machine-controller v1.30.0 relaxes docker/containerd version constraints
- Relax docker/containerd version constraints (#1371)
v1.2.1 - 2021-03-23
Check out the changelog for the v1.2.1 release for more information about what changes were introduced in the 1.2 release.
- Install `cri-tools` (`crictl`) on Amazon Linux 2. This fixes the issue with provisioning Kubernetes and Amazon EKS-D clusters on Amazon Linux 2 (#1282)
v1.2.0 - 2021-03-18
- [BREAKING/ACTION REQUIRED] Starting with the KubeOne 1.3 release, the `kubeone reset` command will require an explicit confirmation like the `apply` command
  - Running the `reset` command will require typing `yes` to confirm the intention to unprovision/reset the cluster
  - The command can be automatically approved by using the `--auto-approve` flag
  - The `--auto-approve` flag has already been implemented as a no-op flag in this release
  - Starting with this release, running `kubeone reset` will show a warning about this change each time the `reset` command is used
- [BREAKING/ACTION REQUIRED] Disallow and deprecate the PodPresets feature
- If you're upgrading a cluster that uses the PodPresets feature from Kubernetes 1.19 to 1.20, you have to disable the PodPresets feature in the KubeOne configuration manifest
- The PodPresets feature has been removed from Kubernetes 1.20 with no built-in replacement
- It's not possible to use the PodPresets feature starting with Kubernetes 1.20, however, it currently remains possible to use it for older Kubernetes versions
- The PodPresets feature will be removed from the KubeOneCluster API once Kubernetes 1.19 reaches End-of-Life (EOL)
- As an alternative to the PodPresets feature, Kubernetes recommends using the MutatingAdmissionWebhooks.
- [BREAKING/ACTION REQUIRED] Support for CoreOS has been removed from KubeOne and machine-controller
- CoreOS has reached End-of-Life on May 26, 2020
- As an alternative to CoreOS, KubeOne supports Flatcar Linux
- We recommend migrating your CoreOS clusters to the Flatcar Linux or other supported operating system
- [BREAKING/ACTION REQUIRED] Default values for OpenIDConnect have been corrected to match what's advised by the example configuration
- Previously, there were no default values for the OpenIDConnect fields
- This might only affect users using the OpenIDConnect feature
- Kubernetes has announced deprecation of the Docker (dockershim) support in
the Kubernetes 1.20 release. It's expected that Docker support will be
removed in Kubernetes 1.22
- All newly created clusters running Kubernetes 1.21+ will be provisioned with containerd instead of Docker
- Automated migration from Docker to containerd is currently not available, but is planned for one of the upcoming KubeOne releases
- We highly recommend using containerd instead of Docker for all newly created clusters. You can opt-in to use containerd instead of Docker by adding `containerRuntime` configuration to your KubeOne configuration manifest:

  ```yaml
  containerRuntime:
    containerd: {}
  ```

  For the configuration file reference, run `kubeone config print --full`.
- Provisioning a Kubernetes or Amazon EKS-D cluster on Amazon Linux 2 will fail due to missing `crictl` binary. This bug has been fixed in the v1.2.1 release.
- Upgrading an Amazon EKS-D cluster will fail due to kubeadm preflight checks failing. We're investigating the issue and you can follow the progress by checking the issue #1284.
- Add support for Kubernetes 1.20
- Add support for containerd container runtime (#1180, #1188, #1190, #1205, #1227, #1229)
- Kubernetes has announced deprecation of the Docker (dockershim) support in the Kubernetes 1.20 release. It's expected that Docker support will be removed in Kubernetes 1.22 or 1.23
- All newly created clusters running Kubernetes 1.21+ will use containerd instead of Docker by default
- Automated migration from Docker to containerd for existing clusters is currently not available, but is planned for one of the upcoming KubeOne releases
- Add support for Debian on control plane and static worker nodes (#1233)
- Debian is currently not supported by machine-controller, so it's not possible to use it on worker nodes managed by Kubermatic machine-controller
- Add alpha-level support for Amazon Linux 2 (#1167, #1173, #1175, #1176)
- Currently, all Kubernetes packages are installed by downloading binaries instead of using packages. Therefore, users are required to provide URLs using the new AssetConfiguration API to the CNI tarball, the Kubernetes Node binaries tarball (can be found in the Kubernetes CHANGELOG), and to the kubectl binary. Support for package managers is planned for the future.
- Add alpha-level AssetConfiguration API (#1170, #1171)
- The AssetConfiguration API controls how assets are pulled
- You can use it to specify custom images for containers or custom URLs for binaries
- Currently-supported assets are CNI, Kubelet and Kubeadm (by providing a node binaries tarball), Kubectl, the control plane images, and the metrics-server image
- Changing the binary assets (CNI, Kubelet, Kubeadm and Kubectl) currently works only on Amazon Linux 2. Changing the image assets works on all supported operating systems
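  - A rough sketch of what an AssetConfiguration override could look like (field names and URLs are illustrative assumptions; run `kubeone config print --full` for the actual reference):

    ```yaml
    # Fragment of a KubeOneCluster manifest (assumed field names, example URLs only)
    assetConfiguration:
      cni:
        url: "https://example.com/assets/cni-plugins-linux-amd64-v0.8.7.tgz"
      nodeBinaries:
        url: "https://example.com/assets/kubernetes-node-linux-amd64.tar.gz"
      kubectl:
        url: "https://example.com/assets/kubectl"
    ```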
- Add `Annotations` field to the `ProviderSpec` API used to add annotations to MachineDeployment objects (#1174)
- Add support for defining Static Worker nodes in Terraform (#1166)
- Add scrape Prometheus headless service for NodeLocalDNS (#1165)
- [BREAKING/ACTION REQUIRED] Default values for OpenIDConnect have been corrected to match what's advised by the example configuration (#1235)
- Previously, there were no default values for the OpenIDConnect fields
- This might only affect users using the OpenIDConnect feature
- [BREAKING/ACTION REQUIRED] Disallow and deprecate the PodPresets feature (#1236)
- If you're upgrading a cluster that uses the PodPresets feature from Kubernetes 1.19 to 1.20, you have to disable the PodPresets feature in the KubeOne configuration manifest
- The PodPresets feature has been removed from Kubernetes 1.20 with no built-in replacement
- It's not possible to use the PodPresets feature starting with Kubernetes 1.20, however, it currently remains possible to use it for older Kubernetes versions
- The PodPresets feature will be removed from the KubeOneCluster API once Kubernetes 1.19 reaches End-of-Life (EOL)
- As an alternative to the PodPresets feature, Kubernetes recommends using the MutatingAdmissionWebhooks.
- Warn about `kubeone reset` requiring explicit confirmation starting with KubeOne 1.3 (#1252)
- Build KubeOne using Go 1.16.1 (#1268, #1267)
- Stop Kubelet and reload systemd when removing binaries on CoreOS/Flatcar (#1176)
- Add rsync on CentOS and Amazon Linux (#1240)
- Drop mounting Flexvolume plugins into the OpenStack CCM. This fixes the issue with deploying the OpenStack CCM on the clusters running Flatcar Linux (#1234)
- Ensure all credentials are available to be used in addons. This fixes the issue with the Backups addon not working on non-AWS providers (#1248)
- Fix wrong legacy Docker version on RPM systems (#1191)
- Replace GoBetween load-balancer in vSphere Terraform example by keepalived (#1217)
- Fix DNS resolution issues for the Backups addon (#1179)
- [BREAKING/ACTION REQUIRED] Support for CoreOS has been removed from KubeOne and machine-controller (#1232)
- CoreOS has reached End-of-Life on May 26, 2020
- As an alternative to CoreOS, KubeOne supports Flatcar Linux
- We recommend migrating your CoreOS clusters to the Flatcar Linux or other supported operating system
v1.2.0-rc.1 - 2021-03-12
v1.2.0-rc.0 - 2021-03-08
- [BREAKING/ACTION REQUIRED] Starting with the KubeOne 1.3 release, the `kubeone reset` command will require an explicit confirmation like the `apply` command
  - Running the `reset` command will require typing `yes` to confirm the intention to unprovision/reset the cluster
  - The command can be automatically approved by using the `--auto-approve` flag
  - The `--auto-approve` flag has already been implemented as a no-op flag in this release
  - Starting with this release, running `kubeone reset` will show a warning about this change each time the `reset` command is used
- Warn about `kubeone reset` requiring explicit confirmation starting with KubeOne 1.3 (#1252)
v1.2.0-beta.1 - 2021-02-17
- [Breaking] Support for CoreOS has been removed from KubeOne and machine-controller
- CoreOS has reached End-of-Life on May 26, 2020
- As an alternative to CoreOS, KubeOne supports Flatcar Linux
- We recommend migrating your CoreOS clusters to the Flatcar Linux or other supported operating system
- [Breaking] Default values for OpenIDConnect have been corrected to match what's advised by the example configuration
- Previously, there were no default values for the OpenIDConnect fields
- This might only affect users using the OpenIDConnect feature
- [Breaking] Disallow and deprecate the PodPresets feature
- [Action Required] If you're upgrading a cluster that uses the PodPresets feature from Kubernetes 1.19 to 1.20, you have to disable the PodPresets feature in the KubeOne configuration manifest
- The PodPresets feature has been removed from Kubernetes 1.20 with no built-in replacement
- It's not possible to use the PodPresets feature starting with Kubernetes 1.20, however, it currently remains possible to use it for older Kubernetes versions
- The PodPresets feature will be removed from the KubeOneCluster API once Kubernetes 1.19 reaches End-of-Life (EOL)
- As an alternative to the PodPresets feature, Kubernetes recommends using the MutatingAdmissionWebhooks.
- Add support for Kubernetes 1.20
- Previously, we've shared that there is an issue affecting newly created clusters where the first control plane node is unhealthy/broken for the first 5-10 minutes. We've investigated the issue and found out that the issue can be successfully mitigated by restarting the first API server. We've implemented a task that automatically restarts the API server if it's affected by the issue (#1243, #1245)
- Add support for Debian on control plane and static worker nodes (#1233)
- Debian is currently not supported by machine-controller, so it's not possible to use it on worker nodes managed by machine-controller
- [Breaking] Default values for OpenIDConnect have been corrected to match what's advised by the example configuration (#1235)
- Previously, there were no default values for the OpenIDConnect fields
- This might only affect users using the OpenIDConnect feature
- [Breaking] Disallow and deprecate the PodPresets feature (#1236)
- [Action Required] If you're upgrading a cluster that uses the PodPresets feature from Kubernetes 1.19 to 1.20, you have to disable the PodPresets feature in the KubeOne configuration manifest
- The PodPresets feature has been removed from Kubernetes 1.20 with no built-in replacement
- It's not possible to use the PodPresets feature starting with Kubernetes 1.20, however, it currently remains possible to use it for older Kubernetes versions
- The PodPresets feature will be removed from the KubeOneCluster API once Kubernetes 1.19 reaches End-of-Life (EOL)
- As an alternative to the PodPresets feature, Kubernetes recommends using the MutatingAdmissionWebhooks.
- Add rsync on CentOS and Amazon Linux (#1240)
- Drop mounting Flexvolume plugins into the OpenStack CCM. This fixes the issue with deploying the OpenStack CCM on the clusters running Flatcar Linux (#1234)
- Ensure all credentials are available to be used in addons. This fixes the issue with the Backups addon not working on non-AWS providers (#1248)
- Update machine-controller to v1.25.0 (#1238)
- [Breaking] Support for CoreOS has been removed from KubeOne and machine-controller (#1232)
- CoreOS has reached End-of-Life on May 26, 2020
- As an alternative to CoreOS, KubeOne supports Flatcar Linux
- We recommend migrating your CoreOS clusters to the Flatcar Linux or other supported operating system
v1.2.0-beta.0 - 2021-01-27
- Kubernetes has announced deprecation of the Docker (dockershim) support in
the Kubernetes 1.20 release. It's expected that Docker support will be
removed in Kubernetes 1.22
- All newly created clusters running Kubernetes 1.21+ will be provisioned with containerd instead of Docker
- Automated migration from Docker to containerd is currently not available, but is planned for one of the upcoming KubeOne releases
- We highly recommend using containerd instead of Docker for all newly created clusters. You can opt-in to use containerd instead of Docker by adding `containerRuntime` configuration to your KubeOne configuration manifest:

  ```yaml
  containerRuntime:
    containerd: {}
  ```

  For the configuration file reference, run `kubeone config print --full`.
- Provisioning Kubernetes 1.20 clusters results in one of the control plane nodes being unhealthy/broken for the first 5-10 minutes after provisioning the cluster. This causes KubeOne to fail to create MachineDeployment objects because the `machine-controller-webhook` service can't be found. Also, one of the NodeLocalDNS pods might get stuck in a crash loop.
- KubeOne currently still doesn't support Kubernetes 1.20. We do not recommend provisioning 1.20 clusters or upgrading existing clusters to Kubernetes 1.20
- We're currently investigating the issue. You can follow the progress in the issue #1222
- Add support for containerd container runtime (#1180, #1188, #1190, #1205, #1227, #1229)
- Kubernetes has announced deprecation of the Docker (dockershim) support in the Kubernetes 1.20 release. It's expected that Docker support will be removed in Kubernetes 1.22
- All newly created clusters running Kubernetes 1.21+ will default to containerd instead of Docker
- Automated migration from Docker to containerd is currently not available, but is planned for one of the upcoming KubeOne releases
- Fix wrong legacy Docker version on RPM systems (#1191)
- Replace GoBetween load-balancer in vSphere Terraform example by keepalived (#1217)
- Fix DNS resolution issues for the Backups addon (#1179)
v1.2.0-alpha.0 - 2020-11-27
- Add support for Amazon Linux 2 (#1167, #1173, #1175, #1176)
- Support for Amazon Linux 2 is currently in alpha.
- Currently, all Kubernetes packages are installed by downloading binaries instead of using packages. Therefore, users are required to provide URLs using the new AssetConfiguration API to the CNI tarball, the Kubernetes Node binaries tarball (can be found in the Kubernetes CHANGELOG), and to the kubectl binary. Support for packages is planned for the future.
- Add the AssetConfiguration API (#1170, #1171)
- The AssetConfiguration API controls how assets are pulled.
- You can use it to specify custom images for containers or custom URLs for binaries.
- Currently-supported assets are CNI, Kubelet and Kubeadm (by providing a node binaries tarball), Kubectl, the control plane images, and the metrics-server image.
- Changing the binary assets (CNI, Kubelet, Kubeadm and Kubectl) currently works only on Amazon Linux 2. Changing the image assets works on all supported operating systems.
- Add `Annotations` field to the `ProviderSpec` API used to add annotations to MachineDeployment objects (#1174)
- Support for defining Static Worker nodes in Terraform (#1166)
- Add scrape Prometheus headless service for NodeLocalDNS (#1165)
- Stop Kubelet and reload systemd when removing binaries on CoreOS/Flatcar (#1176)
- Update Calico CNI to v3.16.5 (#1163)
v1.1.0 - 2020-11-13
Changelog since v1.0.5.
- [Breaking] Use Ubuntu 20.04 (Focal) in the example Hetzner Terraform config (#1102)
  - It's highly recommended to bind the image by setting `var.image` to the image you're currently using to prevent the instances from being recreated the next time you run Terraform!
- Implement the OverwriteRegistry functionality (#1145)
  - This PR adds a new top-level API field `registryConfiguration` which controls how images used for components deployed by KubeOne and kubeadm are pulled from an image registry.
  - The `registryConfiguration.overwriteRegistry` field specifies a custom Docker registry to be used instead of the default one.
  - The `registryConfiguration.insecureRegistry` field configures Docker to consider the registry specified in `registryConfiguration.overwriteRegistry` as an insecure registry.
  - For example, if `registryConfiguration.overwriteRegistry` is set to `127.0.0.1:5000`, an image called `k8s.gcr.io/kube-apiserver:v1.19.3` would become `127.0.0.1:5000/kube-apiserver:v1.19.3`.
  - Setting `registryConfiguration.overwriteRegistry` applies to all images deployed by KubeOne and kubeadm, including addons deployed by KubeOne.
  - Setting `registryConfiguration.overwriteRegistry` applies to worker nodes managed by machine-controller and KubeOne as well.
  - You can run `kubeone config print -f` for more details regarding the RegistryConfiguration API.
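  - A minimal sketch of the resulting configuration, using the example registry mentioned above:

    ```yaml
    # Fragment of a KubeOneCluster manifest
    registryConfiguration:
      overwriteRegistry: "127.0.0.1:5000"
      insecureRegistry: true   # only if the registry is not served over trusted TLS
    ```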
- Add external cloud controller manager support for VMware vSphere clusters (#1159)
- Add the cluster-autoscaler addon (#1103)
- Explicitly restart Kubelet when upgrading clusters running on Ubuntu (#1098)
- Merge CloudProvider structs instead of overriding them when the cloud provider is defined via Terraform (#1108)
- Update Flannel to v0.13.0 (#1135)
- Update WeaveNet to v2.7.0 (#1153)
- Update Hetzner Cloud Controller Manager (CCM) to v1.8.1 (#1149)
- This CCM release includes support for external LoadBalancers backed by Hetzner LoadBalancers
- [Breaking] Use Ubuntu 20.04 (Focal) in the example Hetzner Terraform config (#1102)
  - It's highly recommended to bind the image by setting `var.image` to the image you're currently using to prevent the instances from being recreated the next time you run Terraform!
- Ensure the example Hetzner Terraform config supports both Terraform v0.12 and v0.13 (#1102)
- Update Azure example Terraform config to work with the latest versions of the Azure provider (#1059)
- Use Hetzner Load Balancers instead of GoBetween in the example Hetzner Terraform config (#1066)
v1.1.0-rc.0 - 2020-10-27
Changelog since v1.0.5.
- [Breaking] Use Ubuntu 20.04 (Focal) in the example Hetzner Terraform config (#1102)
- It's highly recommended to bind the image by setting
var.image
to the image you're currently using to prevent the instances from being recreated the next time you run Terraform!
- It's highly recommended to bind the image by setting
- Implement the OverwriteRegistry functionality (#1145)
  - This PR adds a new top-level API field `registryConfiguration` which controls how images used for components deployed by KubeOne and kubeadm are pulled from an image registry.
  - The `registryConfiguration.overwriteRegistry` field can be used to specify a custom Docker registry to be used instead of the default one.
  - For example, if `registryConfiguration.overwriteRegistry` is set to `127.0.0.1:5000`, an image called `k8s.gcr.io/kube-apiserver:v1.19.3` would become `127.0.0.1:5000/kube-apiserver:v1.19.3`.
  - Setting `registryConfiguration.overwriteRegistry` applies to all images deployed by KubeOne and kubeadm, including addons deployed by KubeOne.
  - You can run `kubeone config print -f` for more details regarding the RegistryConfiguration API.
- Add the cluster-autoscaler addon (#1103)
- Explicitly restart Kubelet when upgrading clusters running on Ubuntu (#1098)
- Merge CloudProvider structs instead of overriding them when the cloud provider is defined via Terraform (#1108)
- Update Flannel to v0.13.0 (#1135)
- Update Hetzner Cloud Controller Manager (CCM) to v1.7.0 (#1068)
- This CCM release includes support for external LoadBalancers backed by Hetzner LoadBalancers
- [Breaking] Use Ubuntu 20.04 (Focal) in the example Hetzner Terraform config (#1102)
  - It's highly recommended to bind the image by setting `var.image` to the image you're currently using to prevent the instances from being recreated the next time you run Terraform!
- Ensure the example Hetzner Terraform config supports both Terraform v0.12 and v0.13 (#1102)
- Update Azure example Terraform config to work with the latest versions of the Azure provider (#1059)
- Use Hetzner Load Balancers instead of GoBetween in the example Hetzner Terraform config (#1066)
v1.0.5 - 2020-10-19
- Update machine-controller to v1.19.0 (#1141)
- This machine-controller release uses the Hyperkube Kubelet image for Flatcar worker nodes running Kubernetes 1.18, as the Poseidon Kubelet image repository doesn't publish 1.18 images any longer. This change ensures that you can provision or upgrade to Kubernetes 1.18.8+ on Flatcar.
v1.0.4 - 2020-10-16
- KubeOne now creates a dedicated secret with vSphere credentials for kube-controller-manager and vSphere Cloud Controller Manager (CCM). This is required because those components require the secret to adhere to the expected format.
- The new secret is called `vsphere-ccm-credentials` and is deployed in the `kube-system` namespace.
- This fix ensures that you can use all vSphere provider features, for example, volumes and having cloud provider metadata/labels applied on each node.
- If you're upgrading an existing vSphere cluster, you should take the following steps:
- Upgrade the cluster without changing the cloud-config. Upgrading the cluster creates a new Secret for kube-controller-manager and vSphere CCM.
- Change the cloud-config in your KubeOne Configuration Manifest to refer to the new secret.
- Upgrade the cluster again to apply the new cloud-config, or change it manually on each control plane node and restart kube-controller-manager and vSphere CCM (if deployed).
- Note: the cluster can be force-upgraded without changing the Kubernetes version. To do that, you can use the `--force-upgrade` flag with `kubeone apply`, or the `--force` flag with `kubeone upgrade`.
- The example Terraform config for vSphere now has the `disk.enableUUID` option enabled. This change ensures that you can mount volumes on the control plane nodes.
  - WARNING: If you're applying the latest Terraform configs on an existing infrastructure/cluster, an in-place upgrade of control plane nodes is required. This means that all control plane nodes will be restarted, which causes downtime until all instances come up again.
- Don't stop Kubelet when upgrading Kubeadm on Flatcar (#1099)
- Create a dedicated secret with vSphere credentials for kube-controller-manager and vSphere Cloud Controller Manager (CCM) (#1128)
- Enable `disk.enableUUID` option in Terraform example configs for vSphere (#1130)
v1.0.3 - 2020-09-28
- This release includes a fix for Kubernetes 1.18.9 clusters failing to provision due to the unmet kubernetes-cni dependency.
- This release includes a fix for CentOS and RHEL clusters failing to provision due to missing Docker versions from the CentOS/RHEL 8 Docker repo.
- Add the `image` field for the Hetzner worker nodes (#1105)
- Use CentOS 7 Docker repo for both CentOS/RHEL 7 and 8 clusters (#1111)
v1.0.2 - 2020-09-02
- We do not recommend upgrading to Kubernetes 1.19.0 due to an upstream bug in kubeadm affecting Highly-Available clusters. The bug will be fixed in 1.19.1, which is scheduled for Wednesday, September 9th. Until 1.19.1 is released, we highly recommend staying on 1.18.
- Update machine-controller to v1.17.1 (#1086)
- This machine-controller release uses Docker 19.03.12 instead of Docker 18.06 for worker nodes running Kubernetes 1.17 and newer.
- Due to an upstream issue, pod metrics are not available on worker nodes running Kubernetes 1.19 with Docker 18.06.
- If you experience any issues with pod metrics on worker nodes running Kubernetes 1.19, provisioned with an earlier machine-controller version, you might have to update machine-controller to v1.17.1 and then rotate the affected worker nodes.
v1.0.1 - 2020-08-31
- We do not recommend upgrading to Kubernetes 1.19.0 due to an upstream bug in kubeadm affecting Highly-Available clusters. The bug will be fixed in 1.19.1, which is scheduled for Wednesday, September 9th. Until 1.19.1 is released, we highly recommend staying on 1.18.
- Include cloud-config in the generated KubeOne configuration manifest when it's required by validation (#1062)
- Properly apply labels when upgrading components. This fixes the issue with upgrade failures on clusters created with KubeOne v1.0.0-rc.0 and earlier (#1078)
- Fix race condition between kube-proxy and node-local-dns (#1058)
v1.0.0 - 2020-08-18
Changelog since v0.11.2. For changelog since v1.0.0-rc.1, please check the release notes
-
Upgrading to this release is highly recommended as v0.11 release doesn't support Kubernetes versions 1.16.11/1.17.7/1.18.4 and newer. Kubernetes versions older than 1.16.11/1.17.7/1.18.4 are affected by two CVEs and therefore it's strongly advised to use 1.16.11/1.17.7/1.18.4 or newer.
-
KubeOne now uses vanity domain `k8c.io/kubeone`.
  - The `go get` command to get KubeOne is now `GO111MODULE=on go get k8c.io/kubeone`.
-
The `kubeone` AUR package has been moved to the official Arch Linux repositories. The AUR package has been removed in favor of the official one.
This release introduces the new KubeOneCluster v1beta1 API. The v1alpha1 API has been deprecated.
- It remains possible to use both APIs with all
kubeone
commands - The v1alpha1 manifest can be converted to the v1beta1 manifest using the
kubeone config migrate
command - All example configurations have been updated to the v1beta1 API
- More information about migrating to the new API and what has been changed can be found in the API migration document
- It remains possible to use both APIs with all
-
Default MTU for the Canal CNI depending on the provider
- AWS - 8951 (9001 AWS Jumbo Frame - 50 VXLAN bytes)
- GCE - 1410 (GCE specific 1460 bytes - 50 VXLAN bytes)
- Hetzner - 1400 (Hetzner specific 1450 bytes - 50 VXLAN bytes)
- OpenStack - 1400 (OpenStack specific 1450 bytes - 50 VXLAN bytes)
- Default - 1450
- If you're using KubeOneCluster v1alpha1 API, the default MTU is 1450 regardless of the provider
-
RHSMOfflineToken has been removed from the CloudProviderSpec. This and other relevant fields are now located in the OperatingSystemSpec
- The KubeOneCluster manifest (config file) is now provided using the `--manifest` flag, such as `kubeone install --manifest config.yaml`. Providing it as an argument will result in an error
- The paths to the config files for the PodNodeSelector feature (`.features.config.configFilePath`) and the StaticAuditLog feature (`.features.staticAuditLog.policyFilePath`) are now relative to the manifest path instead of to the working directory. This change might be breaking for some users.
- It's now possible to install Kubernetes 1.18.6 and 1.17.9 on CentOS 7; however, only the Canal CNI is known to work properly. We are aware that DNS and networking problems may still be present even with the latest versions. It remains impossible to install older versions of Kubernetes on CentOS 7.
- Example Terraform configs for AWS now use Ubuntu 20.04 (Focal) instead of Ubuntu 18.04
- It's highly recommended to bind the AMI by setting `var.ami` to the AMI you're currently using to prevent the instances from being recreated the next time you run Terraform!
- We do not recommend upgrading to Kubernetes 1.19.0 due to an upstream bug in kubeadm affecting highly available clusters. The bug will be fixed in 1.19.1, which is scheduled for Wednesday, September 9th. Until 1.19.1 is released, we highly recommend staying on 1.18.
- It remains impossible to provision Kubernetes older than 1.18.6/1.17.9 on CentOS 7. CentOS 8 and RHEL are unaffected.
- Upgrading a cluster created with KubeOne v1.0.0-rc.0 or earlier fails to update components deployed by KubeOne (e.g. CNI, machine-controller). Please upgrade to KubeOne v1.0.1 or newer if you have clusters created with v1.0.0-rc.0 or earlier.
- Implement the `kubeone apply` command (a usage sketch for the new commands follows at the end of this release's notes)
- The apply command is used to reconcile (install, upgrade, and repair) clusters
- More details about how to use the apply command can be found in the Cluster reconciliation (apply) document
- Implement the `kubeone config machinedeployments` command (#966)
- The new command is used to generate a YAML manifest containing all MachineDeployment objects defined in the KubeOne configuration manifest and Terraform output
- The generated manifest can be used with kubectl if you want to create and modify MachineDeployments once the cluster is created
- Add the `kubeone proxy` command (#1035)
- The `kubeone proxy` command launches a local HTTPS-capable proxy (CONNECT method) to tunnel HTTPS requests via SSH. This is especially useful when the Kubernetes API endpoint is not accessible from the public internet.
- Add the KubeOneCluster v1beta1 API (#894)
- Implemented automated conversion between v1alpha1 and v1beta1 APIs. It remains possible to use all `kubeone` commands with both v1alpha1 and v1beta1 manifests, however, migration to the v1beta1 manifest is recommended
- Implement the Terraform integration for the v1beta1 API. Currently, the Terraform integration output format is the same for both APIs, but that might change in the future
- The `kubeone config migrate` command has been refactored to migrate v1alpha1 to v1beta1 manifests. The manifest is now provided using the `--manifest` flag instead of providing it as an argument. It's not possible to convert pre-v0.6.0 manifests to v1alpha1 anymore
- The example configurations are updated to the v1beta1 API
- Drop the leading 'v' in the Kubernetes version if it's provided. This fixes a bug causing provisioning to fail if the Kubernetes version starts with 'v'
- Automatic cluster repairs (#888)
- Detect and delete broken etcd members
- Detect and delete outdated corev1.Node objects
- Add ability to provision static worker nodes (#834)
- Check out the documentation to learn more about static worker nodes
- Add ability to skip cluster provisioning when running the `install` command using the `--no-init` flag (#871)
- Add support for Kubernetes 1.16.11, 1.17.7, 1.18.4 releases (#925)
- Add support for Ubuntu 20.04 (Focal) (#1005)
- Add support for CentOS 8 (#981)
- Add RHEL support (#918)
- Add support for Flatcar Linux (#879)
- Support for vSphere resource pools (#883)
- Support for Azure AZs (#883)
- Support for Flexvolumes on CoreOS and Flatcar (#885)
- Add the `ImagePlan` field to Azure Terraform integration (#947)
- Add support for the PodNodeSelector admission controller (#920)
- Add the `PodPresets` feature (#837)
- Add support for OpenStack external cloud controller manager (CCM) (#820)
- Add ability to change MTU for the Canal CNI using the `.clusterNetwork.cni.canal.mtu` field (requires KubeOneCluster v1beta1 API) (#1005)
- Add the Calico VXLAN addon (#972)
- More information about how to use this addon can be found on the docs website
- Add ability to use an external CNI plugin (#862)
- [Breaking] Replace positional argument for the config file with the global `--manifest` flag (#880)
- The default value for the `--manifest` flag is `kubeone.yaml`
- [Breaking] Make paths to the config file for PodNodeSelector and StaticAuditLog features relative to the manifest path (#920)
- KubeOne now uses the vanity domain `k8c.io/kubeone` (#1008)
- The `go get` command to get KubeOne is now `GO111MODULE=on go get k8c.io/kubeone`.
- The `kubeone` AUR package has been moved to the official Arch Linux repositories. The AUR package has been removed in favor of the official one (#971)
- The default MTU for the Canal CNI now depends on the provider (#1005)
- AWS - 8951 (9001 AWS Jumbo Frame - 50 VXLAN bytes)
- GCE - 1410 (GCE specific 1460 bytes - 50 VXLAN bytes)
- Hetzner - 1400 (Hetzner specific 1450 bytes - 50 VXLAN bytes)
- OpenStack - 1400 (OpenStack specific 1450 bytes - 50 VXLAN bytes)
- Default - 1450
- If you're using the KubeOneCluster v1alpha1 API, the default MTU is 1450 regardless of the provider (#1016)
- Label all components deployed by KubeOne with the `kubeone.io/component: <component-name>` label (#1005)
- Increase default number of task retries to 10 (#1020)
- Use the proper package revision for Kubernetes packages (#933)
- Hold the `docker-ce-cli` package on Ubuntu (#902)
- Hold the `docker-ce`, `docker-ce-cli`, and all Kubernetes packages on CentOS (#902)
- Ignore etcd data if it is present on the control plane nodes (#874)
- This allows clusters to be restored from a backup
- machine-controller and machine-controller-webhook are bound to the control plane nodes (#832)
- KubeOne is now built using Go 1.15 (#1048)
- Verify that `crd.projectcalico.org` CRDs are established before proceeding with the Canal CNI installation (#994)
- Add NodeRegistration object to the kubeadm v1beta2 JoinConfiguration for static worker nodes. Fix the issue with nodes not joining a cluster on AWS (#969)
- Unconditionally renew certificates when upgrading the cluster. Due to an upstream bug, kubeadm wasn't automatically renewing certificates for clusters running Kubernetes versions older than v1.17 (#990)
- Apply only addons with `.yaml`, `.yml`, and `.json` extensions (#873)
- Install `curl` before configuring repositories on Ubuntu instances (#945)
- Fixes the cluster provisioning for instances that don't have `curl` installed
- Stop copying kubeconfig to the home directory on control plane instances (#936)
- Fix the cluster provisioning issues caused by `docker-ce-cli` version mismatch (#896)
- Remove hold from the `docker-ce-cli` package on upgrade (#941)
- This ensures clusters created with KubeOne v0.11 can be upgraded using KubeOne v1.0.0
- This fixes the issue with upgrading CoreOS/Flatcar clusters
- Force restart Kubelet on CentOS on upgrade (#988)
- Fix CoreOS host architecture detection (#882)
- Fix CoreOS/Flatcar cluster provisioning/upgrading: use the correct URL for the CNI plugins (#929)
- Fix the Kubelet service unit for CoreOS/Flatcar (#908, #909)
- Fix the CoreOS install and upgrade scripts (#904)
- Fix CoreOS/Flatcar provisioning issues for Kubernetes 1.18 (#895)
- Fix Kubelet version detection on Flatcar (#1032)
- Fix the gobetween script failing to install the `tar` package (#963)
- Update machine-controller to v1.16.1 (#1043)
- Update Canal CNI to v3.15.1 (#1005)
- Update NodeLocalDNSCache to v1.15.12 (#872)
- Update Docker versions. Minimal Docker version is 18.09.9 for clusters older than v1.17 and 19.03.12 for clusters running v1.17 and newer (#1002)
- This fixes the issue preventing users from upgrading Kubernetes clusters created with KubeOne v0.11 or earlier
- Update Packet Cloud Controller Manager (CCM) to v1.0.0 (#884)
- [Breaking] Use Ubuntu 20.04 (Focal) in the example Terraform scripts for AWS (#1001)
- It's highly recommended to bind the AMI by setting `var.ami` to the AMI you're currently using to prevent the instances from being recreated the next time you run Terraform!
- Remove RHSMOfflineToken from the CloudProviderSpec (#883)
- RHSM settings are now located in the OperatingSystemSpec
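As a rough usage sketch of the commands introduced in this release (the file names `kubeone.yaml` and `tf.json` are placeholders, and the exact flags may differ between versions; check `kubeone <command> --help` and the linked documents):

```bash
# Reconcile (install, upgrade, or repair) the cluster described by the manifest and Terraform output
kubeone apply --manifest kubeone.yaml --tfjson tf.json

# Generate the MachineDeployment manifests defined by the config and Terraform output
kubeone config machinedeployments --manifest kubeone.yaml --tfjson tf.json > machinedeployments.yaml

# Start a local CONNECT proxy that tunnels HTTPS requests to the cluster over SSH
kubeone proxy --manifest kubeone.yaml --tfjson tf.json
```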
v1.0.0-rc.1 - 2020-08-06
- KubeOne now uses the vanity domain `k8c.io/kubeone` (#1008)
- The `go get` command to get KubeOne is now `GO111MODULE=on go get k8c.io/kubeone`.
- You may be required to specify a tag/branch name until v1.0.0 is released, for example: `GO111MODULE=on go get k8c.io/[email protected]`.
- [Breaking] Use Ubuntu 20.04 (Focal) in the example Terraform scripts for AWS (#1001)
- It's highly recommended to bind the AMI by setting `var.ami` to the AMI you're currently using to prevent the instances from being recreated the next time you run Terraform!
- Add ability to change MTU for the Canal CNI using the `.clusterNetwork.cni.canal.mtu` field (requires KubeOneCluster v1beta1 API) (#1005); a sketch follows at the end of this release's notes
- Add support for Ubuntu 20.04 (Focal) (#1005)
- KubeOne now uses the vanity domain `k8c.io/kubeone` (#1008)
- The `go get` command to get KubeOne is now `GO111MODULE=on go get k8c.io/kubeone`.
- You may be required to specify a tag/branch name until v1.0.0 is released, for example: `GO111MODULE=on go get k8c.io/[email protected]`.
- The default MTU for the Canal CNI now depends on the provider (#1005)
- AWS - 8951 (9001 AWS Jumbo Frame - 50 VXLAN bytes)
- GCE - 1410 (GCE specific 1460 bytes - 50 VXLAN bytes)
- Hetzner - 1400 (Hetzner specific 1450 bytes - 50 VXLAN bytes)
- OpenStack - 1400 (OpenStack specific 1450 bytes - 50 VXLAN bytes)
- Default - 1450
- If you're using the KubeOneCluster v1alpha1 API, the default MTU is 1450 regardless of the provider (#1016)
- Label all components deployed by KubeOne with the `kubeone.io/component: <component-name>` label (#1005)
- Increase default number of task retries to 10 (#1020)
- Update machine-controller to v1.16.0 (#1017)
- Update Canal CNI to v3.15.1 (#1005)
- [Breaking] Use Ubuntu 20.04 (Focal) in the example Terraform scripts for AWS (#1001)
- It's highly recommended to bind the AMI by setting `var.ami` to the AMI you're currently using to prevent the instances from being recreated the next time you run Terraform!
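For reference, a minimal and deliberately incomplete sketch of setting the new Canal MTU field in a v1beta1 manifest (all values are placeholders; merge the `clusterNetwork` block into your full manifest):

```bash
# Write a partial KubeOneCluster v1beta1 manifest that overrides the Canal MTU
cat <<'EOF' > kubeone.yaml
apiVersion: kubeone.io/v1beta1
kind: KubeOneCluster
versions:
  kubernetes: '1.18.6'   # placeholder version
clusterNetwork:
  cni:
    canal:
      mtu: 1400          # pick the value matching your provider
EOF
```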
v1.0.0-rc.0 - 2020-07-27
- This is the first release candidate for the KubeOne v1.0.0 release! We encourage everyone to test it and let us know if you have any questions or if you find any issues or bugs. You can create an issue on GitHub, reach out to us on our forums, or on the `#kubeone` channel on Kubernetes Slack.
- It's recommended to use this release instead of v0.11, as v0.11 doesn't support the latest Kubernetes patch releases. Older Kubernetes releases are affected by two CVEs and therefore it's strongly advised to use 1.16.11/1.17.7/1.18.4 or newer.
- Verify that `crd.projectcalico.org` CRDs are established before proceeding with the Canal CNI installation (#994)
- Update Docker versions. Minimal Docker version is 18.09.9 for clusters older than v1.17 and 19.03.12 for clusters running v1.17 and newer (#1002)
- This fixes the issue preventing users from upgrading Kubernetes clusters created with KubeOne v0.11 or earlier
v1.0.0-beta.3 - 2020-07-22
- It's recommended to use this release instead of v0.11, as v0.11 doesn't support the latest Kubernetes patch releases. Older Kubernetes releases are affected by two CVEs and therefore it's strongly advised to use 1.16.11/1.17.7/1.18.4 or newer.
- It's now possible to install Kubernetes 1.18.6/1.17.9/1.16.13 on CentOS 7, however, only Canal CNI is known to work properly. We are aware that the DNS and networking problems may still be present even with the latest versions. It remains impossible to install older versions of Kubernetes on CentOS 7.
- The `kubeone` AUR package has been moved to the official Arch Linux repositories. The AUR package has been removed in favor of the official one (#971).
- Implement the `kubeone apply` command
- The apply command is used to reconcile (install, upgrade, and repair) clusters
- The apply command is currently in beta, but we're encouraging everyone to test it and report any issues and bugs
- More details about how to use the apply command can be found in the Cluster reconciliation (apply) document
- Implement the `kubeone config machinedeployments` command (#966)
- The new command is used to generate a YAML manifest containing all MachineDeployment objects defined in the KubeOne configuration manifest and Terraform output
- The generated manifest can be used with kubectl if you want to create and modify MachineDeployments once the cluster is created
- Add support for CentOS 8 (#981)
- Add the Calico VXLAN addon (#972)
- More information about how to use this addon can be found on the docs website
- The `kubeone` AUR package has been moved to the official Arch Linux repositories. The AUR package has been removed in favor of the official one (#971)
- Add NodeRegistration object to the kubeadm v1beta2 JoinConfiguration for static worker nodes. Fix the issue with nodes not joining a cluster on AWS (#969)
- Unconditionally renew certificates when upgrading the cluster. Due to an upstream bug, kubeadm wasn't automatically renewing certificates for clusters running Kubernetes versions older than v1.17 (#990)
- Force restart Kubelet on CentOS on upgrade (#988)
- Fix the gobetween script failing to install the `tar` package (#963)
- Update machine-controller to v1.15.3 (#995)
- This release includes a fix for the machine-controller's NodeCSRApprover controller refusing to approve certificates for the GCP worker nodes
- This release includes support for CentOS 8 and RHEL 7 worker nodes
v1.0.0-beta.2 - 2020-07-03
- It's recommended to use this release instead of v0.11, as v0.11 doesn't support the latest Kubernetes patch releases. Older Kubernetes releases are affected by two CVEs and therefore it's strongly advised to use 1.16.11/1.17.7/1.18.4 or newer.
- See known issues for the v1.0.0-beta.1 release for more details.
- Add the `ImagePlan` field to Azure Terraform integration (#947)
- Install `curl` before configuring repositories on Ubuntu instances (#945)
- Fixes the cluster provisioning for instances that don't have `curl` installed
- Update machine-controller to v1.15.1 (#947)
v1.0.0-beta.1 - 2020-07-02
- It's recommended to use this release instead of v0.11, as v0.11 doesn't support the latest Kubernetes patch releases. Older Kubernetes releases are affected by two CVEs and therefore it's strongly advised to use 1.16.11/1.17.7/1.18.4 or newer.
- machine-controller is failing to sign Kubelet CertificateSigningRequests (CSRs) for worker nodes on GCP due to a missing hostname in the Machine object. We are currently working on a fix. In the meantime, you can sign the CSRs manually by following the instructions from the Kubernetes docs.
- It remains impossible to provision Kubernetes 1.16.10+ clusters on CentOS 7. CentOS 8 and RHEL are unaffected. We are investigating the root cause of the issue.
- Remove hold from the docker-ce-cli package on upgrade (#941)
- This ensures clusters created with KubeOne v0.11 can be upgraded using KubeOne v1.0.0-beta.1 release
v1.0.0-beta.0 - 2020-07-01
- It's recommended to use this release instead of v0.11, as v0.11 doesn't support the latest Kubernetes patch releases. Older Kubernetes releases are affected by two CVEs and therefore it's strongly advised to use 1.16.11/1.17.7/1.18.4 or newer.
- machine-controller is failing to sign Kubelet CertificateSigningRequests (CSRs) for worker nodes on GCP due to a missing hostname in the Machine object. We are currently working on a fix. In the meantime, you can sign the CSRs manually by following the instructions from the Kubernetes docs.
- It remains impossible to provision Kubernetes 1.16.10+ clusters on CentOS 7. CentOS 8 and RHEL are unaffected. We are investigating the root cause of the issue.
- Trying to upgrade a cluster created with KubeOne v0.11.2 or older results in an error due to KubeOne failing to upgrade the `docker-ce-cli` package. This issue has been fixed in the v1.0.0-beta.1 release.
- Re-introduce the support for the kubernetes-cni package and use the proper revision for Kubernetes packages (#933)
- This change ensures operators can use KubeOne to install the latest Kubernetes releases on CentOS/RHEL
- Stop copying kubeconfig to the home directory on control plane instances (#936)
- This fixes the issue with upgrading CoreOS/Flatcar clusters
- Update machine-controller to v1.14.3 (#937)
v1.0.0-alpha.6 - 2020-06-19
- Fix CoreOS/Flatcar cluster provisioning/upgrading: use the correct URL for the CNI plugins (#929)
v1.0.0-alpha.5 - 2020-06-18
- This release includes support for Kubernetes releases 1.16.11, 1.17.7, and 1.18.4. Those releases use the CNI v0.8.6, which includes a fix for the CVE-2020-10749. It's strongly advised to upgrade your clusters and rotate worker nodes as soon as possible.
- The paths to the config files for the PodNodeSelector feature (`.features.config.configFilePath`) and the StaticAuditLog feature (`.features.staticAuditLog.policyFilePath`) are now relative to the manifest path instead of to the working directory. This change might be breaking for some users.
- Currently it's impossible to install Kubernetes versions older than 1.16.11/1.17.7/1.18.4 on CentOS/RHEL, due to an issue with packages.
- Currently it's impossible to install Kubernetes on CentOS 7. The 1.16.11 release is not working due to an upstream issue with the kube-proxy component, while newer releases are having DNS problems which we are investigating.
- Currently it's impossible to install Kubernetes on CoreOS/Flatcar due to an incorrect URL to the CNI plugin. The fix has been already merged and will be included in the upcoming release.
- Add RHEL support (#918)
- Add support for the PodNodeSelector admission controller (#920)
- Add support for Kubernetes 1.16.11, 1.17.7, 1.18.4 releases (#925)
- BREAKING: Make paths to the config file for PodNodeSelector and StaticAuditLog features relative to the manifest path (#920)
- Update machine-controller to v1.14.2 (#918)
v1.0.0-alpha.4 - 2020-05-21
v1.0.0-alpha.3 - 2020-05-21
- In the v1.0.0-alpha.2 release, we didn't explicitly hold the `docker-ce-cli` package, which means it can be upgraded to a newer version. Upgrading the `docker-ce-cli` package would render the Kubernetes cluster unusable because of the version mismatch between the Docker daemon and the CLI.
- If you already provisioned a cluster using the v1.0.0-alpha.2 release, please hold the `docker-ce-cli` package to prevent it from being upgraded (see the sketch at the end of this release's notes):
- Ubuntu: run the following command over SSH on all control plane instances: `sudo apt-mark hold docker-ce-cli`
- CentOS: we've started using `yum versionlock` to handle package locks. The best way to set it up on your control plane instances is to run `kubeone upgrade -f` with the exact same config that's currently running.
- The CoreOS cluster provisioning has been broken in this release due to the incorrectly formatted Kubelet service file. This issue has been fixed in the v1.0.0-alpha.4 release.
- Hold the `docker-ce-cli` package on Ubuntu (#902)
- Hold the `docker-ce`, `docker-ce-cli`, and all Kubernetes packages on CentOS (#902)
- Fix the CoreOS install and upgrade scripts (#904)
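A small sketch of the Ubuntu workaround described above, to be run over SSH on each control plane instance (the `showhold` call is only there to verify the result):

```bash
# Prevent apt from upgrading the Docker CLI independently of the Docker daemon
sudo apt-mark hold docker-ce-cli
# Verify that the package is now on hold
apt-mark showhold
```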
v1.0.0-alpha.2 - 2020-05-20
- This alpha version fixes the provisioning failures caused by a `docker-ce-cli` version mismatch. The older alpha releases are not working anymore (#896)
- `machine-controller` must be updated to v1.14.0 or newer on existing clusters, otherwise newly created worker nodes will not work properly. `machine-controller` can be updated in one of the following ways:
- (Recommended) Run `kubeone upgrade -f` with the exact same config that's currently running
- Run `kubeone install` with the exact same config that's currently running
- Update the `machine-controller` and `machine-controller-webhook` deployments manually
- This release introduces the new KubeOneCluster v1beta1 API. The v1alpha1 API has been deprecated.
- It remains possible to use both APIs with all `kubeone` commands
- The v1alpha1 manifest can be converted to the v1beta1 manifest using the `kubeone config migrate` command
- All example configurations have been updated to the v1beta1 API
- The `docker-ce-cli` package is not put on hold after it's installed. Upgrading the `docker-ce-cli` package would render the Kubernetes cluster unusable because of the version mismatch between the Docker daemon and the CLI. It's highly recommended to upgrade to KubeOne v1.0.0-alpha.3.
- If you already provisioned a cluster using the v1.0.0-alpha.2 release, please hold the `docker-ce-cli` package to prevent it from being upgraded:
- Ubuntu: run the following command over SSH on all control plane instances: `sudo apt-mark hold docker-ce-cli`
- CentOS: we've started using `yum versionlock` to handle package locks. The best way to set it up on your control plane instances is to run `kubeone upgrade -f` with the exact same config that's currently running.
- The CoreOS cluster provisioning has been broken in this release due to the incorrectly formatted Kubelet service file. This issue has been fixed in the v1.0.0-alpha.4 release.
- Add the KubeOneCluster v1beta1 API (#894)
- Implemented automated conversion between v1alpha1 and v1beta1 APIs. It remains possible to use all `kubeone` commands with both v1alpha1 and v1beta1 manifests, however, migration to the v1beta1 manifest is recommended
- Implement the Terraform integration for the v1beta1 API. Currently, the Terraform integration output format is the same for both APIs, but that might change in the future
- The `kubeone config migrate` command has been refactored to migrate v1alpha1 to v1beta1 manifests. The manifest is now provided using the `--manifest` flag instead of providing it as an argument. It's not possible to convert pre-v0.6.0 manifests to v1alpha1 anymore
- The example configurations are updated to the v1beta1 API
- Drop the leading 'v' in the Kubernetes version if it's provided. This fixes a bug causing provisioning to fail if the Kubernetes version starts with 'v'
- Fix the cluster provisioning issues caused by `docker-ce-cli` version mismatch (#896)
- Fix CoreOS/Flatcar provisioning issues for Kubernetes 1.18 (#895)
- Update machine-controller to v1.14.0 (#899)
v1.0.0-alpha.1 - 2020-05-11
- The KubeOneCluster manifest (config file) is now provided using the `--manifest` flag, such as `kubeone install --manifest config.yaml`. Providing it as an argument will result in an error
- RHSMOfflineToken has been removed from the CloudProviderSpec. This and other relevant fields are now located in the OperatingSystemSpec
- Packet Cloud Controller Manager (CCM) has been updated to v1.0.0. This update fixes the issue preventing users from provisioning new Packet clusters
- Initial support for Flatcar Linux (#879)
- Currently, Flatcar Linux is supported only on AWS worker nodes
- Support for vSphere resource pools (#883)
- Support for Azure AZs (#883)
- Support for Flexvolumes on CoreOS and Flatcar (#885)
- Automatic cluster repairs (#888)
- Detect and delete broken etcd members
- Detect and delete outdated corev1.Node objects
- [Breaking] Replace positional argument for the config file with the global `--manifest` flag (#880)
- The default value for the `--manifest` flag is `kubeone.yaml`
- Ignore etcd data if it is present on the control plane nodes (#874)
- This allows clusters to be restored from a backup
- Fix CoreOS host architecture detection (#882)
- Update machine-controller to v1.13.1 (#885)
- Update Packet Cloud Controller Manager (CCM) to v1.0.0 (#884)
- Remove RHSMOfflineToken from the CloudProviderSpec (#883)
- RHSM settings are now located in the OperatingSystemSpec
v1.0.0-alpha.0 - 2020-04-22
- Add support for OpenStack external cloud controller manager (CCM) (#820)
- Add the `Untaint` API field to remove default taints from the control plane nodes (#823)
- Add the `PodPresets` feature (#837)
- Add ability to provision static worker nodes (#834)
- Add ability to use an external CNI plugin (#862)
- Add ability to skip cluster provisioning when running the `install` command using the `--no-init` flag (#871)
- machine-controller and machine-controller-webhook are bound to the control plane nodes (#832)
- Apply only addons with `.yaml`, `.yml`, and `.json` extensions (#873)
v0.11.2 - 2020-05-21
- This version fixes the provisioning failures caused by a `docker-ce-cli` version mismatch. The older releases are not working anymore (#907)
- `machine-controller` must be updated to v1.11.3 on existing clusters, otherwise newly created worker nodes will not work properly. `machine-controller` can be updated in one of the following ways:
- (Recommended) Run `kubeone upgrade -f` with the exact same config that's currently running
- Run `kubeone install` with the exact same config that's currently running
- Update the `machine-controller` and `machine-controller-webhook` deployments manually
- Bind `docker-ce-cli` to the same version as `docker-ce` (#907)
- Fix CoreOS install and upgrade scripts (#907)
- Update machine-controller to v1.11.3 (#907)
v0.11.1 - 2020-04-08
- Add support for Kubernetes 1.18 (#841)
- Ensure machine-controller CRDs are Established before deploying MachineDeployments (#824)
- Fix `leader_ip` parsing from Terraform (#819)
- Update machine-controller to v1.11.1 (#808)
- NodeCSRApprover controller is enabled by default to automatically approve CSRs for kubelet serving certificates
- machine-controller types are updated to include recently added fields
- Terraform example scripts are updated with the new fields
v0.11.0 - 2020-03-05
Changelog since v0.10.0. For changelog since v0.11.0-beta.3, please check the release notes
- Kubernetes 1.14 clusters are not supported as of this release because 1.14 isn't supported by the upstream anymore
- It remains possible and is advisable to upgrade 1.14 clusters to 1.15
- Currently, it also remains possible to provision 1.14 clusters, but that can be dropped at any time and it'll not be fixed if it stops working
- As of this release, it is not possible to upgrade 1.13 clusters to 1.14
- Please use an older version of KubeOne in the case you need to upgrade 1.13 clusters
- The AWS Terraform configuration has been refactored in a backward-incompatible way (#729)
- Terraform now handles setting up subnets
- All resources are tagged ensuring all cluster features offered by AWS CCM are supported
- The security of the setup has been increased
- Access to nodes and the Kubernetes API is now going over a bastion host
- The `aws-private` configuration has been removed
- Check out the new Terraform configuration for more details
- Add support for Kubernetes 1.17
- Fix cluster upgrade failures when upgrading from 1.16 to 1.17 (#764)
- Add support for ARM64 clusters (#783)
- Add ability to deploy Kubernetes manifests at provisioning time (KubeOne Addons) (#782)
- Add the `kubeone status` command, which checks the health of the cluster, the API server, and `etcd` (#734); a usage sketch follows at the end of this release's notes
- Add support for NodeLocalDNSCache (#704)
- Add ability to divert access to the Kubernetes API over SSH tunnel (#714)
- Add support for sourcing proxy settings from Terraform output (#698)
- Persist configured proxy in the system package managers (#749)
- [Breaking] The AWS Terraform configuration has been refactored (#729)
- The KubeOneCluster manifest is now parsed strictly (#802)
- The leader instance can be defined declaratively using the API (#790)
- Make vSphere Cloud Controller Manager read credentials from a Secret instead of from `cloud-config` (#724)
- Fix CentOS cluster provisioning (#770)
- Fix AWS shared credentials file handling (#806)
- Fix credentials handling if `.cloudProvider.Name` is `none` (#696)
- Fix upgrades not determining hostname correctly causing upgrades to fail (#708)
- Fix `kubeone reset` failing to reset the cluster (#727)
- Fix `configure-cloud-routes` bug for AWS causing `kube-controller-manager` to log warnings (#725)
- Disable validation of replicas count in the workers definition (#775)
- Proxy settings defined in the config have precedence over those defined in Terraform (#760)
- Update machine-controller to v1.9.0 (#774)
- Update Canal CNI to v3.10 (#718)
- Update metrics-server to v0.3.6 (#720)
- Update DigitalOcean Cloud Controller Manager to v0.1.21 (#722)
- Update Hetzner Cloud Controller Manager to v1.5.0 (#726)
- Remove ability to upgrade 1.13 clusters to 1.14 (#764)
- Removed FlexVolume support from the Canal CNI (#756)
- GCE clusters must be configured as Regional to work properly (#732)
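A possible invocation of the new `kubeone status` command, assuming the pre-v1.0 positional config argument and the global `--tfjson` flag (both file names are placeholders):

```bash
# Check the health of the cluster, the API server, and the etcd ring
kubeone status config.yaml --tfjson tf.json
```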
v0.11.0-beta.3 - 2019-12-20
- Kubernetes 1.14 clusters are not supported as of this release because 1.14 isn't supported by the upstream anymore
- It remains possible and is advisable to upgrade 1.14 clusters to 1.15
- Currently, it also remains possible to provision 1.14 clusters, but that can be dropped at any time and it'll not be fixed if it stops working
- As of this release, it is not possible to upgrade 1.13 clusters to 1.14
- Please use an older version of KubeOne in the case you need to upgrade 1.13 clusters
- Add support for Kubernetes 1.17
- Fix cluster upgrade failures when upgrading from 1.16 to 1.17 (#764)
- Remove ability to upgrade 1.13 clusters to 1.14 (#764)
v0.11.0-beta.2 - 2019-12-12
- Proxy settings defined in the config have precedence over those defined in Terraform (#760)
v0.11.0-beta.1 - 2019-12-10
- Persist configured proxy in the system package managers (#749)
- Fix `kubeone status` reporting wrong `etcd` status when the hostname is not provided by Terraform or config (#753)
- Removed FlexVolume support from the Canal CNI (#756)
v0.11.0-beta.0 - 2019-11-28
- The AWS Terraform configuration has been refactored in a backward-incompatible way (#729)
- Terraform now handles setting up subnets
- All resources are tagged ensuring all cluster features offered by AWS CCM are supported
- The security of the setup has been increased
- Access to nodes and the Kubernetes API is now going over a bastion host
- The `aws-private` configuration has been removed
- Check out the new Terraform configuration for more details
- Add the `kubeone status` command which checks the health of the cluster, API server and `etcd` (#734)
- Add support for NodeLocalDNSCache (#704)
- Add ability to divert access to the Kubernetes API over SSH tunnel (#714)
- Add support for sourcing proxy settings from Terraform output (#698)
- [Breaking] The AWS Terraform configuration has been refactored (#729)
- Make vSphere Cloud Controller Manager read credentials from a Secret instead of from `cloud-config` (#724)
- Fix credentials handling if `.cloudProvider.Name` is `none` (#696)
- Fix upgrades not determining hostname correctly causing upgrades to fail (#708)
- Fix `kubeone reset` failing to reset the cluster (#727)
- Fix `configure-cloud-routes` bug for AWS causing `kube-controller-manager` to log warnings (#725)
- Update machine-controller to v1.8.0 (#736)
- Update Canal CNI to v3.10 (#718)
- Update metrics-server to v0.3.6 (#720)
- Update DigitalOcean Cloud Controller Manager to v0.1.21 (#722)
- Update Hetzner Cloud Controller Manager to v1.5.0 (#726)
- GCE clusters must be configured as Regional to work properly (#732)
v0.10.0 - 2019-10-09
- Kubernetes 1.13 clusters are not supported as of this release because 1.13 isn't supported by the upstream anymore
- It remains possible to upgrade 1.13 clusters to 1.14 and is strongly advised
- Currently, it also remains possible to provision 1.13 clusters, but that can be dropped at any time and it'll not be fixed if it stops working
- KubeOne now uses Go modules! 🎉 (#550)
- This should not introduce any breaking changes
- If you're using `go get` to obtain KubeOne, you have to enable support for Go modules by setting the `GO111MODULE` environment variable to `on`
- You can obtain KubeOne v0.10.0 using the following `go get` command: `go get github.com/kubermatic/[email protected]`
- `kubectl scale` is not working with `kubectl` v1.15 due to an upstream issue. Please upgrade to `kubectl` v1.16 if you want to use the `kubectl scale` command.
- Add support for Kubernetes 1.16
- Add support for sourcing credentials and the `cloud-config` file from a YAML file (#623, #641)
- Add the `StaticAuditLog` feature used to configure the audit log backend (#631)
- Add the `SystemPackages` API field used to control configuration of repositories and packages (#670)
- Add support for managing clusters over a bastion host (#567, #651)
- Add support for specifying the OpenStack Tenant ID using the `OS_TENANT_ID` environment variable (#551)
- Add ability to configure static networking for worker nodes (#606)
- Add taints on control plane nodes by default (#564)
- Add ability to apply labels on the Node object using the `.workers.providerSpec.Labels` field (#677)
- Add ability to apply taints on the worker nodes using the `.workers.providerSpec.Taints` field (#678)
- Add an optional `rootDiskSizeGB` field to the worker spec for OpenStack (#549)
- Add an optional `nodeVolumeAttachLimit` field to the worker spec for OpenStack (#572)
- Add an optional `TrustDevicePath` field to the worker spec for OpenStack (#686)
- Add optional `BillingCycle` and `Tags` fields to the worker spec for Packet (#686)
- Add ability to use AWS spot instances for worker nodes using the `isSpotInstance` field (#686)
- Add support for Hetzner Private Networking (#596)
- Add `ShortNames` and `AdditionalPrinterColumns` for Cluster-API CRDs (#689)
- Add an example KubeOne Ansible playbook (#576)
- Fix `kubeone install` and `kubeone upgrade` generating the `v1beta1` instead of the `v1beta2` `kubeadm` configuration file for 1.15 and 1.16 clusters (#670)
- Fix cluster provisioning failures when the `DynamicAuditLog` feature is enabled (#630)
- Fix `kubeone reset` not retrying deleting worker nodes on error (#639)
- Fix `kubeone reset` not skipping deleting worker nodes if the `machine-controller` CRDs are not deployed (#683)
- Fix Terraform integration not respecting multiple workerset definitions from `output.tf` (#568)
- Fix `kubeone install` failing if Terraform output is not provided (#574)
- The Flannel CNI is forced to use an internal network if it's available (#598)
- Update `machine-controller` to v1.5.7 (#682)
- Update DigitalOcean Cloud Controller Manager (CCM) to v0.1.16 (#591)
- Write proxy configuration to the `/etc/environment` file (#687, #688)
- Fix proxy configuration file (`/etc/kubeone/proxy-env`) generation (#650)
- Fix `machine-controller` and `machine-controller-webhook` deployments not receiving the proxy configuration (#657)
- Fix GCE provisioning failure when using a longer cluster name with the example Terraform script (#607)
- Remove the `TemplateNetName` field from the vSphere workers spec (#624)
- Remove the old KubeOne configuration API (#626)
- This should not affect you unless you're using the pre-v0.6.0 configuration API manually
- The `kubeone config migrate` command is still available, but might be deleted at any time
v0.10.0-alpha.3 - 2019-08-22
- Update `machine-controller` to v1.5.3 (#633)
v0.10.0-alpha.2 - 2019-08-21
- Fix cluster provisioning failures when DynamicAuditLog feature is enabled (#630)
- Update `machine-controller` to v1.5.2 (#624)
- Remove the `TemplateNetName` field from the vSphere workers spec (#624)
- Remove the old KubeOne configuration API (#626)
v0.10.0-alpha.1 - 2019-08-16
- Add ability to configure static networking for worker nodes (#606)
- The Flannel CNI is forced to use an internal network if it's available (#598)
- Update `machine-controller` to v1.5.1 (#602)
- Update DigitalOcean Cloud Controller Manager (CCM) to v0.1.16 (#591)
v0.10.0-alpha.0 - 2019-07-17
- KubeOne now uses Go modules! 🎉 (#550)
- This should not introduce any breaking change
- If you're using `go get` to obtain KubeOne, you may have to enable support for Go modules by setting the `GO111MODULE` environment variable to `on`
- Add support for SSH over a bastion host (#567)
- Add an optional `rootDiskSizeGB` field to the worker spec for OpenStack (#549)
- Add an optional `nodeVolumeAttachLimit` field to the worker spec for OpenStack (#572)
- Add support for specifying the OpenStack Tenant ID using the `OS_TENANT_ID` environment variable (#551)
- Add an example KubeOne Ansible playbook (#576)
- Fix Terraform integration not respecting multiple workerset definitions from `output.tf` (#568)
- Fix `install` failing if Terraform output is not provided (#574)
- Update `machine-controller` to v1.4.2 (#572)
- Control plane nodes are now tainted by default (#564)
v0.9.2 - 2019-07-04
- Fix the CNI plugin URL for cluster upgrades on CoreOS (#554)
- Fix `kubelet` binary upgrade failure on CoreOS because of binary lock (#556)
v0.9.1 - 2019-07-03
- Fix `.ClusterNetwork.PodSubnet` not being respected when using the Weave-Net CNI plugin (#540)
- Fix `kubeadm` preflight check failure (`IsDockerSystemdCheck`) on Ubuntu and CoreOS by making Docker use the `systemd` cgroups driver (#536, #541)
- Fix `kubeadm` preflight check failure on CentOS due to the `kubelet` service not being enabled (#541)
v0.9.0 - 2019-07-02
- The Terraform integration now requires Terraform v0.12+
- Please see the official Upgrading to Terraform v0.12 document to find out how to update your Terraform scripts for v0.12
- The example Terraform scripts coming with KubeOne are already updated for v0.12
- KubeOne is not able to parse the output of `terraform output` generated with Terraform v0.11 or older anymore
- The Terraform output template (`output.tf`) has been changed and KubeOne is not able to parse the old template anymore. You can check the output template used by the example Terraform scripts as a reference
- Add support for Kubernetes 1.15 (#486)
- Add support for Microsoft Azure (#469)
- Add the `kubeone completion` command for generating the shell completion scripts for `bash` and `zsh` (#484); a short sketch follows at the end of this release's notes
- Add the `kubeone document` command for generating man pages and KubeOne documentation (#484)
- Add support for reading Terraform output directly from the directory (#495)
- Add missing fields to the workers specification API (#499)
- [BREAKING] KubeOne Terraform integration now uses Terraform v0.12+ (#466)
- [BREAKING] Change Terraform output template to conform with the KubeOneCluster API (#503)
- Fix Docker not starting properly for some providers on Ubuntu (#512)
- Fix `kubeone reset` failing if a MachineSet or Machine object has already been deleted, and include more details in the error messages (#508)
- Fix ability to read Terraform output from the standard input (`stdin`) (#479)
- Update `machine-controller` to v1.3.0 (#499)
- Update DigitalOcean Cloud Controller Manager to v0.1.15 (#516)
- Use Docker 18.09.7 when provisioning new clusters (#517)
- Configure proxy for `kubelet` on control plane nodes if proxy settings are provided (#496)
- Configure proxy on worker nodes if proxy settings are provided (#490)
- Make GoBetween Load balancer configuration script work on all operating systems and fix minor bugs (#494)
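A sketch of loading the generated completion script into the current shell, assuming the usual `kubeone completion <shell>` form (check `kubeone completion --help` for the exact syntax in your version):

```bash
# Load bash completion for kubeone into the current shell session
source <(kubeone completion bash)
```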
v0.8.0 - 2019-05-30
- Add support for VMware vSphere (#428)
v0.7.0 - 2019-05-28
- Add WeaveNet as an additional CNI plugin (#432)
- Add the `--remove-binaries` flag to the `kubeone reset` command that removes Kubernetes binaries (#450)
- Fix `kubeone reset` failing if no `MachineDeployment` or `Machine` objects exist (#450)
- Update `machine-controller` to v1.1.8 (#454)
v0.6.2 - 2019-05-13
- Fix a missing JSON tag on the `Name` field, so specifying `name` in lowercase in the manifest works as expected (#439)
- Fix failing to disable SELinux on CentOS if it's already disabled (#443)
- Fix setting permissions on the remote KubeOne directory when the `$USER` environment variable isn't set (#443)
v0.6.1 - 2019-05-09
- Provide the `--kubelet-preferred-address-types` flag to metrics-server, so it works on all providers (#424)
v0.6.0 - 2019-05-08
We're excited to announce that as of this release KubeOne is in beta! We have the new backwards compatibility policy going in effect as of this release.
Check out the documentation for this release to find out how to get started with KubeOne.
- This release introduces the new KubeOneCluster API. The new API is supposed to improve the user experience and bring many new possibilities, like API versioning.
- Old KubeOne configuration manifests will not work as of this release!
- To continue using KubeOne, you need to migrate your existing manifests to the new KubeOneCluster API. Follow the migration guidelines to find out how to migrate.
- [BREAKING] Implement and migrate to the KubeOneCluster API (#343, #353, #360, #379, #390)
- Implement the `config migrate` command for migrating old configuration manifests to KubeOneCluster manifests (#408)
- Implement the `config print` command for printing an example KubeOneCluster manifest (#412, #415); a short sketch follows at the end of this release's notes
- Enable scaling subresource for MachineDeployments (#334)
- Deploy `metrics-server` by default (#338, #351)
- Deploy external cloud controller manager for DigitalOcean, Hetzner, and Packet (#364)
- Implement Terraform integration for Hetzner (#331)
- Add support for OpenIDConnect configuration (#344)
- Add support for Packet provider (#384)
- Patch CoreDNS deployment to work with external cloud controller manager taints (#362)
- Fix a typo in vSphere CloudProviderName (#339)
- Expose the `ssh_username` variable in the Terraform output (#350)
- Pass the `-external-cloud-provider` flag to `machine-controller` when external cloud provider is enabled (#361)
- Update `machine-controller` to v1.1.5 (#378)
- Don't wait for `machine-controller` if it's not deployed (#392)
- Deploy instance with a load balancer on OpenStack. General improvements to OpenStack support (#401)
- Parse all parameters from Terraform output for DigitalOcean (#370)
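A quick sketch of the new config commands (the redirect target is a placeholder; `--help` is used for `config migrate` because its exact flags depend on the KubeOne version):

```bash
# Print an example KubeOneCluster manifest to use as a starting point
kubeone config print > kubeone.yaml

# Show the options for migrating an old (pre-KubeOneCluster) manifest
kubeone config migrate --help
```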
v0.5.0 - 2019-04-03
- Add support for Kubernetes 1.14
- Add support for upgrading from Kubernetes 1.13 to 1.14
- Add support for Google Compute Engine (#307, #317)
- Update machine-controller when upgrading the cluster (#304)
- Add timeout after upgrading each node to let nodes settle down (#316, #319)
- Deploy machine-controller v1.1.2 on the new clusters (#317)
- Creating MachineDeployments and upgrading nodes tasks are repeated three times on failure (#328)
- Allow upgrading to the same Kubernetes version (#315)
- Allow the custom VPC to be used with the example AWS Terraform scripts and switch to the T3 instances (#306)
v0.4.0 - 2019-03-21
- In #264 the environment variable names were changed to match the names used by Terraform. In order for KubeOne to correctly fetch credentials you need to use the following variables (a sketch follows at the end of this release's notes):
- `DIGITALOCEAN_TOKEN` instead of `DO_TOKEN`
- `OS_USERNAME` instead of `OS_USER_NAME`
- `HCLOUD_TOKEN` instead of `HZ_TOKEN`
- The `cloud-config` file is now required for OpenStack clusters. Validation will fail if the `cloud-config` file is not provided. See the OpenStack quickstart for details on how the `cloud-config` file should look.
- Ark integration is removed from KubeOne in #265. You need to remove the `backups` section from the existing configuration files. If you wish to deploy Ark on new clusters you have to do it manually.
- Add support for enabling DynamicAuditing (#261)
- Deploy machine-controller v1.0.7 on the new clusters (#247)
- Automatically download the Kubeconfig file after install (#248)
- Default `--destroy-workers` to `true` for the `kubeone reset` command (#252)
- Improve OpenStack Terraform scripts (#253)
- The `cloud-config` file for OpenStack is required (#253)
- The environment variable names are changed to match the names used by Terraform (#264)
- Option for deploying Ark is removed (#265)
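A sketch of exporting the renamed credential variables before running KubeOne (the values are placeholders):

```bash
# Use the Terraform-style variable names introduced in #264
export DIGITALOCEAN_TOKEN='<your-digitalocean-token>'
export OS_USERNAME='<your-openstack-username>'
export HCLOUD_TOKEN='<your-hetzner-cloud-token>'
```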
v0.3.0 - 2019-03-08
- If you're using `provider.Name` `external` to configure control plane nodes to work with external CCM you need to:
- Set `provider.Name` to the name of the cloud provider you're using or to `none` (see `config.yaml.dist` for supported values),
- Set `provider.External` to `true`.
- Note: using external CCM is not supported and is currently not working as expected.
- If you're using `provider.Name` `none`:
- If you're deploying `machine-controller` you need to set `MachineController.Provider` to the name of the cloud provider you're using for worker nodes,
- Otherwise you need to set `MachineController.Deploy` to `false`, so validation passes.
- Add support for cluster upgrades (#206, #211, #214)
- Add support for enabling PodSecurityPolicy (#218)
- Add the `kubeone version` command (#221)
- Add initial support for OpenStack worker nodes (#209)
- Fix `none` provider bugs and ensure correct behavior (#227)
- External Cloud Controller Manager is now enabled by setting `Provider.External` to `true` instead of setting `Provider.Name` to `external` (#230, #237)
- Set correct Internal address on nodes with 2+ network interfaces (#240)
- Deploy `machine-controller` on uninitialized nodes (#241)
v0.2.0 - 2019-02-15
- Add support for workers creation on DigitalOcean (#153)
- Add support for CentOS (#154)
- Add support for external cloud controller managers (#161)
- Add Terraform scripts for OpenStack (#170)
- Add support for configuring proxy for Docker daemon, `apt-get`/`yum` and `curl` (#182)
- AWS credentials can be obtained from the profile file (#156)
- Fix Etcd init on clusters with more than one network interface (#160)
- Allow `provider.Name` to be specified for providers without external Cloud Controller Manager (#161)
- Refactor KubeOne CLI (#177)
- Global flags are parsed correctly regardless of their placement in the command
- `--verbose` and `--tfjson` are global flags
- The main package is moved to the root of the project (`github.com/kubermatic/kubeone`)
- Deploy `machine-controller` v1.0.4 instead of v0.10.0 to fix CVE-2019-5736 (#191)
- Deploy Docker 18.09.2 on Ubuntu to fix CVE-2019-5736 (#193)