diff --git a/website/content/v1.8/advanced/proprietary-kernel-modules.md b/website/content/v1.8/advanced/proprietary-kernel-modules.md index 21e3dc308c5..9a42ef4d7f2 100644 --- a/website/content/v1.8/advanced/proprietary-kernel-modules.md +++ b/website/content/v1.8/advanced/proprietary-kernel-modules.md @@ -61,5 +61,5 @@ aliases: 3. Deploying to your cluster ```bash - talosctl upgrade --image ghcr.io/your-username/talos-installer: --preserve=true + talosctl upgrade --image ghcr.io/your-username/talos-installer: ``` diff --git a/website/content/v1.8/kubernetes-guides/configuration/ceph-with-rook.md b/website/content/v1.8/kubernetes-guides/configuration/ceph-with-rook.md index f475ce30d41..49bce7271c6 100644 --- a/website/content/v1.8/kubernetes-guides/configuration/ceph-with-rook.md +++ b/website/content/v1.8/kubernetes-guides/configuration/ceph-with-rook.md @@ -103,7 +103,6 @@ ceph-filesystem rook-ceph.cephfs.csi.ceph.com Delete Immediate ## Talos Linux Considerations It is important to note that a Rook Ceph cluster saves cluster information directly onto the node (by default `dataDirHostPath` is set to `/var/lib/rook`). -If running only a single `mon` instance, cluster management is little bit more involved, as any time a Talos Linux node is reconfigured or upgraded, the partition that stores the `/var` [file system]({{< relref "../../learn-more/architecture#the-file-system" >}}) is wiped, but the `--preserve` option of [`talosctl upgrade`]({{< relref "../../reference/cli#talosctl-upgrade" >}}) will ensure that doesn't happen. By default, Rook configures Ceph to have 3 `mon` instances, in which case the data stored in `dataDirHostPath` can be regenerated from the other `mon` instances. So when performing maintenance on a Talos Linux node with a Rook Ceph cluster (e.g. upgrading the Talos Linux version), it is imperative that care be taken to maintain the health of the Ceph cluster.
diff --git a/website/content/v1.8/kubernetes-guides/configuration/local-storage.md b/website/content/v1.8/kubernetes-guides/configuration/local-storage.md index 738fd973ba5..ef3e9529e05 100644 --- a/website/content/v1.8/kubernetes-guides/configuration/local-storage.md +++ b/website/content/v1.8/kubernetes-guides/configuration/local-storage.md @@ -6,8 +6,6 @@ description: "Using local storage for Kubernetes workloads." Using local storage for Kubernetes workloads implies that the pod will be bound to the node where the local storage is available. Local storage is not replicated, so in case of a machine failure contents of the local storage will be lost. -> Note: when using `EPHEMERAL` Talos partition (`/var`), make sure to use `--preserve` set while performing upgrades, otherwise you risk losing data. - ## `hostPath` mounts The simplest way to use local storage is to use `hostPath` mounts. diff --git a/website/content/v1.8/kubernetes-guides/configuration/replicated-local-storage-with-openebs.md b/website/content/v1.8/kubernetes-guides/configuration/replicated-local-storage-with-openebs.md index cca1d9fd26c..4fce4c293e7 100644 --- a/website/content/v1.8/kubernetes-guides/configuration/replicated-local-storage-with-openebs.md +++ b/website/content/v1.8/kubernetes-guides/configuration/replicated-local-storage-with-openebs.md @@ -13,9 +13,6 @@ The documentation will follow installing OpenEBS via the official Helm chart. Since Talos is different from standard Operating Systems, the OpenEBS components need a little tweaking after the Helm installation. Refer to the OpenEBS [documentation](https://openebs.io/docs/quickstart-guide/installation) if you need further customization. -> NB: Also note that the Talos nodes need to be upgraded with `--preserve` set while running OpenEBS, otherwise you risk losing data.
-> Even though it's possible to recover data from other replicas if the node is wiped during an upgrade, this can require extra operational knowledge to recover, so it's highly recommended to use `--preserve` to avoid data loss. - ## Preparing the nodes Depending on the version of OpenEBS, there is a `hostPath` mount with the path `/var/openebs/local` or `/var/local/openebs`. diff --git a/website/content/v1.8/learn-more/architecture.md b/website/content/v1.8/learn-more/architecture.md index f03cf76d5e9..4b0c38dfbb4 100644 --- a/website/content/v1.8/learn-more/architecture.md +++ b/website/content/v1.8/learn-more/architecture.md @@ -50,7 +50,6 @@ Directories like this are `overlayfs` backed by an XFS file system mounted at `/ The `/var` directory is owned by Kubernetes with the exception of the above `overlayfs` file systems. This directory is writable and used by `etcd` (in the case of control plane nodes), the kubelet, and the CRI (containerd). -Its content survives machine reboots, but it is wiped and lost on machine upgrades and resets, unless the -`--preserve` option of [`talosctl upgrade`]({{< relref "../reference/cli#talosctl-upgrade" >}}) or the +Its content survives machine reboots and machine upgrades, but it is wiped and lost on resets, unless the `--system-labels-to-wipe` option of [`talosctl reset`]({{< relref "../reference/cli#talosctl-reset" >}}) is used. diff --git a/website/content/v1.8/talos-guides/upgrading-talos.md b/website/content/v1.8/talos-guides/upgrading-talos.md index 15f6f9acac6..7b88319c0c6 100644 --- a/website/content/v1.8/talos-guides/upgrading-talos.md +++ b/website/content/v1.8/talos-guides/upgrading-talos.md @@ -65,10 +65,6 @@ as: --image ghcr.io/siderolabs/installer:{{< release >}} ``` -There is an option to this command: `--preserve`, which will explicitly tell Talos to keep ephemeral data intact. -In most cases, it is correct to let Talos perform its default action of erasing the ephemeral data.
-However, for a single-node control-plane, make sure that `--preserve=true`. - Rarely, an upgrade command will fail due to a process holding a file open on disk. In these cases, you can use the `--stage` flag. This puts the upgrade artifacts on disk, and adds some metadata to a disk partition that gets checked very early in the boot process, then reboots the node. diff --git a/website/content/v1.9/advanced/proprietary-kernel-modules.md b/website/content/v1.9/advanced/proprietary-kernel-modules.md index 21e3dc308c5..9a42ef4d7f2 100644 --- a/website/content/v1.9/advanced/proprietary-kernel-modules.md +++ b/website/content/v1.9/advanced/proprietary-kernel-modules.md @@ -61,5 +61,5 @@ aliases: 3. Deploying to your cluster ```bash - talosctl upgrade --image ghcr.io/your-username/talos-installer: --preserve=true + talosctl upgrade --image ghcr.io/your-username/talos-installer: ``` diff --git a/website/content/v1.9/kubernetes-guides/configuration/ceph-with-rook.md b/website/content/v1.9/kubernetes-guides/configuration/ceph-with-rook.md index 17f62bb824f..1ab9c55b57f 100644 --- a/website/content/v1.9/kubernetes-guides/configuration/ceph-with-rook.md +++ b/website/content/v1.9/kubernetes-guides/configuration/ceph-with-rook.md @@ -103,7 +103,6 @@ ceph-filesystem rook-ceph.cephfs.csi.ceph.com Delete Immediate ## Talos Linux Considerations It is important to note that a Rook Ceph cluster saves cluster information directly onto the node (by default `dataDirHostPath` is set to `/var/lib/rook`). -If running only a single `mon` instance, cluster management is little bit more involved, as any time a Talos Linux node is reconfigured or upgraded, the partition that stores the `/var` [file system]({{< relref "../../learn-more/architecture#the-file-system" >}}) is wiped, but the `--preserve` option of [`talosctl upgrade`]({{< relref "../../reference/cli#talosctl-upgrade" >}}) will ensure that doesn't happen. 
By default, Rook configures Ceph to have 3 `mon` instances, in which case the data stored in `dataDirHostPath` can be regenerated from the other `mon` instances. So when performing maintenance on a Talos Linux node with a Rook Ceph cluster (e.g. upgrading the Talos Linux version), it is imperative that care be taken to maintain the health of the Ceph cluster. diff --git a/website/content/v1.9/kubernetes-guides/configuration/local-storage.md b/website/content/v1.9/kubernetes-guides/configuration/local-storage.md index 738fd973ba5..ef3e9529e05 100644 --- a/website/content/v1.9/kubernetes-guides/configuration/local-storage.md +++ b/website/content/v1.9/kubernetes-guides/configuration/local-storage.md @@ -6,8 +6,6 @@ description: "Using local storage for Kubernetes workloads." Using local storage for Kubernetes workloads implies that the pod will be bound to the node where the local storage is available. Local storage is not replicated, so in case of a machine failure contents of the local storage will be lost. -> Note: when using `EPHEMERAL` Talos partition (`/var`), make sure to use `--preserve` set while performing upgrades, otherwise you risk losing data. - ## `hostPath` mounts The simplest way to use local storage is to use `hostPath` mounts. diff --git a/website/content/v1.9/learn-more/architecture.md b/website/content/v1.9/learn-more/architecture.md index f03cf76d5e9..4b0c38dfbb4 100644 --- a/website/content/v1.9/learn-more/architecture.md +++ b/website/content/v1.9/learn-more/architecture.md @@ -50,7 +50,6 @@ Directories like this are `overlayfs` backed by an XFS file system mounted at `/ The `/var` directory is owned by Kubernetes with the exception of the above `overlayfs` file systems. This directory is writable and used by `etcd` (in the case of control plane nodes), the kubelet, and the CRI (containerd).
-Its content survives machine reboots, but it is wiped and lost on machine upgrades and resets, unless the -`--preserve` option of [`talosctl upgrade`]({{< relref "../reference/cli#talosctl-upgrade" >}}) or the +Its content survives machine reboots and machine upgrades, but it is wiped and lost on resets, unless the `--system-labels-to-wipe` option of [`talosctl reset`]({{< relref "../reference/cli#talosctl-reset" >}}) is used. diff --git a/website/content/v1.9/talos-guides/upgrading-talos.md b/website/content/v1.9/talos-guides/upgrading-talos.md index 8122d368958..03a6adf9a69 100644 --- a/website/content/v1.9/talos-guides/upgrading-talos.md +++ b/website/content/v1.9/talos-guides/upgrading-talos.md @@ -62,10 +62,6 @@ as: --image ghcr.io/siderolabs/installer:{{< release >}} ``` -There is an option to this command: `--preserve`, which will explicitly tell Talos to keep ephemeral data intact. -In most cases, it is correct to let Talos perform its default action of erasing the ephemeral data. -However, for a single-node control-plane, make sure that `--preserve=true`. - Rarely, an upgrade command will fail due to a process holding a file open on disk. In these cases, you can use the `--stage` flag. This puts the upgrade artifacts on disk, and adds some metadata to a disk partition that gets checked very early in the boot process, then reboots the node.
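The local-storage guides touched by this patch describe `hostPath` mounts into the node's `/var`. A minimal sketch of such a pod, kept outside the patch itself; the pod name, node name, and path are illustrative assumptions, not taken from the docs:

```yaml
# Hypothetical pod using node-local storage via a hostPath volume.
# Data under /var survives reboots and upgrades, but is lost on
# `talosctl reset` and is not replicated across nodes.
apiVersion: v1
kind: Pod
metadata:
  name: local-storage-example    # illustrative name
spec:
  nodeName: worker-1             # pin to the node that holds the data (assumed name)
  containers:
    - name: app
      image: alpine
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      hostPath:
        path: /var/mnt/local-storage-example   # hypothetical path under /var
        type: DirectoryOrCreate
```

Because `hostPath` data is node-local, the pod must be scheduled onto the same node that holds the data (here via `nodeName`), which is the binding behavior the local-storage guide warns about.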