
Commit
docs: remove the mention of preserve flag for Talos 1.8+
This flag no longer exists in Talos 1.8 and higher.

Fixes #10172

Signed-off-by: Dmitriy Matrenichev <[email protected]>
DmitriyMV committed Jan 20, 2025
1 parent bde516f commit 99ba539
Showing 11 changed files with 4 additions and 23 deletions.
@@ -61,5 +61,5 @@ aliases:
3. Deploying to your cluster

```bash
-talosctl upgrade --image ghcr.io/your-username/talos-installer:<talos version> --preserve=true
+talosctl upgrade --image ghcr.io/your-username/talos-installer:<talos version>
```
@@ -103,7 +103,6 @@ ceph-filesystem rook-ceph.cephfs.csi.ceph.com Delete Immediate
## Talos Linux Considerations

It is important to note that a Rook Ceph cluster saves cluster information directly onto the node (by default `dataDirHostPath` is set to `/var/lib/rook`).
-If running only a single `mon` instance, cluster management is a little bit more involved, as any time a Talos Linux node is reconfigured or upgraded, the partition that stores the `/var` [file system]({{< relref "../../learn-more/architecture#the-file-system" >}}) is wiped, but the `--preserve` option of [`talosctl upgrade`]({{< relref "../../reference/cli#talosctl-upgrade" >}}) will ensure that doesn't happen.

By default, Rook configures Ceph to have 3 `mon` instances, in which case the data stored in `dataDirHostPath` can be regenerated from the other `mon` instances.
So when performing maintenance on a Talos Linux node with a Rook Ceph cluster (e.g. upgrading the Talos Linux version), it is imperative that care be taken to maintain the health of the Ceph cluster.
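Before taking a node down for maintenance, the Ceph cluster's health can be checked from the Rook toolbox. A minimal sketch, assuming the `rook-ceph-tools` deployment is installed in the `rook-ceph` namespace (names and the node placeholder are illustrative, not part of this commit):

```shell
# Sketch: confirm the Ceph cluster is healthy before upgrading a Talos node.
# Assumes the rook-ceph-tools toolbox deployment exists in the rook-ceph namespace.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph health detail

# Only once Ceph reports HEALTH_OK, upgrade the node (no --preserve on Talos 1.8+).
talosctl upgrade --nodes <node IP> --image ghcr.io/siderolabs/installer:{{< release >}}
```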
@@ -6,8 +6,6 @@ description: "Using local storage for Kubernetes workloads."
Using local storage for Kubernetes workloads implies that the pod will be bound to the node where the local storage is available.
Local storage is not replicated, so in case of a machine failure contents of the local storage will be lost.

-> Note: when using `EPHEMERAL` Talos partition (`/var`), make sure to use `--preserve` set while performing upgrades, otherwise you risk losing data.
-
## `hostPath` mounts

The simplest way to use local storage is to use `hostPath` mounts.
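A minimal sketch of such a `hostPath` mount, assuming a writable host directory under `/var` (the pod name and path here are illustrative, not taken from the Talos docs):

```shell
# Sketch: a pod writing to node-local storage via a hostPath mount.
# On Talos, writable host paths must live under /var; the exact path is up to you.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-example
spec:
  containers:
    - name: app
      image: alpine
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: local-data
          mountPath: /data
  volumes:
    - name: local-data
      hostPath:
        path: /var/local-data
        type: DirectoryOrCreate
EOF
```

Note that the pod is tied to whichever node holds `/var/local-data`; rescheduling it elsewhere loses access to the data.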
@@ -13,9 +13,6 @@ The documentation will follow installing OpenEBS via the official Helm chart.
Since Talos is different from standard Operating Systems, the OpenEBS components need a little tweaking after the Helm installation.
Refer to the OpenEBS [documentation](https://openebs.io/docs/quickstart-guide/installation) if you need further customization.

-> NB: Also note that the Talos nodes need to be upgraded with `--preserve` set while running OpenEBS, otherwise you risk losing data.
-> Even though it's possible to recover data from other replicas if the node is wiped during an upgrade, this can require extra operational knowledge to recover, so it's highly recommended to use `--preserve` to avoid data loss.
-
## Preparing the nodes

Depending on the version of OpenEBS, there is a `hostPath` mount with the path `/var/openebs/local` or `/var/local/openebs`.
3 changes: 1 addition & 2 deletions website/content/v1.8/learn-more/architecture.md
@@ -50,7 +50,6 @@ Directories like this are `overlayfs` backed by an XFS file system mounted at `/

The `/var` directory is owned by Kubernetes with the exception of the above `overlayfs` file systems.
This directory is writable and used by `etcd` (in the case of control plane nodes), the kubelet, and the CRI (containerd).
-Its content survives machine reboots, but it is wiped and lost on machine upgrades and resets, unless the
-`--preserve` option of [`talosctl upgrade`]({{< relref "../reference/cli#talosctl-upgrade" >}}) or the
+Its content survives machine reboots and on machine upgrades, but it is wiped and lost on resets, unless the
`--system-labels-to-wipe` option of [`talosctl reset`]({{< relref "../reference/cli#talosctl-reset" >}})
is used.
4 changes: 0 additions & 4 deletions website/content/v1.8/talos-guides/upgrading-talos.md
@@ -65,10 +65,6 @@ as:
--image ghcr.io/siderolabs/installer:{{< release >}}
```

-There is an option to this command: `--preserve`, which will explicitly tell Talos to keep ephemeral data intact.
-In most cases, it is correct to let Talos perform its default action of erasing the ephemeral data.
-However, for a single-node control-plane, make sure that `--preserve=true`.
-
Rarely, an upgrade command will fail due to a process holding a file open on disk.
In these cases, you can use the `--stage` flag.
This puts the upgrade artifacts on disk, and adds some metadata to a disk partition that gets checked very early in the boot process, then reboots the node.
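The staged path described above can be sketched as a single invocation (the node placeholder is illustrative):

```shell
# Sketch: stage the upgrade so it is applied early on the next boot,
# sidestepping failures caused by a process holding a file open on disk.
talosctl upgrade --nodes <node IP> \
  --image ghcr.io/siderolabs/installer:{{< release >}} \
  --stage
```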
@@ -61,5 +61,5 @@ aliases:
3. Deploying to your cluster

```bash
-talosctl upgrade --image ghcr.io/your-username/talos-installer:<talos version> --preserve=true
+talosctl upgrade --image ghcr.io/your-username/talos-installer:<talos version>
```
@@ -103,7 +103,6 @@ ceph-filesystem rook-ceph.cephfs.csi.ceph.com Delete Immediate
## Talos Linux Considerations

It is important to note that a Rook Ceph cluster saves cluster information directly onto the node (by default `dataDirHostPath` is set to `/var/lib/rook`).
-If running only a single `mon` instance, cluster management is a little bit more involved, as any time a Talos Linux node is reconfigured or upgraded, the partition that stores the `/var` [file system]({{< relref "../../learn-more/architecture#the-file-system" >}}) is wiped, but the `--preserve` option of [`talosctl upgrade`]({{< relref "../../reference/cli#talosctl-upgrade" >}}) will ensure that doesn't happen.

By default, Rook configures Ceph to have 3 `mon` instances, in which case the data stored in `dataDirHostPath` can be regenerated from the other `mon` instances.
So when performing maintenance on a Talos Linux node with a Rook Ceph cluster (e.g. upgrading the Talos Linux version), it is imperative that care be taken to maintain the health of the Ceph cluster.
@@ -6,8 +6,6 @@ description: "Using local storage for Kubernetes workloads."
Using local storage for Kubernetes workloads implies that the pod will be bound to the node where the local storage is available.
Local storage is not replicated, so in case of a machine failure contents of the local storage will be lost.

-> Note: when using `EPHEMERAL` Talos partition (`/var`), make sure to use `--preserve` set while performing upgrades, otherwise you risk losing data.
-
## `hostPath` mounts

The simplest way to use local storage is to use `hostPath` mounts.
3 changes: 1 addition & 2 deletions website/content/v1.9/learn-more/architecture.md
@@ -50,7 +50,6 @@ Directories like this are `overlayfs` backed by an XFS file system mounted at `/

The `/var` directory is owned by Kubernetes with the exception of the above `overlayfs` file systems.
This directory is writable and used by `etcd` (in the case of control plane nodes), the kubelet, and the CRI (containerd).
-Its content survives machine reboots, but it is wiped and lost on machine upgrades and resets, unless the
-`--preserve` option of [`talosctl upgrade`]({{< relref "../reference/cli#talosctl-upgrade" >}}) or the
+Its content survives machine reboots and on machine upgrades, but it is wiped and lost on resets, unless the
`--system-labels-to-wipe` option of [`talosctl reset`]({{< relref "../reference/cli#talosctl-reset" >}})
is used.
4 changes: 0 additions & 4 deletions website/content/v1.9/talos-guides/upgrading-talos.md
@@ -62,10 +62,6 @@ as:
--image ghcr.io/siderolabs/installer:{{< release >}}
```

-There is an option to this command: `--preserve`, which will explicitly tell Talos to keep ephemeral data intact.
-In most cases, it is correct to let Talos perform its default action of erasing the ephemeral data.
-However, for a single-node control-plane, make sure that `--preserve=true`.
-
Rarely, an upgrade command will fail due to a process holding a file open on disk.
In these cases, you can use the `--stage` flag.
This puts the upgrade artifacts on disk, and adds some metadata to a disk partition that gets checked very early in the boot process, then reboots the node.
