From 1101abb91ef943d53a8e95055bfc075b2e6e6e42 Mon Sep 17 00:00:00 2001
From: Erik Sundell
Date: Tue, 7 May 2024 13:56:17 +0200
Subject: [PATCH] docs: spelling and grammar fixes

Co-authored-by: Sarah Gibson <44771837+sgibson91@users.noreply.github.com>
---
 docs/howto/upgrade-cluster/index.md               |  4 ++--
 docs/howto/upgrade-cluster/upgrade-disruptions.md | 14 +++++++-------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/docs/howto/upgrade-cluster/index.md b/docs/howto/upgrade-cluster/index.md
index 79c36e0011..bc05598406 100644
--- a/docs/howto/upgrade-cluster/index.md
+++ b/docs/howto/upgrade-cluster/index.md
@@ -18,8 +18,8 @@ procedures to community champions and users.
 
 Up until now we have made some k8s cluster upgrades opportunistically,
 especially for clusters that has showed little to no activity during some
-periods. Other cluster upgrades has been scheduled with community champions, and
-some in shared clusters has been announced ahead of time.
+periods. Other cluster upgrades have been scheduled with community champions, and
+some in shared clusters have been announced ahead of time.
 ```
 
 (upgrade-cluster:ambition)=
diff --git a/docs/howto/upgrade-cluster/upgrade-disruptions.md b/docs/howto/upgrade-cluster/upgrade-disruptions.md
index 8f30967fa0..75a6a46fbf 100644
--- a/docs/howto/upgrade-cluster/upgrade-disruptions.md
+++ b/docs/howto/upgrade-cluster/upgrade-disruptions.md
@@ -14,7 +14,7 @@ the cost savings are minimal we only create for regional clusters now.
 
 If upgrading a zonal cluster, the single k8s api-server will be temporarily
 unavailable, but that is not a big problem as user servers and JupyterHub will
-remains accessible. The brief disruption is that JupyterHub won't be able to
+remain accessible. The brief disruption is that JupyterHub won't be able to
 start new user servers, and user servers won't be able to create or scale their
 dask-clusters.
 
@@ -22,8 +22,8 @@ dask-clusters.
 
 When upgrading a cloud provider managed k8s cluster, it may upgrade some managed
 workload part of the k8s cluster, such as calico that enforces NetworkPolicy
-rules. Maybe this could cause a disruption for users, but its not currently know
-to do so.
+rules. Maybe this could cause a disruption for users, but its not currently known
+if it does and in what manner.
 
 ## Core node pool disruptions
 
@@ -38,21 +38,21 @@ pods that are proxying network traffic associated with incoming connections.
 
 To shut down such pod means to break connections from users working against the
 user servers. A broken connection can be re-established if there is another
-replica of this pod is ready to accept a new connection.
+replica of this pod ready to accept a new connection.
 
 We are currently running only one replica of the `ingress-nginx-controller` pod,
 and we won't have issues with this during rolling updates, such as when the
 Deployment's pod template specification is changed or when manually running
 `kubectl rollout restart -n support deploy/support-ingress-nginx-controller`. We
-will however have broken connections and user pods unable to establish new
+will however have broken connections and user pods unable to establish new
 connections directly if `kubectl delete` is used on this single pod, or
 `kubectl drain` is used on the node.
 
 ### hub pod disruptions
 
-Our JupyterHub installations each has a single `hub` pod, and having more isn't
+Our JupyterHub installations each have a single `hub` pod, and having more isn't
 supported by JupyterHub itself. Due to this, and because it has a disk mounted
-to it that can only be mounted at the same time to one pod, it isn't configured
+to it that can only be mounted to one pod at a time, it isn't configured
 to do rolling updates. When the `hub` pod isn't running, users can't visit
 `/hub` paths, but they can