From af79567dd5771d67347c64f414ff94119f9d0456 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Matthias=20B=C3=BCchse?=
Date: Wed, 20 Nov 2024 22:22:37 +0100
Subject: [PATCH] Revert any non-editorial changes to scs-0214-v1 that
 happened after stabilization
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Signed-off-by: Matthias Büchse
---
 .../scs-0214-v1-k8s-node-distribution.md | 36 -------------------
 1 file changed, 36 deletions(-)

diff --git a/Standards/scs-0214-v1-k8s-node-distribution.md b/Standards/scs-0214-v1-k8s-node-distribution.md
index ffec30efc..ce70e605e 100644
--- a/Standards/scs-0214-v1-k8s-node-distribution.md
+++ b/Standards/scs-0214-v1-k8s-node-distribution.md
@@ -80,42 +80,6 @@ If the standard is used by a provider, the following decisions are binding and v
   can also be scaled vertically first before scaling horizontally.
 - Worker node distribution MUST be indicated to the user through some kind of labeling in order to
   enable (anti)-affinity for workloads over "failure zones".
-- To provide metadata about the node distribution, which also enables testing of this standard,
-  providers MUST label their K8s nodes with the labels listed below.
-  - `topology.kubernetes.io/zone`
-
-    Corresponds with the label described in [K8s labels documentation][k8s-labels-docs].
-    It provides a logical zone of failure on the side of the provider, e.g. a server rack
-    in the same electrical circuit or multiple machines bound to the internet through a
-    singular network structure. How this is defined exactly is up to the plans of the provider.
-    The field gets autopopulated most of the time by either the kubelet or external mechanisms
-    like the cloud controller.
-
-  - `topology.kubernetes.io/region`
-
-    Corresponds with the label described in [K8s labels documentation][k8s-labels-docs].
-    It describes the combination of one or more failure zones into a region or domain, therefore
-    showing a larger entity of logical failure zone. An example for this could be a building
-    containing racks that are put into such a zone, since they're all prone to failure, if e.g.
-    the power for the building is cut. How this is defined exactly is also up to the provider.
-    The field gets autopopulated most of the time by either the kubelet or external mechanisms
-    like the cloud controller.
-
-  - `topology.scs.community/host-id`
-
-    This is an SCS-specific label; it MUST contain the hostID of the physical machine running
-    the hypervisor (NOT: the hostID of a virtual machine). Here, the hostID is an arbitrary identifier,
-    which need not contain the actual hostname, but it should nonetheless be unique to the host.
-    This helps identify the distribution over underlying physical machines,
-    which would be masked if VM hostIDs were used.
-
-## Conformance Tests
-
-The script `k8s-node-distribution-check.py` checks the nodes available with a user-provided
-kubeconfig file. It then determines based on the labels `kubernetes.io/hostname`, `topology.kubernetes.io/zone`,
-`topology.kubernetes.io/region` and `node-role.kubernetes.io/control-plane`, if a distribution
-of the available nodes is present. If this isn't the case, the script produces an error.
-If also produces warnings and informational outputs, if e.g. labels don't seem to be set.
 
 [k8s-ha]: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
 [k8s-large-clusters]: https://kubernetes.io/docs/setup/best-practices/cluster-large/
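
The removed "Conformance Tests" paragraph describes a label-based distribution check driven by a user-provided kubeconfig. Purely as an illustration (this is not part of the patch and not the actual `k8s-node-distribution-check.py`), a minimal sketch of such a check using the official `kubernetes` Python client might look like the following; the function name, the two-distinct-values threshold, and the warning/error wording are assumptions.

```python
# Illustrative sketch only: a minimal label-based node-distribution check
# in the spirit of the script described above. Names and thresholds are assumed.
import sys
from collections import Counter

from kubernetes import client, config

LABELS_OF_INTEREST = (
    "topology.kubernetes.io/zone",
    "topology.kubernetes.io/region",
    "topology.scs.community/host-id",
)


def check_node_distribution(kubeconfig_path: str) -> int:
    """Count distinct zone/region/host values among control-plane nodes; return error count."""
    config.load_kube_config(config_file=kubeconfig_path)
    nodes = client.CoreV1Api().list_node().items

    counters = {key: Counter() for key in LABELS_OF_INTEREST}
    for node in nodes:
        labels = node.metadata.labels or {}
        # Only control-plane nodes are strictly required to be distributed.
        if "node-role.kubernetes.io/control-plane" not in labels:
            continue
        for key in LABELS_OF_INTEREST:
            value = labels.get(key)
            if value is None:
                print(f"WARNING: node {node.metadata.name} is missing label {key}")
            else:
                counters[key][value] += 1

    errors = 0
    for key, counter in counters.items():
        # Fewer than two distinct values means no distribution along this dimension.
        if len(counter) < 2:
            print(f"ERROR: control-plane nodes are not distributed over multiple values of {key}")
            errors += 1
    return errors


if __name__ == "__main__":
    sys.exit(check_node_distribution(sys.argv[1]))
```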