diff --git a/docs/usage/install/underlay/get-started-ovs-zh_CN.md b/docs/usage/install/underlay/get-started-ovs-zh_CN.md
index 64f56b4508..30f284aa45 100644
--- a/docs/usage/install/underlay/get-started-ovs-zh_CN.md
+++ b/docs/usage/install/underlay/get-started-ovs-zh_CN.md
@@ -27,7 +27,7 @@ Spiderpool 可用作 Underlay 网络场景下提供固定 IP 的一种解决方

     * 如果你使用 Underlay 模式,`coordinator` 会在主机上创建 veth 接口,为了防止 NetworkManager 干扰 veth 接口, 导致 Pod 访问异常。我们需要配置 NetworkManager,使其不纳管这些 Veth 接口。

-    * 如果你通过 `Iface`r 创建 Vlan 和 Bond 接口,NetworkManager 可能会干扰这些接口,导致 Pod 访问异常。我们需要配置 NetworkManager,使其不纳管这些 Veth 接口。
+    * 如果你通过 `Ifacer` 创建 Vlan 和 Bond 接口,NetworkManager 可能会干扰这些接口,导致 Pod 访问异常。我们需要配置 NetworkManager,使其不纳管这些 Veth 接口。

     ```shell
     ~# IFACER_INTERFACE=""
@@ -86,7 +86,7 @@ Spiderpool 可用作 Underlay 网络场景下提供固定 IP 的一种解决方
       serviceCIDR:
       - 10.233.0.0/18
     ```
-    
+
     如果状态为 `NotReady`,这将会阻止 Pod 被创建。目前 Spiderpool:

     * 优先通过查询 `kube-system/kubeadm-config` ConfigMap 获取集群的 Pod 和 Service 子网。
     * 如果 `kubeadm-config` 不存在导致无法获取集群子网,那么 Spiderpool 会从 `Kube-controller-manager Pod` 中获取集群 Pod 和 Service 的子网。 如果您集群的 Kube-controller-manager 组件以 `systemd` 等方式而不是以静态 Pod 运行。那么 Spiderpool 仍然无法获取集群的子网信息。
@@ -119,12 +119,22 @@ Spiderpool 可用作 Underlay 网络场景下提供固定 IP 的一种解决方
     ```bash
     ~# ovs-vsctl add-br br1
     ~# ovs-vsctl add-port br1 eth0
-    ~# ip addr add <IP地址>/<子网掩码> dev br1
     ~# ip link set br1 up
-    ~# ip route add default via <默认网关IP> dev br1
     ```

-    请把以上命令配置在系统行动脚本中,以在主机重启时能够生效
+    请把以上命令配置在系统启动脚本中,以在主机重启时能够生效。如下是在 Ubuntu 中使用 netplan 的示例(对于其他操作系统,可以考虑使用 nmcli):在 /etc/netplan 目录下创建 12-br1.yaml 后,通过 netplan apply 生效。
+
+    为确保 br1 可用,请检查 eth0 网卡是否也被 netplan 纳管,以保证在重启等场景下 eth0 状态正常。
+
+    ```yaml: 12-br1.yaml
+    network:
+      version: 2
+      renderer: networkd
+      ethernets:
+        br1:
+          addresses:
+            - "<IP地址>/<子网掩码>" # 172.10.10.220/16
+    ```

 创建后,可以在每个节点上查看到如下的网桥信息:

diff --git a/docs/usage/install/underlay/get-started-ovs.md b/docs/usage/install/underlay/get-started-ovs.md
index d41ff2d8dc..43abac11e8 100644
--- a/docs/usage/install/underlay/get-started-ovs.md
+++ b/docs/usage/install/underlay/get-started-ovs.md
@@ -88,7 +88,7 @@ Spiderpool can be used as a solution to provide fixed IPs in an Underlay network

     * Spiderpool prioritizes obtaining the cluster's Pod and Service subnets by querying the kube-system/kubeadm-config ConfigMap.
     * If the kubeadm-config does not exist, causing the failure to obtain the cluster subnet, Spiderpool will attempt to retrieve the cluster Pod and Service subnets from the kube-controller-manager Pod.
-    
+
     If the kube-controller-manager component in your cluster runs in systemd mode instead of as a static Pod, Spiderpool still cannot retrieve the cluster's subnet information.

     If both of the above methods fail, Spiderpool will synchronize the status.phase as NotReady, preventing Pod creation. To address such abnormal situations, we can manually create the kubeadm-config ConfigMap and correctly configure the cluster's subnet information:
@@ -117,12 +117,22 @@ Spiderpool can be used as a solution to provide fixed IPs in an Underlay network
     ```bash
     ~# ovs-vsctl add-br br1
     ~# ovs-vsctl add-port br1 eth0
-    ~# ip addr add <IP>/<subnet mask> dev br1
     ~# ip link set br1 up
-    ~# ip route add default via <gateway IP> dev br1
     ```

-    Pleade include these commands in your system startup script to ensure they take effect after host restarts.
+    Please include these commands in your system startup script to ensure they take effect after host restarts. The following is an example of using netplan on Ubuntu (for other operating systems, consider using nmcli): create 12-br1.yaml in the /etc/netplan directory, then run netplan apply for it to take effect.
+
+    To ensure that br1 is available, check whether the eth0 NIC is also managed by netplan, so that eth0 remains in a normal state in scenarios such as host restarts.
+
+    ```yaml: 12-br1.yaml
+    network:
+      version: 2
+      renderer: networkd
+      ethernets:
+        br1:
+          addresses:
+            - "<IP>/<subnet mask>" # 172.10.10.220/16
+    ```

 After creating the bridge, you will be able to view its information on each node:
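
The netplan snippet added by this patch assigns an address to br1 but does not declare eth0, which the surrounding text says should also be under netplan management. A fuller sketch of what that could look like (hypothetical: only the interface names eth0/br1 and the example address 172.10.10.220/16 come from the patch; the gateway value is a placeholder to adjust for your network):

```yaml
# Sketch of a more complete 12-br1.yaml (not part of the patch).
# eth0 is enslaved to the OVS bridge by ovs-vsctl, so it carries no IP;
# declaring it here keeps netplan/systemd-networkd bringing it up on boot.
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false               # no address on the uplink; br1 owns it
    br1:
      addresses:
        - "172.10.10.220/16"     # example address from the patch
      routes:
        - to: default
          via: 172.10.0.1        # hypothetical gateway, replace with yours
```

With a file like this, networkd configures both interfaces at boot and the startup script only needs to recreate the OVS bridge topology (`ovs-vsctl add-br` / `add-port`), replacing the removed `ip addr` and `ip route` commands.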