diff --git a/docs/usage/install/underlay/get-started-ovs-zh_CN.md b/docs/usage/install/underlay/get-started-ovs-zh_CN.md
index 0a2baea24b..81bfd5e335 100644
--- a/docs/usage/install/underlay/get-started-ovs-zh_CN.md
+++ b/docs/usage/install/underlay/get-started-ovs-zh_CN.md
@@ -25,7 +25,7 @@ Spiderpool 可用作 Underlay 网络场景下提供固定 IP 的一种解决方
 * 如果你使用 Underlay 模式,`coordinator` 会在主机上创建 veth 接口,为了防止 NetworkManager 干扰 veth 接口, 导致 Pod 访问异常。我们需要配置 NetworkManager,使其不纳管这些 Veth 接口。
-* 如果你通过 `Iface`r 创建 Vlan 和 Bond 接口,NetworkManager 可能会干扰这些接口,导致 Pod 访问异常。我们需要配置 NetworkManager,使其不纳管这些 Veth 接口。
+* 如果你通过 `Ifacer` 创建 Vlan 和 Bond 接口,NetworkManager 可能会干扰这些接口,导致 Pod 访问异常。我们需要配置 NetworkManager,使其不纳管这些接口。

     ```shell
     ~# IFACER_INTERFACE=""

@@ -36,38 +36,35 @@ Spiderpool 可用作 Underlay 网络场景下提供固定 IP 的一种解决方
     ~# systemctl restart NetworkManager
     ```

+## 节点上配置 Open vSwitch 网桥
-## 安装 Spiderpool
+如下是创建并配置持久化 OVS Bridge 的示例。本文以 `eth0` 网卡为例,相关操作需要在每个节点上执行。
-1. 安装 Spiderpool。
+### Ubuntu 系统使用 netplan 持久化 OVS Bridge
-    ```bash
-    helm repo add spiderpool https://spidernet-io.github.io/spiderpool
-    helm repo update spiderpool
-    helm install spiderpool spiderpool/spiderpool --namespace kube-system --set multus.multusCNI.defaultCniCRName="ovs-conf" --set plugins.installOvsCNI=true
-    ```
+如果您使用的是 Ubuntu 系统,可以参考本章节通过 netplan 配置 OVS Bridge。
-    > 如果未安装 ovs-cni, 可以通过 Helm 参数 '-set plugins.installOvsCNI=true' 安装它。
-    >
-    > 如果您是国内用户,可以指定参数 `--set global.imageRegistryOverride=ghcr.m.daocloud.io` 以帮助您快速的拉取镜像。
-    >
-    > 通过 `multus.multusCNI.defaultCniCRName` 指定 multus 默认使用的 CNI 的 NetworkAttachmentDefinition 实例名。如果 `multus.multusCNI.defaultCniCRName` 选项不为空,则安装后会自动生成一个数据为空的 NetworkAttachmentDefinition 对应实例。如果 `multus.multusCNI.defaultCniCRName` 选项为空,会尝试通过 /etc/cni/net.d 目录下的第一个 CNI 配置来创建对应的 NetworkAttachmentDefinition 实例,否则会自动生成一个名为 `default` 的 NetworkAttachmentDefinition 实例,以完成 multus 的安装。
-
-2. 在每个节点上配置 Open vSwitch 网桥。
-
-    创建网桥并配置网桥,以 `eth0` 为例。
+1. 创建 OVS Bridge

     ```bash
     ~# ovs-vsctl add-br br1
     ~# ovs-vsctl add-port br1 eth0
-    ~# ip addr add <IP>/<子网掩码> dev br1
     ~# ip link set br1 up
-    ~# ip route add default via <默认网关IP> dev br1
     ```

-    请把以上命令配置在系统行动脚本中,以在主机重启时能够生效
+2. 在 /etc/netplan 目录下创建 12-br1.yaml 后,通过 `netplan apply` 使其生效。为确保重启主机等场景下 br1 仍然可用,请检查 eth0 网卡是否也被 netplan 纳管。
+
+    ```yaml: 12-br1.yaml
+    network:
+      version: 2
+      renderer: networkd
+      ethernets:
+        br1:
+          addresses:
+            - "<IP>/<子网掩码>" # 172.18.10.10/16
+    ```

-    创建后,可以在每个节点上查看到如下的网桥信息:
+3. 创建后,可以在每个节点上查看到如下的网桥信息:

     ```bash
     ~# ovs-vsctl show
@@ -81,8 +78,162 @@ Spiderpool 可用作 Underlay 网络场景下提供固定 IP 的一种解决方
             Port veth97fb4795
                 Interface veth97fb4795
         ovs_version: "2.17.3"
+
+    ~# ip a show br1
+    208: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
+        link/ether 00:50:56:b4:5f:fd brd ff:ff:ff:ff:ff:ff
+        inet 172.18.10.10/16 brd 172.18.255.255 scope global noprefixroute br1
+           valid_lft forever preferred_lft forever
+        inet6 fe80::4f28:8ef1:6b82:a9e4/64 scope link noprefixroute
+           valid_lft forever preferred_lft forever
+    ```
+
+### Fedora、CentOS 等使用 NetworkManager 持久化 OVS Bridge
+
+如果您使用 Fedora、CentOS 等操作系统,推荐使用 NetworkManager 持久化 OVS Bridge。这种方式不局限于特定操作系统,更为通用。
+
+1. 使用 NetworkManager 持久化 OVS Bridge,需要先安装 OVS NetworkManager 插件,示例如下:
+
+    ```bash
+    ~# sudo dnf install -y NetworkManager-ovs
+    ~# sudo systemctl restart NetworkManager
+    ```
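+
+    注:NetworkManager-ovs 插件只负责让 NetworkManager 持久化 OVS 配置,Open vSwitch 本身也需要安装并处于运行状态。下面是一个参考示例(包名与服务名可能因发行版而异):
+
+    ```bash
+    # 安装并启用 Open vSwitch(以 Fedora/CentOS 的 dnf 为例)
+    ~# sudo dnf install -y openvswitch
+    ~# sudo systemctl enable --now openvswitch
+    ```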
+2. 创建 OVS 网桥、端口和接口。
+
+    ```bash
+    ~# sudo nmcli con add type ovs-bridge conn.interface br1 con-name br1
+    ~# sudo nmcli con add type ovs-port conn.interface br1-port master br1 con-name br1-port
+    ~# sudo nmcli con add type ovs-interface slave-type ovs-port conn.interface br1 master br1-port con-name br1-int
+    ```
+
+3. 在网桥上创建另一个端口,并将物理设备中的 eth0 网卡作为其以太网接口,以便真实流量能够在网络上流转。
+
+    ```bash
+    ~# sudo nmcli con add type ovs-port conn.interface ovs-port-eth0 master br1 con-name ovs-port-eth0
+    ~# sudo nmcli con add type ethernet conn.interface eth0 master ovs-port-eth0 con-name ovs-port-eth0-int
+    ```
+
+4. 配置与激活 ovs 网桥。
+
+    通过设置静态 IP 的方式配置网桥:
+
+    ```bash
+    ~# sudo nmcli con modify br1-int ipv4.method static ipv4.address "<IP>/<子网掩码>" # 172.18.10.10/16
+    ```
+
+    激活网桥:
+
+    ```bash
+    ~# sudo nmcli con down "eth0"
+    ~# sudo nmcli con up ovs-port-eth0-int
+    ~# sudo nmcli con up br1-int
+    ```
+
+5. 创建后,可以在每个节点上查看到类似如下的信息。
+
+    ```bash
+    ~# nmcli c
+    br1-int            dbb1c9be-e1ab-4659-8d4b-564e3f8858fa  ovs-interface  br1
+    br1                a85626c1-2392-443b-a767-f86a57a1cff5  ovs-bridge     br1
+    br1-port           fe30170f-32d2-489e-9ca3-62c1f5371c6c  ovs-port       br1-port
+    ovs-port-eth0      a43771a9-d840-4d2d-b1c3-c501a6da80ed  ovs-port       ovs-port-eth0
+    ovs-port-eth0-int  1334f49b-dae4-4225-830b-4d101ab6fad6  ethernet       eth0
+
+    ~# ovs-vsctl show
+    203dd6d0-45f4-4137-955e-c4c36b9709e6
+        Bridge br1
+            Port ovs-port-eth0
+                Interface eth0
+                    type: system
+            Port br1-port
+                Interface br1
+                    type: internal
+        ovs_version: "3.2.1"
+
+    ~# ip a show br1
+    208: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
+        link/ether 00:50:56:b4:5f:fd brd ff:ff:ff:ff:ff:ff
+        inet 172.18.10.10/16 brd 172.18.255.255 scope global noprefixroute br1
+           valid_lft forever preferred_lft forever
+        inet6 fe80::4f28:8ef1:6b82:a9e4/64 scope link noprefixroute
+           valid_lft forever preferred_lft forever
+    ```
+
+## 安装 Spiderpool
+
+1. 安装 Spiderpool。
+
+    ```bash
+    helm repo add spiderpool https://spidernet-io.github.io/spiderpool
+    helm repo update spiderpool
+    helm install spiderpool spiderpool/spiderpool --namespace kube-system --set multus.multusCNI.defaultCniCRName="ovs-conf" --set plugins.installOvsCNI=true
+    ```
+
+    > 如果未安装 ovs-cni,可以通过 Helm 参数 `--set plugins.installOvsCNI=true` 安装它。
+    >
+    > 如果您是国内用户,可以指定参数 `--set global.imageRegistryOverride=ghcr.m.daocloud.io` 以帮助您快速地拉取镜像。
+    >
+    > 通过 `multus.multusCNI.defaultCniCRName` 指定 multus 默认使用的 CNI 的 NetworkAttachmentDefinition 实例名。如果 `multus.multusCNI.defaultCniCRName` 选项不为空,则安装后会自动生成一个数据为空的对应 NetworkAttachmentDefinition 实例。如果 `multus.multusCNI.defaultCniCRName` 选项为空,会尝试通过 /etc/cni/net.d 目录下的第一个 CNI 配置来创建对应的 NetworkAttachmentDefinition 实例,否则会自动生成一个名为 `default` 的 NetworkAttachmentDefinition 实例,以完成 multus 的安装。
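+
+    安装完成后,可以先确认 Spiderpool 相关组件的 Pod 均处于 Running 状态,示例如下:
+
+    ```bash
+    ~# kubectl get pods -n kube-system | grep spiderpool
+    ```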
+2. 检查 `Spidercoordinator.status` 中的 Phase 是否为 Synced:
+
+    ```shell
+    ~# kubectl get spidercoordinators.spiderpool.spidernet.io default -o yaml
+    apiVersion: spiderpool.spidernet.io/v2beta1
+    kind: SpiderCoordinator
+    metadata:
+      creationTimestamp: "2023-10-18T08:31:09Z"
+      finalizers:
+      - spiderpool.spidernet.io
+      generation: 7
+      name: default
+      resourceVersion: "195405"
+      uid: 8bdceced-15db-497b-be07-81cbcba7caac
+    spec:
+      detectGateway: false
+      detectIPConflict: false
+      hijackCIDR:
+      - 169.254.0.0/16
+      hostRPFilter: 0
+      hostRuleTable: 500
+      mode: auto
+      podCIDRType: calico
+      podDefaultRouteNIC: ""
+      podMACPrefix: ""
+      tunePodRoutes: true
+    status:
+      overlayPodCIDR: []
+      phase: Synced
+      serviceCIDR:
+      - 10.233.0.0/18
+    ```
+
+    如果状态为 `NotReady`,这将会阻止 Pod 被创建。目前 Spiderpool:
+
+    * 优先通过查询 `kube-system/kubeadm-config` ConfigMap 获取集群的 Pod 和 Service 子网。
+    * 如果 `kubeadm-config` 不存在导致无法获取集群子网,那么 Spiderpool 会从 kube-controller-manager Pod 中获取集群 Pod 和 Service 的子网。如果您集群的 kube-controller-manager 组件以 `systemd` 等方式而不是以静态 Pod 运行,那么 Spiderpool 仍然无法获取集群的子网信息。
+
+    如果上面两种方式都失败,Spiderpool 会将 status.phase 同步为 NotReady,这将会阻止 Pod 被创建。我们可以手动创建 kubeadm-config ConfigMap,并正确配置集群的子网信息:
+
+    ```shell
+    export POD_SUBNET=<POD_SUBNET>
+    export SERVICE_SUBNET=<SERVICE_SUBNET>
+    cat << EOF | kubectl apply -f -
+    apiVersion: v1
+    kind: ConfigMap
+    metadata:
+      name: kubeadm-config
+      namespace: kube-system
+    data:
+      ClusterConfiguration: |
+        networking:
+          podSubnet: ${POD_SUBNET}
+          serviceSubnet: ${SERVICE_SUBNET}
+    EOF
+    ```
+
+    一旦创建完成,Spiderpool 将会自动同步其状态。
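+
+    随后可以通过如下示例命令确认 `status.phase` 已同步为 Synced:
+
+    ```shell
+    ~# kubectl get spidercoordinators.spiderpool.spidernet.io default -o jsonpath='{.status.phase}'
+    Synced
+    ```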
 3. 创建 SpiderIPPool 实例。

     Pod 会从该 IP 池中获取 IP,进行 Underlay 的网络通讯,所以该 IP 池的子网需要与接入的 Underlay 子网对应。以下是创建相关的 SpiderIPPool 示例:
diff --git a/docs/usage/install/underlay/get-started-ovs.md b/docs/usage/install/underlay/get-started-ovs.md
index 6faca9235b..ae51090c60 100644
--- a/docs/usage/install/underlay/get-started-ovs.md
+++ b/docs/usage/install/underlay/get-started-ovs.md
@@ -36,37 +36,35 @@ Spiderpool can be used as a solution to provide fixed IPs in an Underlay network
     ~# systemctl restart NetworkManager
     ```

-## Install Spiderpool
+## Configure Open vSwitch bridge on the node
-1. Install Spiderpool.
+The following is an example of creating and configuring a persistent OVS Bridge, using the `eth0` NIC as an example. It needs to be executed on each node.
-    ```bash
-    helm repo add spiderpool https://spidernet-io.github.io/spiderpool
-    helm repo update spiderpool
-    helm install spiderpool spiderpool/spiderpool --namespace kube-system --set multus.multusCNI.defaultCniCRName="ovs-conf" --set plugins.installOvsCNI=true
-    ```
+### Persist the OVS Bridge with netplan on Ubuntu
-    > If ovs-cni is not installed, you can install it by specifying the Helm parameter `--set plugins.installOvsCNI=true`.
-    >
-    > If you are mainland user who is not available to access ghcr.io,You can specify the parameter `-set global.imageRegistryOverride=ghcr.m.daocloud.io` to avoid image pulling failures for Spiderpool.
-    >
-    > Specify the name of the NetworkAttachmentDefinition instance for the default CNI used by Multus via `multus.multusCNI.defaultCniCRName`. If the `multus.multusCNI.defaultCniCRName` option is provided, an empty NetworkAttachmentDefinition instance will be automatically generated upon installation. Otherwise, Multus will attempt to create a NetworkAttachmentDefinition instance based on the first CNI configuration found in the /etc/cni/net.d directory. If no suitable configuration is found, a NetworkAttachmentDefinition instance named `default` will be created to complete the installation of Multus.
-
-2. To configure Open vSwitch bridges on each node:
+If you are using an Ubuntu system, you can refer to this section to configure the OVS Bridge through `netplan`.

-    Create a bridge and configure it using `eth0`` as an example.
+1. Create OVS Bridge

     ```bash
     ~# ovs-vsctl add-br br1
     ~# ovs-vsctl add-port br1 eth0
-    ~# ip addr add <IP>/<subnet mask> dev br1
     ~# ip link set br1 up
-    ~# ip route add default via <default gateway IP> dev br1
     ```

-    Pleade include these commands in your system startup script to ensure they take effect after host restarts.
+2. After creating 12-br1.yaml in the /etc/netplan directory, run `netplan apply` for it to take effect. To ensure that br1 remains available in scenarios such as a host restart, please check whether the eth0 NIC is also managed by netplan.

-    After creating the bridge, you will be able to view its information on each node:
+    ```yaml: 12-br1.yaml
+    network:
+      version: 2
+      renderer: networkd
+      ethernets:
+        br1:
+          addresses:
+            - "<IP>/<subnet mask>" # 172.18.10.10/16
+    ```
+
+3. After creation, you can view the following bridge information on each node:

     ```bash
     ~# ovs-vsctl show
@@ -80,6 +78,158 @@ Spiderpool can be used as a solution to provide fixed IPs in an Underlay network
             Port veth97fb4795
                 Interface veth97fb4795
         ovs_version: "2.17.3"
+
+    ~# ip a show br1
+    208: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
+        link/ether 00:50:56:b4:5f:fd brd ff:ff:ff:ff:ff:ff
+        inet 172.18.10.10/16 brd 172.18.255.255 scope global noprefixroute br1
+           valid_lft forever preferred_lft forever
+        inet6 fe80::4f28:8ef1:6b82:a9e4/64 scope link noprefixroute
+           valid_lft forever preferred_lft forever
+    ```
+
+### Persist the OVS Bridge with NetworkManager on Fedora, CentOS, etc.
+
+If you use an OS such as Fedora or CentOS, it is recommended to persist the OVS Bridge with NetworkManager. This method is more general and not tied to a particular operating system.
+
+1. To use NetworkManager to persist the OVS Bridge, you need to install the OVS NetworkManager plugin, for example:
+
+    ```bash
+    ~# sudo dnf install -y NetworkManager-ovs
+    ~# sudo systemctl restart NetworkManager
+    ```
+
+2. Create the OVS bridge, ports, and interfaces.
+
+    ```bash
+    ~# sudo nmcli con add type ovs-bridge conn.interface br1 con-name br1
+    ~# sudo nmcli con add type ovs-port conn.interface br1-port master br1 con-name br1-port
+    ~# sudo nmcli con add type ovs-interface slave-type ovs-port conn.interface br1 master br1-port con-name br1-int
+    ```
+
+3. Create another port on the bridge and attach the physical eth0 NIC as its Ethernet interface, so that real traffic can flow on the network.
+
+    ```bash
+    ~# sudo nmcli con add type ovs-port conn.interface ovs-port-eth0 master br1 con-name ovs-port-eth0
+    ~# sudo nmcli con add type ethernet conn.interface eth0 master ovs-port-eth0 con-name ovs-port-eth0-int
+    ```
+
+4. Configure and activate the ovs bridge.
+
+    Configure the bridge by setting a static IP:
+
+    ```bash
+    ~# sudo nmcli con modify br1-int ipv4.method static ipv4.address "<IP>/<subnet mask>" # 172.18.10.10/16
+    ```
+
+    Activate the bridge:
+
+    ```bash
+    ~# sudo nmcli con down "eth0"
+    ~# sudo nmcli con up ovs-port-eth0-int
+    ~# sudo nmcli con up br1-int
+    ```
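+
+    Note that `nmcli con down "eth0"` temporarily interrupts connectivity on eth0 until the bridge interface comes up. If you manage the node over SSH through eth0, a sketch that chains the three commands in a single invocation may be safer:
+
+    ```bash
+    ~# sudo sh -c 'nmcli con down "eth0" && nmcli con up ovs-port-eth0-int && nmcli con up br1-int'
+    ```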
+5. After creation, you can view information similar to the following on each node.
+
+    ```bash
+    ~# nmcli c
+    br1-int            dbb1c9be-e1ab-4659-8d4b-564e3f8858fa  ovs-interface  br1
+    br1                a85626c1-2392-443b-a767-f86a57a1cff5  ovs-bridge     br1
+    br1-port           fe30170f-32d2-489e-9ca3-62c1f5371c6c  ovs-port       br1-port
+    ovs-port-eth0      a43771a9-d840-4d2d-b1c3-c501a6da80ed  ovs-port       ovs-port-eth0
+    ovs-port-eth0-int  1334f49b-dae4-4225-830b-4d101ab6fad6  ethernet       eth0
+
+    ~# ovs-vsctl show
+    203dd6d0-45f4-4137-955e-c4c36b9709e6
+        Bridge br1
+            Port ovs-port-eth0
+                Interface eth0
+                    type: system
+            Port br1-port
+                Interface br1
+                    type: internal
+        ovs_version: "3.2.1"
+
+    ~# ip a show br1
+    208: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
+        link/ether 00:50:56:b4:5f:fd brd ff:ff:ff:ff:ff:ff
+        inet 172.18.10.10/16 brd 172.18.255.255 scope global noprefixroute br1
+           valid_lft forever preferred_lft forever
+        inet6 fe80::4f28:8ef1:6b82:a9e4/64 scope link noprefixroute
+           valid_lft forever preferred_lft forever
+    ```
+
+## Install Spiderpool
+
+1. Install Spiderpool.
+
+    ```bash
+    helm repo add spiderpool https://spidernet-io.github.io/spiderpool
+    helm repo update spiderpool
+    helm install spiderpool spiderpool/spiderpool --namespace kube-system --set multus.multusCNI.defaultCniCRName="ovs-conf" --set plugins.installOvsCNI=true
+    ```
+
+    > If ovs-cni is not installed, you can install it by specifying the Helm parameter `--set plugins.installOvsCNI=true`.
+    >
+    > If you are a mainland China user who cannot access ghcr.io, you can specify the parameter `--set global.imageRegistryOverride=ghcr.m.daocloud.io` to avoid image pulling failures for Spiderpool.
+    >
+    > Specify the name of the NetworkAttachmentDefinition instance for the default CNI used by Multus via `multus.multusCNI.defaultCniCRName`. If the `multus.multusCNI.defaultCniCRName` option is provided, an empty NetworkAttachmentDefinition instance will be automatically generated upon installation. Otherwise, Multus will attempt to create a NetworkAttachmentDefinition instance based on the first CNI configuration found in the /etc/cni/net.d directory. If no suitable configuration is found, a NetworkAttachmentDefinition instance named `default` will be created to complete the installation of Multus.
+
+2. Please check if `Spidercoordinator.status.phase` is `Synced`:
+
+    ```shell
+    ~# kubectl get spidercoordinators.spiderpool.spidernet.io default -o yaml
+    apiVersion: spiderpool.spidernet.io/v2beta1
+    kind: SpiderCoordinator
+    metadata:
+      finalizers:
+      - spiderpool.spidernet.io
+      name: default
+    spec:
+      detectGateway: false
+      detectIPConflict: false
+      hijackCIDR:
+      - 169.254.0.0/16
+      hostRPFilter: 0
+      hostRuleTable: 500
+      mode: auto
+      podCIDRType: calico
+      podDefaultRouteNIC: ""
+      podMACPrefix: ""
+      tunePodRoutes: true
+    status:
+      overlayPodCIDR:
+      - 10.244.64.0/18
+      phase: Synced
+      serviceCIDR:
+      - 10.233.0.0/18
+    ```
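+
+    To inspect just the phase, a jsonpath query like the following can be used:
+
+    ```shell
+    ~# kubectl get spidercoordinators.spiderpool.spidernet.io default -o jsonpath='{.status.phase}'
+    Synced
+    ```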
+    At present:
+
+    * Spiderpool prioritizes obtaining the cluster's Pod and Service subnets by querying the kube-system/kubeadm-config ConfigMap.
+    * If the kubeadm-config ConfigMap does not exist and the cluster subnets therefore cannot be obtained, Spiderpool will attempt to retrieve the cluster Pod and Service subnets from the kube-controller-manager Pod.
+
+    If the kube-controller-manager component in your cluster runs in systemd mode instead of as a static Pod, Spiderpool still cannot retrieve the cluster's subnet information.
+
+    If both of the above methods fail, Spiderpool will synchronize status.phase as NotReady, preventing Pod creation. To address such situations, we can manually create the kubeadm-config ConfigMap and correctly configure the cluster's subnet information:
+
+    ```shell
+    export POD_SUBNET=<POD_SUBNET>
+    export SERVICE_SUBNET=<SERVICE_SUBNET>
+    cat << EOF | kubectl apply -f -
+    apiVersion: v1
+    kind: ConfigMap
+    metadata:
+      name: kubeadm-config
+      namespace: kube-system
+    data:
+      ClusterConfiguration: |
+        networking:
+          podSubnet: ${POD_SUBNET}
+          serviceSubnet: ${SERVICE_SUBNET}
+    EOF
+    ```
+
+    Once created, Spiderpool will automatically sync its status.

 3. Create a SpiderIPPool instance.
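+
+    For reference, a minimal SpiderIPPool sketch is shown below; the subnet, IP range, and gateway are placeholders and must match your underlay network:
+
+    ```yaml
+    apiVersion: spiderpool.spidernet.io/v2beta1
+    kind: SpiderIPPool
+    metadata:
+      name: ippool-test
+    spec:
+      subnet: 172.18.0.0/16
+      ips:
+        - 172.18.30.131-172.18.30.140
+      gateway: 172.18.0.1
+    ```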