
[BUG] Node Slice Fast IPAM does not work with a large number of pods #558

Open · adilGhaffarDev opened this issue Feb 10, 2025 · 1 comment

adilGhaffarDev commented Feb 10, 2025

Describe the bug
When scaling a deployment to 100 replicas with multiple NetworkAttachmentDefinitions (eight in the reproduction below) in its pod template annotations and Node Slice Fast IPAM enabled, the scaling process stalls at roughly 50 pods.

Expected behavior
We should be able to scale the deployment up to the maximum number of pods allowed per node.
The same scenario works fine when Node Slice Fast IPAM is disabled.

To Reproduce
Steps to reproduce the behavior:

  • Run hack/e2e-setup-kind-cluster.sh -n 3

  • Once the cluster is ready, deploy the NADs (node slice sizing is discussed after the steps):

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: range1
  namespace: kube-system
spec:
  config: '{
    "type": "macvlan",
    "cniVersion": "0.3.1",
    "name": "macvlan",
    "ipam": {
      "type": "whereabouts",
      "log_file" : "/var/log/whereabouts.log",
      "log_level" : "debug",
      "range": "2.2.2.0/24",
      "node_slice_size": "/26"
    }}'
---

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: range2
  namespace: kube-system
spec:
  config: '{
    "type": "macvlan",
    "cniVersion": "0.3.1",
    "name": "macvlan",
    "ipam": {
      "type": "whereabouts",
      "log_file" : "/var/log/whereabouts.log",
      "log_level" : "debug",
      "range": "3.3.3.0/24",
      "node_slice_size": "/26"
    }}'
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: range3
  namespace: kube-system
spec:
  config: '{
    "type": "macvlan",
    "cniVersion": "0.3.1",
    "name": "macvlan",
    "ipam": {
      "type": "whereabouts",
      "log_file" : "/var/log/whereabouts.log",
      "log_level" : "debug",
      "range": "4.4.4.0/24",
      "node_slice_size": "/26"
    }}'
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: range4
  namespace: kube-system
spec:
  config: '{
    "type": "macvlan",
    "cniVersion": "0.3.1",
    "name": "macvlan",
    "ipam": {
      "type": "whereabouts",
      "log_file" : "/var/log/whereabouts.log",
      "log_level" : "debug",
      "range": "5.5.5.0/24",
      "node_slice_size": "/26"
    }}'
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: range5
  namespace: kube-system
spec:
  config: '{
    "type": "macvlan",
    "cniVersion": "0.3.1",
    "name": "macvlan",
    "ipam": {
      "type": "whereabouts",
      "log_file" : "/var/log/whereabouts.log",
      "log_level" : "debug",
      "range": "6.6.6.0/24",
      "node_slice_size": "/26"
    }}'
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: range6
  namespace: kube-system
spec:
  config: '{
    "type": "macvlan",
    "cniVersion": "0.3.1",
    "name": "macvlan",
    "ipam": {
      "type": "whereabouts",
      "log_file" : "/var/log/whereabouts.log",
      "log_level" : "debug",
      "range": "7.7.7.0/24",
      "node_slice_size": "/26"
    }}'
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: range7
  namespace: kube-system
spec:
  config: '{
    "type": "macvlan",
    "cniVersion": "0.3.1",
    "name": "macvlan",
    "ipam": {
      "type": "whereabouts",
      "log_file" : "/var/log/whereabouts.log",
      "log_level" : "debug",
      "range": "8.8.8.0/24",
      "node_slice_size": "/26"
    }}'
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: range8
  namespace: kube-system
spec:
  config: '{
    "type": "macvlan",
    "cniVersion": "0.3.1",
    "name": "macvlan",
    "ipam": {
      "type": "whereabouts",
      "log_file" : "/var/log/whereabouts.log",
      "log_level" : "debug",
      "range": "9.9.9.0/24",
      "node_slice_size": "/26"
    }}'
---
  • Now deploy the nginx deployment that we will scale:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n-dep
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        k8s.v1.cni.cncf.io/networks: |-
          [
            { 
              "name": "range1", 
              "namespace": "kube-system"
            },
            { 
              "name": "range2", 
              "namespace": "kube-system"
            },
            { 
              "name": "range3", 
              "namespace": "kube-system"
            },
            { 
              "name": "range4", 
              "namespace": "kube-system"
            },
            { 
              "name": "range5", 
              "namespace": "kube-system"
            },
            { 
              "name": "range6", 
              "namespace": "kube-system"
            },
            { 
              "name": "range7", 
              "namespace": "kube-system"
            },
            { 
              "name": "range8", 
              "namespace": "kube-system"
            }
          ]
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
  • Scale the nginx deployment to 100 replicas:
    kubectl scale deployment --replicas=100 n-dep

This will get stuck after 40-45 pods.
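
For scale: each /24 range sliced with node_slice_size "/26" yields 2^(26-24) = 4 node slices of 2^(32-26) = 64 addresses each, i.e. roughly 62 assignable IPs per node per network once a couple of reserved addresses are subtracted (the exact reservation count is an assumption here, not taken from whereabouts). That is comfortably more than the per-node share of 100 pods on this cluster, so the stall does not look like plain IP exhaustion. A minimal sketch of that arithmetic:

// Back-of-the-envelope node slice capacity. Assumes whereabouts carves the
// range into equal, contiguous slices and reserves ~2 addresses per slice;
// both are assumptions for illustration, not whereabouts' exact accounting.
package main

import "fmt"

func main() {
	rangeBits, sliceBits := 24, 26         // "range": x.x.x.0/24, "node_slice_size": "/26"
	slices := 1 << (sliceBits - rangeBits) // 4 node slices per /24 range
	perSlice := 1<<(32-sliceBits) - 2      // ~62 usable addresses per slice
	fmt.Printf("node slices per /24: %d, usable IPs per node: ~%d\n", slices, perSlice)
}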

In the whereabouts-controller I see the following errors:

2025-02-10T18:28:43Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
2025-02-10T18:28:43Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
E0210 18:28:43.908723       1 controller.go:285] "Unhandled Error" err="error syncing 'kube-system/range2': found IPAM conf mismatch for network-attachment-definitions with same network name, requeuing" logger="UnhandledError"
2025-02-10T18:28:43Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
2025-02-10T18:28:43Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
E0210 18:28:43.937388       1 controller.go:285] "Unhandled Error" err="error syncing 'kube-system/range3': found IPAM conf mismatch for network-attachment-definitions with same network name, requeuing" logger="UnhandledError"
2025-02-10T18:28:44Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
2025-02-10T18:28:44Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
E0210 18:28:44.029079       1 controller.go:285] "Unhandled Error" err="error syncing 'kube-system/range4': found IPAM conf mismatch for network-attachment-definitions with same network name, requeuing" logger="UnhandledError"
2025-02-10T18:28:44Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
2025-02-10T18:28:44Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
E0210 18:28:44.058722       1 controller.go:285] "Unhandled Error" err="error syncing 'kube-system/range5': found IPAM conf mismatch for network-attachment-definitions with same network name, requeuing" logger="UnhandledError"
2025-02-10T18:28:44Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
2025-02-10T18:28:44Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
E0210 18:28:44.063360       1 controller.go:285] "Unhandled Error" err="error syncing 'kube-system/range6': found IPAM conf mismatch for network-attachment-definitions with same network name, requeuing" logger="UnhandledError"
2025-02-10T18:28:44Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
2025-02-10T18:28:44Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
E0210 18:28:44.066067       1 controller.go:285] "Unhandled Error" err="error syncing 'kube-system/range7': found IPAM conf mismatch for network-attachment-definitions with same network name, requeuing" logger="UnhandledError"
2025-02-10T18:28:44Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
2025-02-10T18:28:44Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
2025-02-10T18:28:44Z [debug] Used defaults from parsed flat file config @ /host/etc/cni/net.d/whereabouts.d/whereabouts.conf
E0210 18:28:44.067507       1 controller.go:285] "Unhandled Error" err="error syncing 'kube-system/range8': found IPAM conf mismatch for network-attachment-definitions with same network name, requeuing" logger="UnhandledError"
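
Note that all eight NADs above share the same CNI network name ("name": "macvlan") while carrying different IPAM ranges. If the node-slice controller groups NetworkAttachmentDefinitions by that network name, eight ranges under one name would look exactly like the "IPAM conf mismatch ... with same network name" errors above. A hypothetical standalone check (not whereabouts code) that makes the collision visible:

// Hypothetical check: group NAD CNI configs by their "name" field.
// The two configs shown stand in for the eight NADs above; all share
// the name "macvlan" while the IPAM ranges differ.
package main

import (
	"encoding/json"
	"fmt"
)

type cniConf struct {
	Name string `json:"name"`
	IPAM struct {
		Range string `json:"range"`
	} `json:"ipam"`
}

func main() {
	configs := []string{
		`{"name": "macvlan", "ipam": {"range": "2.2.2.0/24"}}`,
		`{"name": "macvlan", "ipam": {"range": "3.3.3.0/24"}}`,
		// ranges 4.4.4.0/24 through 9.9.9.0/24 follow the same pattern
	}
	byName := map[string][]string{}
	for _, raw := range configs {
		var c cniConf
		if err := json.Unmarshal([]byte(raw), &c); err != nil {
			panic(err)
		}
		byName[c.Name] = append(byName[c.Name], c.IPAM.Range)
	}
	for name, ranges := range byName {
		if len(ranges) > 1 {
			fmt.Printf("network name %q maps to %d ranges: %v\n", name, len(ranges), ranges)
		}
	}
}

If that grouping is indeed the trigger, giving each NAD a distinct CNI name (e.g. macvlan-range1 ... macvlan-range8) might avoid the requeue loop, though that is untested speculation.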

I see the following errors in the Whereabouts DaemonSet pods:

2025-02-10T14:46:25Z [debug] using pool identifier: {2.2.2.128/26  whereabouts-worker}
2025-02-10T14:46:25Z [error] IPAM error reading pool allocations (attempt: 19): k8s get error: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
2025-02-10T14:46:25Z [debug] IPManagement: [], k8s get error: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
2025-02-10T14:46:25Z [error] Error at storage engine: k8s get error: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
2025-02-10T14:46:26Z [debug] Used defaults from parsed flat file config @ /etc/cni/net.d/whereabouts.d/whereabouts.conf
2025-02-10T19:05:45Z [debug] Get overlappingRangewide allocation; normalized IP: "3.3.3.13", IP: "3.3.3.13", networkName: ""
2025-02-10T19:05:45Z [error] k8s get OverlappingRangeIPReservation error: Get "https://10.96.0.1:443/apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/kube-system/overlappingrangeipreservations/3.3.3.13": context deadline exceeded
2025-02-10T19:05:45Z [error] Error getting cluster wide IP allocation: k8s get OverlappingRangeIPReservation error: Get "https://10.96.0.1:443/apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/kube-system/overlappingrangeipreservations/3.3.3.13": context deadline exceeded
2025-02-10T19:05:45Z [debug] IPManagement: [], k8s get OverlappingRangeIPReservation error: Get "https://10.96.0.1:443/apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/kube-system/overlappingrangeipreservations/3.3.3.13": context deadline exceeded
2025-02-10T19:05:45Z [error] Error at storage engine: k8s get OverlappingRangeIPReservation error: Get "https://10.96.0.1:443/apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/kube-system/overlappingrangeipreservations/3.3.3.13": context deadline exceeded
2025-02-10T19:05:45Z [debug] Used defaults from parsed flat file config @ /etc/cni/net.d/whereabouts.d/whereabouts.conf
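
The "client rate limiter Wait returned an error" lines point at client-go's default client-side throttling (QPS 5, Burst 10 when a rest.Config leaves them unset): with eight networks per pod and dozens of pods being scheduled at once, allocation requests queue behind the limiter until the CNI context deadline expires, which would also explain the OverlappingRangeIPReservation GETs timing out. As a sketch only (whereabouts' actual client construction may differ, and the numbers are illustrative, not tuned), raising the limits on a rest.Config looks like this:

// Sketch: raising client-go's client-side rate limits. The kubeconfig path
// matches the whereabouts.conf below; the QPS/Burst values are illustrative
// assumptions, not recommendations.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newClientset() (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig")
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go defaults to 5 when left at zero
	cfg.Burst = 100 // client-go defaults to 10 when left at zero
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newClientset(); err != nil {
		panic(err)
	}
}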

Environment:

  • Whereabouts version: main
  • Kubernetes version (use kubectl version): v1.32.0
  • Network-attachment-definition: Added in reproduction steps
  • Whereabouts configuration (on the host):
{
  "datastore": "kubernetes",
  "kubernetes": {
    "kubeconfig": "/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig"
  },
  "reconciler_cron_expression": "30 4 * * *"
}

  • OS (e.g. from /etc/os-release): Ubuntu
  • Kernel (e.g. uname -a): 5.15.0-130-generic Ubuntu SMP Wed Dec 18 17:59:53 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
  • Others: N/A


adilGhaffarDev (Author) commented:

cc @ivelichkovich, please check.
