Describe the bug
We are seeing a situation in our test cluster where pod references are left undeleted during a scale-down of a deployment, and the number of leftover references grows if we perform multiple scale-down/scale-up operations. After scaling the deployment down from 200 to 1, most of the pods stay in the Terminating state and need a long time to be deleted from the cluster. After the deletion, some pod references are still visible and may increase after multiple scale up/down cycles. Here are some queries run during the process:
Scale down from 200 to 1, while most of the pods are in the Terminating state:
date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 2.2.2.0-24 -o yaml | grep -c podref
Thu Jun 20 06:56:08 UTC 2024
161
date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 3.3.3.0-24 -o yaml | grep -c podref
Thu Jun 20 06:53:34 UTC 2024
121
date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 4.4.4.0-24 -o yaml | grep -c podref
Thu Jun 20 06:53:55 UTC 2024
66
date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 5.5.5.0-24 -o yaml | grep -c podref
Thu Jun 20 06:54:36 UTC 2024
11
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 6.6.6.0-24 -o yaml | grep -c podref
Thu Jun 20 06:54:56 UTC 2024
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 7.7.7.0-24 -o yaml | grep -c podref
Thu Jun 20 06:55:15 UTC 2024
1
date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 8.8.8.0-24 -o yaml | grep -c podref
Thu Jun 20 06:55:35 UTC 2024
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 9.9.9.0-24 -o yaml | grep -c podref
Thu Jun 20 06:55:55 UTC 2024
1
When the scale down from 200 to 1 is completed and all replicas are deleted:
date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 2.2.2.0-24 -o yaml | grep -c podref
Thu Jun 20 08:30:53 UTC 2024
2
date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 3.3.3.0-24 -o yaml | grep -c podref
Thu Jun 20 08:32:41 UTC 2024
2
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 4.4.4.0-24 -o yaml | grep -c podref
2
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 5.5.5.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 6.6.6.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 7.7.7.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 8.8.8.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 9.9.9.0-24 -o yaml | grep -c podref
1
We can see one extra pod reference still visible in the 2.2.2.0/24, 3.3.3.0/24 and 4.4.4.0/24 ranges. This number grows if we do multiple scale up/down operations on the deployment.
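For reference, dropping the -c flag lists the individual entries, which makes it possible to see which podref values point at pods that no longer exist; the deployment name in the second command is a placeholder:
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 2.2.2.0-24 -o yaml | grep podref
kubectl get pods -o wide | grep <deployment-name>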
Expected behavior
The podReference entries of deleted pods should be removed from the list, so that only the running pod's podReference stays visible. For example, the expected output in this case, where only one pod is running after the scale down, would be:
date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 2.2.2.0-24 -o yaml | grep -c podref
Thu Jun 20 08:30:53 UTC 2024
1
date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 3.3.3.0-24 -o yaml | grep -c podref
Thu Jun 20 08:32:41 UTC 2024
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 4.4.4.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 5.5.5.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 6.6.6.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 7.7.7.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 8.8.8.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 9.9.9.0-24 -o yaml | grep -c podref
1
To Reproduce
Steps to reproduce the behavior:
Set up a test cluster using whereabouts' own kind setup by running make kind (1 control plane and 2 workers)
Apply NetworkAttachmentDefinitions with different ranges such as range1, range2, ..., range8 (a sketch of one such definition is shown after this list)
Apply a deployment and scale it to 200
Scale the deployment down to 1 and wait for all pods to be removed completely. Pods hang in the Terminating state for a long time before eventually being removed
Then check the ippools using the following command: kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 9.9.9.0-24 -o yaml | grep -c podref. Extra podReference entries remain visible that should have been removed when the pods were deleted.
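For illustration, a minimal sketch of one such NetworkAttachmentDefinition is shown below; the macvlan type, the master interface eth0 and the resource name are assumptions, not taken from the actual test setup:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: whereabouts-range1
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "2.2.2.0/24"
      }
    }
The deployment's pod template then attaches the networks through the k8s.v1.cni.cncf.io/networks annotation, for example:
  annotations:
    k8s.v1.cni.cncf.io/networks: whereabouts-range1,whereabouts-range2,whereabouts-range3
(and so on for the remaining ranges), so that each pod gets one allocation per range.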
Environment:
Whereabouts version: Latest
Kubernetes version (use kubectl version): 1.30
Network-attachment-definition: 8 ranges applied, 2.2.2.0/24 through 9.9.9.0/24
Whereabouts configuration (on the host): N/A
OS (e.g. from /etc/os-release): N/A
Kernel (e.g. uname -a): N/A
Others: N/A
Additional info / context
Add any other information / context about the problem here.