fix: handle pod deletion in kubearmor controller #1952

Open

Aryan-sharma11 wants to merge 11 commits into main from fix-pod-deletion
Conversation

Aryan-sharma11
Member

@Aryan-sharma11 Aryan-sharma11 commented Jan 24, 2025

Purpose of PR?:

Fixes #1935

Does this PR introduce a breaking change?
No

Changes included
1) Handle pod restarts

  • Currently, in the case of the AppArmor enforcer, we delete all existing pods in the cluster to add the AppArmor security annotations. Since this can cause application downtime, the new approach implemented in this PR is to update the deployments with the annotation "kubearmor.kubernetes.io/restartedAt", which triggers a rolling restart of their pods (see the sketch below).
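
A minimal sketch (not the PR's actual code) of how such a rolling restart can be triggered with client-go: patch the Deployment's pod template with the restartedAt annotation, the same mechanism `kubectl rollout restart` uses. The function name and the choice of a strategic merge patch are illustrative assumptions.

```go
package restart

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// restartDeployment patches the Deployment's pod template with the
// kubearmor.kubernetes.io/restartedAt annotation. Changing the pod template
// makes the Deployment controller roll its pods gradually instead of the
// KubeArmor controller deleting them outright.
func restartDeployment(ctx context.Context, c kubernetes.Interface, namespace, name string) error {
	patch := fmt.Sprintf(
		`{"spec":{"template":{"metadata":{"annotations":{"kubearmor.kubernetes.io/restartedAt":%q}}}}}`,
		time.Now().Format(time.RFC3339),
	)
	_, err := c.AppsV1().Deployments(namespace).Patch(
		ctx, name, types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{},
	)
	return err
}
```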

2) NodeWatcher

  • The controller already has a node watcher that keeps track of the AppArmor nodes present in the cluster. With this PR it also tracks whether the KubeArmor DaemonSet is present on each node and, if present, whether it is in the Running state. The AppArmor annotations are then added only to pods on nodes where KubeArmor is running, so pods on nodes without KubeArmor are not annotated (see the sketch below).
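
As a rough illustration of the per-node check (assumptions: the DaemonSet pods carry the label `kubearmor-app=kubearmor` and run in the `kubearmor` namespace; the real controller keeps this state in its watcher rather than listing on demand):

```go
package nodewatch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// kubearmorRunningOnNode reports whether a KubeArmor DaemonSet pod is
// scheduled on the given node and is currently in the Running phase.
func kubearmorRunningOnNode(ctx context.Context, c kubernetes.Interface, nodeName string) (bool, error) {
	pods, err := c.CoreV1().Pods("kubearmor").List(ctx, metav1.ListOptions{
		LabelSelector: "kubearmor-app=kubearmor",
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			return true, nil
		}
	}
	return false, nil
}
```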

3) Pod Mutation

  • Pod webhook mutation is currently implemented for pod create and update events. Since these events are generated before the pod is scheduled onto a node, we were missing the node info. With this PR we also watch pod/binding events: they are generated after the pod is scheduled and before it starts, so the binding carries the target node info (see the sketch after this list).
    This will help us optimise annotation handling for hybrid clusters (AppArmor + BPFLSM enforcers), improving overall controller performance.
  • We will still watch pod create events in the webhook mutation for the case where the pod already has a node name at creation time, since such a pod skips the scheduling step.
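
For reference, a minimal sketch (function name and controller-runtime wiring are assumptions, not the PR's actual handler) of how a webhook registered for the pods/binding subresource can recover the target node before the pod starts:

```go
package webhook

import (
	"encoding/json"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)

// targetNodeFromBinding extracts the node name from a pods/binding admission
// request. The Binding object names the scheduled node in Target.Name, which
// is not yet set on the Pod object at create time.
func targetNodeFromBinding(req admission.Request) (string, error) {
	binding := &corev1.Binding{}
	if err := json.Unmarshal(req.Object.Raw, binding); err != nil {
		return "", err
	}
	return binding.Target.Name, nil
}
```

With the node name in hand, the controller can consult the node watcher above and add the AppArmor annotations only when that node's enforcer is AppArmor and KubeArmor is running there.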

ref

@Aryan-sharma11 Aryan-sharma11 marked this pull request as draft January 24, 2025 04:44
@Aryan-sharma11 Aryan-sharma11 force-pushed the fix-pod-deletion branch 2 times, most recently from ae3d494 to ac30593 on February 3, 2025 05:45
@Aryan-sharma11 Aryan-sharma11 force-pushed the fix-pod-deletion branch 8 times, most recently from adc8545 to 03e8a60 on February 5, 2025 10:05
@Aryan-sharma11 Aryan-sharma11 changed the title fix: [WIP] handle pod deletion in kubearmor controller fix: handle pod deletion in kubearmor controller Feb 5, 2025
@Aryan-sharma11 Aryan-sharma11 force-pushed the fix-pod-deletion branch 5 times, most recently from 65bf9a2 to ed093ba on February 6, 2025 04:01
@Aryan-sharma11 Aryan-sharma11 marked this pull request as ready for review February 6, 2025 04:04
Development

Successfully merging this pull request may close these issues.

Kubearmor restarts all pods in a Kubernetes cluster after installation.