feat(k8s): implement wildcard All Namespaces discovery #725
base: main
Conversation
Force-pushed from a9275b3 to 1159e33
This PR/issue depends on:

/build_test

Workflow started at 11/28/2024, 3:10:23 PM. View Actions Run.
No OpenAPI schema changes detected. |
No GraphQL schema changes detected. |
Force-pushed from 1159e33 to 908deda
CI build and push: All tests pass ✅
Just have some questions while checking cryostatio/cryostat-helm#213 :D
// TODO we should not need to force manual re-syncs this way - the Informer is already
// supposed to resync itself.
Just curious here: is it now safe to remove this manual resync? If not, then in the all-namespaces case should we also list the namespaces and resync them too?
.inAnyNamespace()
.inform(
        KubeApiDiscovery.this,
        informerResyncPeriod.toMillis()));
Maybe add a logger.debugv here?

logger.debugv(
        "Started Endpoints SharedInformer for all namespaces with resync period {0}",
        informerResyncPeriod);
private boolean watchAllNamespaces() {
    return kubeConfig.getWatchNamespaces().stream().anyMatch(ns -> ALL_NAMESPACES.equals(ns));
}
nit: would it make sense to move this method and ALL_NAMESPACES into the KubeConfig class?
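A rough sketch of what that suggestion could look like, purely for illustration: the "*" value for ALL_NAMESPACES is inferred from the PR description, and the constructor/field layout below is a placeholder rather than the real KubeConfig.

import java.util.List;

// Hypothetical sketch: the wildcard constant and the check live together in KubeConfig,
// so discovery code only needs to call kubeConfig.watchAllNamespaces().
public class KubeConfig {

    // Assumed wildcard value; the PR description says "*" means "All Namespaces".
    static final String ALL_NAMESPACES = "*";

    private final List<String> watchNamespaces;

    KubeConfig(List<String> watchNamespaces) {
        this.watchNamespaces = watchNamespaces;
    }

    public List<String> getWatchNamespaces() {
        return watchNamespaces;
    }

    public boolean watchAllNamespaces() {
        return getWatchNamespaces().stream().anyMatch(ALL_NAMESPACES::equals);
    }
}

This would keep the wildcard semantics in one place, so KubeApiDiscovery could simply branch on kubeConfig.watchAllNamespaces().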
Welcome to Cryostat! 👋
Before contributing, make sure you have:
- based your work on the main branch
- used a commit prefix from [chore, ci, docs, feat, fix, test]

To recreate commits with GPG signature:
git fetch upstream && git rebase --force --gpg-sign upstream/main
Fixes: #722
Depends on #689
Based on #689
Description of the change:
Handles the * value in Kubernetes discovery namespaces as a wildcard indicating "All Namespaces". When this value is found, rather than creating an Endpoints Informer for each target namespace, Cryostat creates a single Informer for Endpoints in any namespace in the cluster. A few pieces of supporting logic are modified to suit this change, e.g. grabbing the client-side cache (in Cryostat application memory) and doing per-namespace filtering of Endpoints objects found in that cluster-wide Informer, rather than having each Informer's cache already be segmented by namespace.
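For illustration, here is a minimal, self-contained sketch of that shape using the fabric8 Kubernetes client. Only the .inAnyNamespace().inform(...) call mirrors the diff quoted above; every class, field, and method name here is a hypothetical placeholder rather than Cryostat's actual code.

import java.time.Duration;
import java.util.List;
import java.util.stream.Collectors;

import io.fabric8.kubernetes.api.model.Endpoints;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.informers.ResourceEventHandler;
import io.fabric8.kubernetes.client.informers.SharedIndexInformer;

// Hypothetical illustration: one cluster-wide Endpoints informer instead of one informer per namespace.
public class AllNamespacesEndpointsWatcher implements ResourceEventHandler<Endpoints> {

    private final KubernetesClient client = new KubernetesClientBuilder().build();
    private final Duration resyncPeriod = Duration.ofSeconds(30); // placeholder resync period
    private SharedIndexInformer<Endpoints> informer;

    public void start() {
        // Single informer watching Endpoints in any namespace.
        informer = client.endpoints()
                .inAnyNamespace()
                .inform(this, resyncPeriod.toMillis());
    }

    // Per-namespace filtering over the single cluster-wide client-side cache,
    // since the cache is no longer segmented by namespace.
    public List<Endpoints> endpointsIn(String namespace) {
        return informer.getStore().list().stream()
                .filter(ep -> namespace.equals(ep.getMetadata().getNamespace()))
                .collect(Collectors.toList());
    }

    @Override
    public void onAdd(Endpoints endpoints) {
        // react to newly observed Endpoints in any namespace
    }

    @Override
    public void onUpdate(Endpoints oldEndpoints, Endpoints newEndpoints) {
        // react to Endpoints updates
    }

    @Override
    public void onDelete(Endpoints endpoints, boolean deletedFinalStateUnknown) {
        // react to Endpoints deletions
    }
}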
Motivation for the change:
When suitably deployed in a Kubernetes cluster - with an accompanying ClusterRole(Binding) allowing the serviceaccount to discover Endpoints cluster-wide - this lets the user install a single Cryostat instance and gain visibility into applications in any namespace, including namespaces created or deleted within Cryostat's lifetime. Users should carefully select the port names/numbers that will be deemed compatible targets, since Cryostat will see every Endpoints object in the cluster. Cryostat may also receive a large number of update events from Kubernetes if the cluster is large and active with many Endpoints changes, which can be burdensome and resource intensive.
How to manually test: