There are multiple scenarios where it would be useful to look at external metrics to assess service readiness, in addition to the current logic. For instance, nginx ingress controller metrics could reveal conditions such as abnormal performance or request counts, based on which the service could be marked unready.
A way to support this would be to add the ability to query Prometheus metrics, somewhat like KEDA's Prometheus scaler does. When designing this implementation, it should be considered whether support for Prometheus queries should be baked directly into k8gb or implemented through an external call, such as:
This implementation should not address single endpoint readiness. That should still be handled by standard Kubernetes readiness probes, even when they call external tools.
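For illustration, a minimal sketch of what the "baked directly into k8gb" option could look like, using the official prometheus/client_golang API client. The helper name, query, and threshold policy are assumptions made for this sketch, not an existing k8gb API:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
	"github.com/prometheus/common/model"
)

// serviceHealthyByMetric runs a PromQL query and compares the first returned
// sample against a threshold. Hypothetical helper, not part of k8gb today.
func serviceHealthyByMetric(ctx context.Context, promAddr, query string, threshold float64) (bool, error) {
	client, err := api.NewClient(api.Config{Address: promAddr})
	if err != nil {
		return false, err
	}
	v1api := promv1.NewAPI(client)

	result, warnings, err := v1api.Query(ctx, query, time.Now())
	if err != nil {
		return false, err
	}
	if len(warnings) > 0 {
		fmt.Println("prometheus warnings:", warnings)
	}

	vec, ok := result.(model.Vector)
	if !ok || len(vec) == 0 {
		// No data: treat as unready rather than guessing.
		return false, fmt.Errorf("no samples returned for query %q", query)
	}
	// Example policy: ready while the observed value stays below the threshold.
	return float64(vec[0].Value) < threshold, nil
}
```

With a query such as `sum(rate(nginx_ingress_controller_requests{status=~"5.."}[1m]))` this could mark the service unready when the error rate crosses a threshold; whether that logic lives inside k8gb or behind an external call is exactly the design question above.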
To take advantage of this ecosystem we could add a metrics server client to K8GB. We would then only need to query a single API while benefiting from countless integrations.
This repo may have a client that we can use: https://github.com/kubernetes/metrics. I will read up more in the coming days.
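For reference, a rough sketch of how the clientset from that repo could be used to read resource metrics (metrics.k8s.io) for ingress controller pods. The namespace and label selector below are placeholders, and custom or external metrics would use the sibling clients in the same repo:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
	metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
)

func main() {
	// In-cluster configuration; the controller would already hold a rest.Config.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	mc, err := metricsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List resource metrics for ingress controller pods (namespace and selector are placeholders).
	podMetrics, err := mc.MetricsV1beta1().PodMetricses("ingress-nginx").
		List(context.TODO(), metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=ingress-nginx"})
	if err != nil {
		panic(err)
	}
	for _, pm := range podMetrics.Items {
		for _, c := range pm.Containers {
			fmt.Printf("%s/%s cpu=%s memory=%s\n",
				pm.Name, c.Name, c.Usage.Cpu().String(), c.Usage.Memory().String())
		}
	}
}
```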
abaguas added a commit to abaguas/k8gb that referenced this issue on Nov 11, 2024
While investigating how we can add metric-based health checks (for k8gb-io#1745), I noticed that we compute health checks twice. This PR reduces it to a single call.
Signed-off-by: Andre Aguas <[email protected]>