[Helm] Missing hostNetwork: true for managed agent in perNode preset #6324
This is the biggest reason we do this by default for Fleet-managed agents: Fleet identifies agents by hostname in most of the UI, and pod names aren't stable and change constantly. Using the node hostname solves this.
Sure, so the DaemonSet of Fleet mode should have `hostNetwork: true`, right?
Right, yes. We could change that port 6789 to port 0 in the agent configuration, but that would be a general source of surprising conflicts for people in other integrations that bind to network ports. So it's best to leave `hostNetwork` false unless it solves an actual functional problem for data collection; which hostname gets displayed is more of a nice-to-have for the UI.
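For readers following along: port 6789 is the default for the agent's internal gRPC control server, which is presumably the setting this comment refers to. A minimal sketch of the change being discussed — treat the exact keys as an assumption about the reference `elastic-agent.yml`:

```yaml
# elastic-agent.yml (sketch; exact key names are an assumption)
agent.grpc:
  # Loopback address the agent's control-plane gRPC server listens on.
  address: localhost
  # Default is 6789. Setting 0 would let the OS pick a free ephemeral
  # port, avoiding collisions when the pod shares the node's network
  # namespace via hostNetwork: true.
  port: 0
```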
Even if we switched to port 0, would that mean that two agents would appear with the same name in Fleet?! Is this something we would want? I am assuming no, but assumptions are a little bit tricky. If we are okay with it, then maybe we can utilise other techniques, e.g.:

```yaml
env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```

to get the node name without `hostNetwork` 🙂 However, if a pod goes to another node, you know... The same can happen even with `hostNetwork: true` for anything other than a DaemonSet.
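For context, here is the downward-API approach above in a self-contained manifest. The resource name and image tag are illustrative, not taken from the chart:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elastic-agent            # illustrative name
spec:
  selector:
    matchLabels: { app: elastic-agent }
  template:
    metadata:
      labels: { app: elastic-agent }
    spec:
      containers:
        - name: agent
          image: docker.elastic.co/elastic-agent/elastic-agent:8.16.0  # illustrative tag
          env:
            # Downward API: expose the scheduling node's name to the
            # container without sharing the host network namespace.
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
```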
Yeah, this breaks down completely once the agent isn't a DaemonSet.
Then I believe the best way forward is to have this enabled by default for the DaemonSet (a.k.a. the perNode preset) used by the kubernetes and system integrations.
Sounds good to me.
The only legitimate case for `hostNetwork: true` that I can think of is a DaemonSet with the system and kubernetes integrations enabled. I don't see any need to do it for a Deployment in charge of KSM metrics or any other Elastic Agent, but I could be wrong. The issue you mention about getting new agent names over and over in the Fleet UI, when they are ephemeral and associated with pods with dynamic names, should be totally OK, as they are actually ephemeral. In 8.16, the new policy setting for automatically removing inactive agents works pretty well on Kubernetes, so I wouldn't consider that annoying anymore.
Original issue description:

When following the instructions at https://www.elastic.co/guide/en/fleet/current/example-kubernetes-fleet-managed-agent-helm.html and installing a Fleet-managed agent with the `perNode` preset, the agent works fine, but it cannot access the `kubelet` endpoint when configuring the k8s integration.

If we want this preset to work with the default values of the k8s integration, we should probably add `hostNetwork: true`. I believe `hostNetwork: true` is also needed for the `system` integration to monitor the network interfaces.

If we don't want to use `hostNetwork: true` (which I would love to), then we have to determine and document how to perform the kubelet monitoring and the system interfaces monitoring.

`hostNetwork` is apparently also needed to show the real hostnames in monitoring data instead of the pod name, which is important for infrastructure monitoring.

cc: @pkoutsovasilis ;)
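A hedged sketch of what the proposed fix might look like in the chart's values, assuming the preset keys accept pod-spec fields (the placement of `hostNetwork` and `dnsPolicy` under the preset is an assumption, not the chart's confirmed schema):

```yaml
# values.yaml (sketch; key placement under the preset is an assumption)
agent:
  presets:
    perNode:
      # Share the node's network namespace so the agent can reach the
      # kubelet endpoint and report the node's real hostname to Fleet.
      hostNetwork: true
      # Standard Kubernetes pairing with hostNetwork so in-cluster DNS
      # (e.g. for Elasticsearch or Fleet Server services) keeps working.
      dnsPolicy: ClusterFirstWithHostNet
```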