[otel] add loadbalancing exporter component #6315
Conversation
This pull request does not have a backport label. Could you fix it @rogercoll? 🙏
👍
Just a clarification, you say:

> It gives the ability to route logs, metrics or traces to different OTLP endpoints depending on the configured `routing_key`. This feature can be helpful when distributing the processing load among different collectors. For example, current APM OTLP data is directly processed with the lsminterval, elastictrace and signaltometrics components, but all those components could be moved from the collection/edge collectors to a set of collectors that only handle data transformations. One of the main benefits of the latter approach is the separation of concerns: processing collectors would be part of another resource entity that would be managed independently.
For routing data to different OTLP endpoints, the routing connector is sufficient, and much simpler. What you really want the load balancing exporter for is ensuring that data for a given partition (based on an attribute, or context) is always routed to the same instance. That's required for tail sampling traces and for global metrics aggregations, but not for data routing in general.
Is that correct, or am I misunderstanding what we need this component for?
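To make the distinction concrete, here is a minimal sketch of plain data routing with the routing connector from opentelemetry-collector-contrib. The pipeline names, the attribute key, and the exporter endpoints are placeholders for illustration, not taken from this thread:

```yaml
# Hedged sketch: plain data routing with the routing connector,
# sufficient when partition affinity is NOT required.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlp/primary:
    endpoint: primary-collector:4317   # placeholder endpoint
  otlp/secondary:
    endpoint: secondary-collector:4317 # placeholder endpoint

connectors:
  routing:
    default_pipelines: [traces/primary]
    table:
      # route matching resource data to the secondary pipeline
      - statement: route() where attributes["deployment.environment"] == "staging"
        pipelines: [traces/secondary]

service:
  pipelines:
    traces/in:
      receivers: [otlp]
      exporters: [routing]
    traces/primary:
      receivers: [routing]
      exporters: [otlp/primary]
    traces/secondary:
      receivers: [routing]
      exporters: [otlp/secondary]
```

The routing connector picks a pipeline per matching statement, but gives no guarantee that all data for a given service reaches the same collector instance; that consistent-hashing guarantee is exactly what the loadbalancing exporter adds.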
Correct, thanks for the clarification! At the moment, we are doing some tests using the "service.name" resource attribute as routing key, so their metrics aggregation happens in the same "backend/collector". This is the configuration:

```yaml
loadbalancing/other:
  routing_key: "service"
  protocol:
    otlp:
      # all options from the OTLP exporter are supported
      # except the endpoint
      timeout: 1s
  resolver:
    # use k8s service resolver, if collector runs in kubernetes environment
    k8s:
      service: opentelemetry-kube-stack-lb-collector-headless
      ports:
        - 4317
```

What we would expect by using this loadbalancing configuration is to forward each service's data to a specific instance of the collector behind the headless K8s service.
@rogercoll could you add a changelog fragment and rebase? We've had some issues with the CI today, and the current failure should clear if you do that.
Force-pushed from bada6c6 to 23e7e58
Quality Gate passed
Pinging @elastic/elastic-agent-control-plane (Team:Elastic-Agent-Control-Plane)
…6468)

* [otel] add loadbalancing exporter component (#6315)

  * feat: add loadbalancing Otel exporter
  * chore: add fragments changelog file

  Co-authored-by: Andrzej Stencel <[email protected]>
  (cherry picked from commit dbfd447)

  # Conflicts:
  #   go.mod
  #   internal/pkg/otel/components.go

* conflicts

Co-authored-by: Roger Coll <[email protected]>
Co-authored-by: Michal Pristas <[email protected]>
What does this PR do?
Adds the Otel loadbalancing exporter to the elastic-agent.
Why is it important?
It gives the ability to route logs, metrics or traces to different OTLP endpoints depending on the configured `routing_key`. This feature can be helpful when distributing the processing load among different collectors. For example, current APM OTLP data is directly processed with the lsminterval, elastictrace and signaltometrics components, but all those components could be moved from the collection/edge collectors to a set of collectors that only handle data transformations. One of the main benefits of the latter approach is the separation of concerns: processing collectors would be part of another resource entity that would be managed independently.

Sample configuration that routes traces based on their `service.name` key:
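The sample configuration itself did not survive the flattening of this page; below is a reconstruction sketch consistent with the k8s-resolver example quoted earlier in the thread, here using the static resolver with placeholder hostnames. `routing_key: "service"` partitions data by the `service.name` resource attribute:

```yaml
exporters:
  loadbalancing:
    # "service" partitions data by the service.name resource attribute,
    # so all telemetry for a given service lands on the same backend
    routing_key: "service"
    protocol:
      otlp:
        # all options from the OTLP exporter are supported,
        # except the endpoint
        timeout: 1s
    resolver:
      # static resolver with a fixed backend list; the hostnames below
      # are placeholders for this sketch
      static:
        hostnames:
          - collector-0:4317
          - collector-1:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [loadbalancing]
```

With a configuration of this shape, all traces for a given `service.name` consistently reach the same collector instance, which is what allows downstream components such as signaltometrics to aggregate per-service data on a single backend.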
Checklist
- I have added an entry in ./changelog/fragments using the changelog tool

Disruptive User Impact
How to test this PR locally
Related issues
Questions to ask yourself