[testbed] Add batcher perf tests for heavy processing #36901
+133 −146
Description
Add batching performance tests simulating heavy processing. The primary intent of these is to help verify that we're not introducing performance regressions with open-telemetry/opentelemetry-collector#8122.
I've added two additional benchmarks. The idea is to capture both the overhead of doing a lot of work in a single processor and the function-call overhead of chaining many processors.
I've also refactored the tests to get rid of duplication when generating test cases.
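A rough sketch of what such benchmarks look like is below. The names and workloads are hypothetical stand-ins, not the actual testbed code: one benchmark simulates a single processor doing heavy per-item work, the other simulates the call overhead of a long chain of lightweight processors.

```go
package main

import (
	"fmt"
	"testing"
)

// heavyProcess simulates CPU-bound work per item, standing in for a
// processor that does substantial work on each element. (Hypothetical
// sketch; the real testbed processors operate on telemetry data.)
func heavyProcess(item int) int {
	sum := item
	for i := 0; i < 1000; i++ { // busy loop simulating heavy processing
		sum = (sum*31 + i) % 1000003
	}
	return sum
}

// BenchmarkSingleHeavyProcessor measures one processor doing a lot of work.
func BenchmarkSingleHeavyProcessor(b *testing.B) {
	for i := 0; i < b.N; i++ {
		heavyProcess(i)
	}
}

// BenchmarkManyLightProcessors measures the function-call overhead of
// chaining many small processors.
func BenchmarkManyLightProcessors(b *testing.B) {
	light := func(item int) int { return item + 1 }
	for i := 0; i < b.N; i++ {
		v := i
		for p := 0; p < 100; p++ { // 100 chained lightweight stages
			v = light(v)
		}
		_ = v
	}
}

func main() {
	// testing.Benchmark lets these run outside the `go test` harness.
	fmt.Println("single heavy:", testing.Benchmark(BenchmarkSingleHeavyProcessor))
	fmt.Println("many light: ", testing.Benchmark(BenchmarkManyLightProcessors))
}
```

Comparing the two shapes of workload is what lets a change like #8122 be checked for regressions in both per-item cost and per-call cost.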
Current results
This benchmark was run with the `exporter.UsePullingBasedExporterQueueBatcher` feature gate enabled. Relative to the batch processor, the new exporter batcher loses when there are many processors in the pipeline. I suspect this is just due to nested function-call overhead, but I haven't investigated very deeply. Worth noting that if I bump the initial batch size to 100, the differences basically go away.
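The nested-call hypothesis can be illustrated with a stdlib-only sketch (hypothetical, not the collector's code): delivering items one at a time through a chain of consumers pays the full chain of function calls per item, while larger batches traverse the chain once per batch, which is consistent with the differences disappearing at batch size 100.

```go
package main

import "fmt"

// consumer is a stand-in for a pipeline stage that forwards a batch.
type consumer func(items []int)

// chain builds n nested stages, each counting its own invocation and
// forwarding to the next; the innermost stage just counts.
func chain(n int, calls *int) consumer {
	next := consumer(func(items []int) { *calls++ })
	for i := 0; i < n; i++ {
		inner := next
		next = func(items []int) { *calls++; inner(items) }
	}
	return next
}

func main() {
	const stages, items = 10, 1000

	// One item at a time: every item pays the full chain of calls.
	var perItemCalls int
	c := chain(stages, &perItemCalls)
	for i := 0; i < items; i++ {
		c([]int{i})
	}

	// Batches of 100: the chain is traversed once per batch.
	var batchedCalls int
	c = chain(stages, &batchedCalls)
	for i := 0; i < items; i += 100 {
		c(make([]int, 100))
	}

	fmt.Println("per-item calls:", perItemCalls) // 1000 batches * 11 stages = 11000
	fmt.Println("batched calls: ", batchedCalls) // 10 batches * 11 stages = 110
}
```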
Link to tracking issue
Fixes open-telemetry/opentelemetry-collector#10836