[ShareChat] Introduced the concept of uniform parallelism #1
Context
In Tardis, the autoscaler struggled to find the right balance. With heterogeneous parallelism across vertices, the autoscaler's decisions can be suboptimal: vertices are not independent, and the current parallelism of a "parent" vertex determines how much traffic its "child" vertex receives, which in turn affects the decision about the child's new parallelism. In other words, the relative parallelism of vertices can change after a scale event, and in the new situation the optimal scaling decision can look very different. For example, scaling a parent up increases the traffic its child sees, invalidating the sizing that justified the child's current parallelism.
In practice this shows up as endless "bouncing": the autoscaler scales down, then quickly realizes it needs to scale back up. The cycle never ends, no matter how hard we tune the parameters.
This PR
Introduces the concept of "uniform parallelism". To reduce the "cognitive load" on the autoscaler and prevent relative parallelism from changing over time, we simply maintain the same parallelism across all vertices, pretty much as we do in Tardis today.
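A minimal sketch of the idea, not the actual autoscaler code (all names, metric shapes, and parameters here are hypothetical): each vertex's required parallelism is estimated from its input rate, and the maximum across all vertices is applied uniformly to every vertex, so relative parallelism never drifts after a scale event.

```java
import java.util.List;

final class UniformParallelismSketch {

    /** Per-vertex metrics needed for the sizing decision (hypothetical shape). */
    record VertexMetrics(String vertexId,
                         double inputRatePerSec,
                         double processingRatePerSubtaskPerSec) {}

    /**
     * Parallelism a single vertex would need to keep up with its input,
     * with some headroom to drain backlog.
     */
    static int requiredParallelism(VertexMetrics m, double headroomFactor) {
        double needed = (m.inputRatePerSec() * headroomFactor)
                / m.processingRatePerSubtaskPerSec();
        return Math.max(1, (int) Math.ceil(needed));
    }

    /**
     * Uniform decision: size for the most demanding vertex and give every
     * vertex that same parallelism, clamped to the configured min/max.
     */
    static int uniformParallelism(List<VertexMetrics> vertices,
                                  double headroomFactor,
                                  int minParallelism,
                                  int maxParallelism) {
        int p = vertices.stream()
                .mapToInt(m -> requiredParallelism(m, headroomFactor))
                .max()
                .orElse(minParallelism);
        return Math.min(maxParallelism, Math.max(minParallelism, p));
    }
}
```

The key design point is that the decision collapses to a single number for the whole job: a child vertex can never be under-provisioned relative to its parent, which removes the feedback loop described above at the cost of possibly over-provisioning lighter vertices.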
With this setting, the Tardis job autoscales cleanly, maintaining a small lag without bouncing back and forth.