Feature request: Periodic flushing / auto-batching #2302
Comments
I'm torn on this one because such a mechanism can easily lead to command timeouts when using the synchronous API. Also, we would introduce a lot of API that is not used by 99% of all users. I wonder, instead, whether it would make sense to introduce a buffering outbound channel handler that can be optionally installed into the netty channel pipeline and buffers the outgoing write requests. Such an approach could be activated via … It could also be a path forward to refactor our manual command flushing.
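For illustration only, here is a minimal sketch of what such a buffering outbound handler could look like as a plain netty ChannelDuplexHandler. The class name, threshold, and interval are invented for this example, and a real implementation would also need to cancel the timer in handlerRemoved() and decide how to handle flushes during connection shutdown:

```java
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;

import java.util.concurrent.TimeUnit;

// Hypothetical handler: buffers outgoing writes and flushes them when either a
// count threshold is reached or a periodic timer fires. Not part of Lettuce.
class PeriodicFlushHandler extends ChannelDuplexHandler {

    private final int flushThreshold;
    private final long flushIntervalMillis;
    private int pendingWrites; // only touched on the channel's event loop

    PeriodicFlushHandler(int flushThreshold, long flushIntervalMillis) {
        this.flushThreshold = flushThreshold;
        this.flushIntervalMillis = flushIntervalMillis;
    }

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) {
        // Periodically flush whatever has accumulated since the last flush.
        ctx.executor().scheduleAtFixedRate(() -> {
            if (pendingWrites > 0) {
                pendingWrites = 0;
                ctx.flush();
            }
        }, flushIntervalMillis, flushIntervalMillis, TimeUnit.MILLISECONDS);
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        ctx.write(msg, promise);
        if (++pendingWrites >= flushThreshold) {
            pendingWrites = 0;
            ctx.flush();
        }
    }

    @Override
    public void flush(ChannelHandlerContext ctx) {
        // Swallow intermediate flushes; the threshold or the timer decides when data goes out.
    }
}
```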
The performance would be terrible, but unless your timeouts are extremely aggressive with respect to the flush period, it should be functional with the sync API.
That sounds interesting. I'll see if I can put something together for it. I'm fresh to netty, so it may take some time.
I'm not sure what the control mechanism would be to enable or disable (or force-write) buffers.
Seems like a very corner-case scenario right now. Leaving it in the Icebox; unless we see interest from the community we might have to drop it. If anyone is willing to contribute a solution, this would increase the odds of integrating it (the team still has to decide whether it makes sense, though).
I did an evaluation of the solution I propose (not periodic flushing, but more of a group commit, so there is no added delay) against FlushConsolidationHandler, with some interesting results. See MR #2950 and the results at https://github.com/okg-cxf/lettuce-core/blob/test/auto-batch-flush/bench-c5n-2xlarge-exists-b64.bench; you can see there is still an advantage when QPS is high.
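For anyone who wants to try the FlushConsolidationHandler side of that comparison: it ships with netty, and a rough sketch of wiring it in through a NettyCustomizer might look like the following. Where exactly the handler should sit in Lettuce's pipeline, and the threshold of 256, are assumptions rather than recommendations:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.NettyCustomizer;
import io.netty.channel.Channel;
import io.netty.handler.flush.FlushConsolidationHandler;

public class FlushConsolidationExample {

    public static void main(String[] args) {
        ClientResources resources = ClientResources.builder()
                .nettyCustomizer(new NettyCustomizer() {
                    @Override
                    public void afterChannelInitialized(Channel channel) {
                        // Consolidate up to 256 flushes into a single flush/syscall.
                        channel.pipeline().addFirst(new FlushConsolidationHandler(256, true));
                    }
                })
                .build();

        RedisClient client = RedisClient.create(resources, "redis://localhost");
        // ... use the client as usual, then shut down the client and resources.
    }
}
```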
Feature Request (branch)
Hi team, have you considered periodic-flushing/auto-batching as a middle ground between auto and manual flushing?
I'm aware of the @Batch annotation for the command interface, but I don't believe there is any way to avoid command timeouts if the batch size isn't reached.
I've been looking at the case where many threads each make a small number of requests (~10-20) to Redis before flushing. In this case, there is still significant traffic to Redis in aggregate, but a connection per thread would introduce a lot of overhead, and I believe even pooling is suboptimal due to the batch size.
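For context, here is roughly what the batched command interface looks like, assuming I'm reading the dynamic command API (@BatchSize, BatchExecutor) correctly; the point is that queued commands only go out once the batch size is reached or flush() is called explicitly:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.RedisCommandFactory;
import io.lettuce.core.dynamic.batch.BatchExecutor;
import io.lettuce.core.dynamic.batch.BatchSize;

public class BatchInterfaceExample {

    // Commands are queued until 20 have accumulated or flush() is called.
    @BatchSize(20)
    public interface BatchedCommands extends Commands, BatchExecutor {
        RedisFuture<String> set(String key, String value);
        RedisFuture<String> get(String key);
    }

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost");
        StatefulRedisConnection<String, String> connection = client.connect();

        BatchedCommands commands = new RedisCommandFactory(connection).getCommands(BatchedCommands.class);

        RedisFuture<String> value = commands.get("key");
        // With fewer than 20 queued commands and no explicit flush(), the
        // futures above would eventually hit the command timeout.
        commands.flush();

        connection.close();
        client.shutdown();
    }
}
```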
In a test application with 50 threads, I see batch sizes of 0-30 when manually flushing across multiple threads, compared to 300-500 when flushing periodically at 1ms intervals.
Is your feature request related to a problem? Please describe
This is related to the previous issue I opened (#2289) regarding Redis CPU usage with auto-flush enabled.
In our case, we have a large request handling pool, but each thread only issues ~10 requests to Redis before flushing. We found that with auto-flush enabled, each command is written out to the socket individually, and our Redis node was seeing very high CPU usage, which we presume is from reading in commands.
Our current solution is to use a shared connection and manually flush in each thread, which returns CPU usage to what we expect in our node.
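For reference, a minimal sketch of the manual-flushing pattern described above, using the standard setAutoFlushCommands / flushCommands API; the host, batch size, and timeout are placeholders:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

import io.lettuce.core.LettuceFutures;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;

public class ManualFlushExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost");
        StatefulRedisConnection<String, String> connection = client.connect();

        // Disable auto-flush: commands are queued until flushCommands() is called.
        connection.setAutoFlushCommands(false);
        RedisAsyncCommands<String, String> async = connection.async();

        List<RedisFuture<String>> futures = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            futures.add(async.get("key:" + i));
        }

        // One write + flush for the whole batch instead of one per command.
        connection.flushCommands();
        LettuceFutures.awaitAll(1, TimeUnit.SECONDS, futures.toArray(new RedisFuture[0]));

        connection.close();
        client.shutdown();
    }
}
```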
Describe the solution you'd like
Advantages:
Drawbacks:
A flush period of <=5ms (realistically 1-5ms) proved optimal and had QPS between auto and manual flushing.
Describe alternatives you've considered
Manually flushing a shared connection remains an effective option in lieu of this.
My expected use case is when the number of commands issued between flushing for an individual thread is low, but the number of threads sharing the connection is high.
Teachability, Documentation, Adoption, Migration Strategy
If you can, explain how users will be able to use this and possibly write out a version for the docs.
Maybe a screenshot or design?
A possible interface:
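Purely as an illustration of what an opt-in API could look like; none of these option names exist in Lettuce today, they are hypothetical:

```java
// Hypothetical API sketch, for discussion only: none of these builder methods exist in Lettuce today.
ClientOptions options = ClientOptions.builder()
        .autoFlushCommands(false)              // hypothetical: disable per-command flushing
        .flushInterval(Duration.ofMillis(1))   // hypothetical: flush buffered commands every 1 ms
        .flushBatchSize(256)                   // hypothetical: ...or as soon as 256 commands are queued
        .build();

RedisClient client = RedisClient.create("redis://localhost");
client.setOptions(options);
```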