Replies: 4 comments
-
As I go back and read through to see what I might be misunderstanding, this was the paragraph that originally had me remove connection pooling altogether. I assume that, since we're fully async, we're ultimately better off not using connection pooling at all... especially since it does not seem to have improved things.
-
Also, some feedback on the docs: from the docs on client configuration, there is this particular paragraph on
Specifically, what does this mean? If it doesn't reflect the actual number of threads, what does the number I'm setting actually mean, and how should I see it reflected, if not in some threads? 🤔 Clearly, I see some
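For reference, the knob in question is set on Lettuce's client resources. A minimal sketch (the value below is just illustrative, and the lazy-thread-start behaviour described in the comment is my reading of the Netty/Lettuce behaviour rather than anything confirmed by the team):

```java
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

// ioThreadPoolSize sizes the Netty event loop group Lettuce creates for NIO.
// My (hedged) reading: it acts as an upper bound; the actual threads are started
// lazily as connections get bound to event loops, which is why fewer threads can
// show up at runtime than the number configured here.
ClientResources resources = DefaultClientResources.builder()
        .ioThreadPoolSize(8)
        .build();
```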
-
Well, I might have found the answer here:
So, if I'm understanding correctly, because we're talking to a Redis cluster the connection patterns completely change, and none of the other stuff I've mentioned matters, because this is just how cluster connections work. Can anyone from the team confirm this?
-
FWIW, I'm convinced that the conclusions I've drawn in this thread are accurate, and they're backed up by what I'm actually seeing at runtime. Hopefully this helps others who might be confused, but I'm going to close the question out.
-
Setup:
- 3 node Redis cluster
- ioThreadPoolSize of 8
- computationThreadPoolSize of 32
- publishOnScheduler set to true to get all deserialization off NIO threads
- Connection pooling with a min of 3 and max of 12
Spring configuration:
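A minimal sketch of what such a configuration can look like, assuming Spring Data Redis with a pooled Lettuce connection factory; the node addresses and bean wiring below are placeholders rather than the actual application code:

```java
import java.util.List;

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;

import io.lettuce.core.ClientOptions;
import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

@Configuration
class RedisConfig {

    @Bean(destroyMethod = "shutdown")
    ClientResources clientResources() {
        // 8 NIO threads, 32 computation threads, matching the setup above
        return DefaultClientResources.builder()
                .ioThreadPoolSize(8)
                .computationThreadPoolSize(32)
                .build();
    }

    @Bean
    LettuceConnectionFactory connectionFactory(ClientResources clientResources) {
        // move pub/sub and reactive signal emission (and thus deserialization) off the NIO threads
        ClientOptions clientOptions = ClusterClientOptions.builder()
                .publishOnScheduler(true)
                .build();

        // pooled connections: min 3, max 12
        GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
        poolConfig.setMinIdle(3);
        poolConfig.setMaxTotal(12);

        LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
                .poolConfig(poolConfig)
                .clientResources(clientResources)
                .clientOptions(clientOptions)
                .build();

        // placeholder addresses for the 3-node cluster
        RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration(
                List.of("redis-node-1:6379", "redis-node-2:6379", "redis-node-3:6379"));

        return new LettuceConnectionFactory(clusterConfig, clientConfig);
    }
}
```

The ClientResources bean is shared and shut down with the application context so the I/O and computation thread pools are created once rather than per connection factory.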
I have confirmed that I see this logic running and can confirm, through the debugger, that it is telling the client it should use a thread pool of 8 NioEventLoop instances, and this seems to bear out. I can also see that it has clearly adopted my connection pooling policy of 3 min, 12 max, etc.

Despite the configuration above, I only ever see 3 Lettuce NIO threads being spun up and used under our load tests. This is a 1 minute slice under heavy load, and here's a detailed breakdown of what the threads are doing during that time:
We are hitting a bottleneck pretty quickly where the command queues are backing up because the NIO threads are maxed out just trying to keep up with the volume of commands. Meanwhile, our Redis nodes are only at about 15% and the rest of the pod's resources are fine.
The documentation is not very clear on this, TBH, but my understanding at the moment is that, to achieve max throughput, we want a combination of both more connections and more NIO threads. This is because a connection can only use one NIO thread at a time, but an NIO thread can serve multiple connections over time, so it need not be 1:1. I've, of course, consulted our AI overlords, but they only seem to confirm that my understanding is accurate. That said, there are clearly 3 Redis nodes and I'm only ever ending up with 3 NIO threads in use, so perhaps I'm missing something about the math or just not interpreting the docs correctly.

Finally, to head off any questions: there was a point where some of the code was using executeInSession, which I understand would force some sort of sequential, connection-bound execution, but all of the code at this point is read-only commands executed in parallel at all times using proper reactive patterns. If it weren't for the limitation and subsequent saturation of the 3 NIO threads, everything else seems perfect.
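For context on that last point, the reads are along these lines; a simplified sketch, assuming a ReactiveStringRedisTemplate (the method name and key source are placeholders, not the real code):

```java
import java.util.List;

import org.springframework.data.redis.core.ReactiveStringRedisTemplate;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

class ReadSideExample {

    private final ReactiveStringRedisTemplate redisTemplate;

    ReadSideExample(ReactiveStringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // read-only GETs issued concurrently via flatMap, with no session/transaction
    // binding them to sequential execution
    Mono<List<String>> fetchAll(List<String> keys) {
        return Flux.fromIterable(keys)
                .flatMap(key -> redisTemplate.opsForValue().get(key))
                .collectList();
    }
}
```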
Any advice welcome, and if I can provide any more detail, please lmk!