Flushing db leads to setting huge numbers of _lock keys in distributed architecture even when required key is present #222
Unanswered
snjtkh asked this question in Usage Questions
Replies: 1 comment · 1 reply
-
cc @jvanasco
-
We have around 30 consumers processing GCP Pub/Sub data and 4-5 web apps, all talking to Redis. Everything works fine normally, but whenever load spikes on the consumers and a key happens to expire in that window, then even after the missed key (which every consumer uses) has been repopulated, the cache keeps setting a large number of `_lock` keys. Each one remains until its TTL expires, and as soon as one expires another is set for the same key, even though the required key is present. We have set `expiration_time=-1` for the region.
Setting lock keys in bulk is also spiking Redis memory. Isn't this a kind of thundering herd? Is it a bug, or am I doing something wrong? Please help me understand this behaviour.
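For reference, the expected dogpile-style behaviour is that a per-key lock is only acquired on a cache miss, so a fresh key should produce no lock traffic at all. Below is a minimal in-process sketch of that pattern, assuming plain `threading` locks as a stand-in for the Redis-backed distributed lock (`get_or_create`, `expensive_creator`, and the dict-based `cache` are all hypothetical names for illustration, not dogpile.cache's real API):

```python
import threading
import time

# In-memory stand-ins for the Redis value keys and the per-key _lock keys.
cache = {}
locks = {}
locks_guard = threading.Lock()
regen_count = 0  # how many times the creator actually ran

def get_or_create(key, creator):
    """Dogpile-style read: a present value is returned without touching
    any lock; only a miss acquires the per-key lock, and only one
    thread runs the (expensive) creator."""
    global regen_count
    if key in cache:                     # fresh value: no lock traffic
        return cache[key]
    with locks_guard:                    # find or create the per-key lock
        lock = locks.setdefault(key, threading.Lock())
    with lock:
        if key in cache:                 # double-check: another thread may
            return cache[key]            # have regenerated while we waited
        value = creator()
        regen_count += 1
        cache[key] = value
        return value

def expensive_creator():
    time.sleep(0.05)                     # simulate slow regeneration
    return "value"

# 30 concurrent misses on the same key, mimicking the 30 consumers.
threads = [threading.Thread(target=get_or_create,
                            args=("k", expensive_creator))
           for _ in range(30)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(regen_count)  # -> 1: a single regeneration despite 30 concurrent misses
```

Under this model, repeated `_lock` keys while the value key is present would be unexpected, which is why the report reads like either a misconfiguration (e.g. of the distributed lock or its timeout) or a bug.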