Deadlock/Lock when executing too many concurrent similar jobs #3
Comments
Can you make an estimate of how many of these jobs run concurrently? Since it's using Redis pub/sub and it's just waiting on messages, maybe I can rework it to use a single subscriber connection, which should stop it from exhausting the connection pool. In the meantime, can you increase the connection pool?
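(For reference, if the pool being exhausted is the one Sidekiq itself manages, raising its size is usually done in the Sidekiq initializer. A minimal sketch, assuming the standard `Sidekiq.configure_server` / `Sidekiq.configure_client` API and a `REDIS_URL` environment variable; the right numbers depend on your concurrency plus however many promise waits you expect to be outstanding at once.)

```ruby
# config/initializers/sidekiq.rb -- sketch only
Sidekiq.configure_server do |config|
  # The pool must cover the worker threads plus any extra connections
  # (e.g. promise subscribers) checked out at the same time.
  config.redis = { url: ENV['REDIS_URL'], size: 30 }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV['REDIS_URL'], size: 10 }
end
```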
I think there are two problems with the approach of waiting for subscribers. The first one is the connection pool, which is easily fixed with a refactor to use a single connection as you explained, but there is also another issue.

Just for the sake of context, my application is consuming the Twitter stream, so it has to handle a huge, effectively infinite volume of messages. The main problem lies in the inability to process a stream (a continuous source) whose volume is higher than what Sidekiq is able to consume. In this case, with the first code I posted:

```ruby
ConvertMyMessageWorker.as_promise(message['id']).then do
  PersistToDBWorker.perform_async(message['id'])
end
```

we may end up with all the workers in Sidekiq working on `ConvertMyMessageWorker`. Am I correct in my assumption?

Again, sorry for bothering you. I'm just very interested in using this gem, which makes background processing so clean and adds a lot of possibilities when using Sidekiq.
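(A rough illustration of the failure mode being described, not the gem's actual code: if each `.as_promise` wait checks a connection out of a shared pool and blocks on `SUBSCRIBE` until the job publishes its result, then n simultaneous waits pin n connections, and the pool is empty as soon as n reaches its size. The pool size, the `wait_for_result` helper, and the `promise:<jid>` channel name below are made up.)

```ruby
require 'redis'
require 'connection_pool'

REDIS_POOL = ConnectionPool.new(size: 5) { Redis.new }

# Hypothetical per-promise wait: the pooled connection is held for the whole
# duration of the blocking SUBSCRIBE, so five concurrent waits are enough to
# drain a pool of size 5, and every later checkout times out.
def wait_for_result(jid)
  result = nil
  REDIS_POOL.with do |redis|
    redis.subscribe("promise:#{jid}") do |on|
      on.message do |_channel, payload|
        result = payload
        redis.unsubscribe # exits the blocking subscribe loop
      end
    end
  end
  result
end
```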
I'm not so sure that this is the case. Since …
I haven't forgotten about this - I've been trying to come up with a nice way to share Redis connections, but haven't had a lot of luck due to Celluloid being interesting. In the meantime I have a simplest-possible fix that I can do. Standby for action.
Okay, so I was forced to rework this, and it should be fixed as of 5151445 - can you test, if you're still so inclined?
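(For anyone following along, here is a minimal sketch of the single-subscriber idea discussed above. It is illustrative only and not necessarily how 5151445 implements it; `PromiseDispatcher` and the `promise:*` channel pattern are made-up names. The point is that one dedicated connection does a single blocking `PSUBSCRIBE` and fans messages out to in-process queues, so individual waits no longer hold Redis connections of their own.)

```ruby
require 'redis'
require 'timeout'

# Hypothetical single-subscriber dispatcher: one dedicated Redis connection
# listens for every job-completion message and hands each payload to the
# waiter registered for that job id.
class PromiseDispatcher
  def initialize(redis = Redis.new)
    @redis   = redis
    @waiters = {}
    @mutex   = Mutex.new
    start_listener
  end

  # Block the caller until a result for +jid+ arrives or the timeout expires.
  def wait(jid, timeout: 30)
    queue = Queue.new
    @mutex.synchronize { @waiters[jid] = queue }
    Timeout.timeout(timeout) { queue.pop }
  ensure
    @mutex.synchronize { @waiters.delete(jid) }
  end

  private

  def start_listener
    Thread.new do
      # Single blocking PSUBSCRIBE on one dedicated connection.
      @redis.psubscribe('promise:*') do |on|
        on.pmessage do |_pattern, channel, payload|
          jid   = channel.split(':').last
          queue = @mutex.synchronize { @waiters[jid] }
          queue << payload if queue
        end
      end
    end
  end
end
```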
Sorry, I had to abandon the project due to this open issue. My project was critical and I had to move on with another solution. Still, I think this is the best approach to tackle Sidekiq job dependencies, so I'll give it a try again and let you know.
Original issue description:

My application has a high volume of messages, and my jobs have this pattern:
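(This is the same `as_promise` / `.then` chain referred to in the comments above as "the first code I posted".)

```ruby
ConvertMyMessageWorker.as_promise(message['id']).then do
  PersistToDBWorker.perform_async(message['id'])
end
```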
So when the system starts queuing messages, it may happen that there are n enqueued `ConvertMyMessageWorker` jobs, where n is the total amount of Sidekiq concurrency. When that happens, the next jobs to be enqueued try to get a connection from the Redis pool and fail with a pool exhaustion error.

Do you have any idea for a workaround or a fix for this issue?
Regards,