Concerns about resource exhaustion #2
Comments
Thank you for your thoughts. Could you please elaborate? What do you mean by "Request Queue", and why would that be better than sleeping?
I gave it some further thought. A request queue would only work if you modified a lot of things to pull the result, after it has been queued, on a different HTTP request. I just wish there were a better, more async way... sleeping just seems like a bad idea.
Now I get you. You could basically do that with the non-blocking consumer. I'll give the queue a thought.
Actually, running this on ReactPHP, the queue would make sense: in a separate thread you run through the request queue and process the items.
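The queue idea above can be sketched as follows. This is a hypothetical illustration (the names and structure are invented, not the library's API), shown in Python for brevity: instead of each request sleeping inside `BlockingConsumer`, requests are enqueued and a single worker drains the queue at the bucket's rate, so no request handler blocks.

```python
import queue
import threading
import time

def start_worker(q: queue.Queue, interval: float) -> threading.Thread:
    """Drain one queued job per `interval` seconds (the token rate)."""
    def run():
        while True:
            job = q.get()
            if job is None:          # sentinel: stop the worker
                return
            job()                    # process the throttled request
            time.sleep(interval)     # pace according to the rate limit
    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t

results = []
q = queue.Queue()
w = start_worker(q, interval=0.01)
for i in range(3):
    q.put(lambda i=i: results.append(i))
q.put(None)                          # tell the worker to stop
w.join()
print(results)  # → [0, 1, 2]
```

In a ReactPHP deployment the worker loop would live on the event loop rather than in a thread, but the shape is the same: producers enqueue, a single paced consumer dequeues.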
I also have some concerns regarding the scalability of this implementation. At https://github.com/bandwidth-throttle/token-bucket/blob/master/classes/TokenBucket.php#L155 you lock the storage, which makes all other machines/threads wait while the PHP code in the function runs to compute the amount of tokens and store them back.
We have had issues with a similar implementation, where the load on our systems got so high that the PHP machines became slower, and this lock magnified the problem, causing a giant queue even though our storage (Redis) wasn't busy at all. Our solution was to add a function to our storage, getTokenIfAvailable (with the help of a Lua script in Redis). That way we only needed one call to the storage, which made the lock superfluous. However, our implementation required a dedicated process to fill the bucket with tokens. Combining it with this implementation of converting tokens to seconds would be the best of both worlds, I guess.
Another solution we are thinking of right now, to avoid the fill job, is "inverting" the bucket: putting tokens in our storage with a Time To Live, and letting the storage calculate how many tokens are in the bucket at a given moment, which may not exceed a given maximum. However, this might be too much for the storage engine, so I prefer the combined solution.
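The combined idea described above (the single-call getTokenIfAvailable plus time-based refill, which avoids both a held lock and a fill job) can be sketched like this. A minimal Python illustration under assumed names; in the real setup the body of `consume_if_available` would run atomically inside Redis as a Lua script, so the client makes exactly one round trip:

```python
import threading

class AtomicBucket:
    def __init__(self, capacity: float, rate: float, now: float):
        self.capacity = capacity       # maximum tokens the bucket holds
        self.rate = rate               # tokens added per second
        self.tokens = capacity
        self.updated = now
        self._lock = threading.Lock()  # stands in for the storage's atomicity

    def consume_if_available(self, n: float, now: float) -> bool:
        """Refill from elapsed time, then consume n tokens if possible.
        One atomic step: no lock is held while application code runs."""
        with self._lock:
            elapsed = now - self.updated
            self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
            self.updated = now
            if self.tokens >= n:
                self.tokens -= n
                return True
            return False

b = AtomicBucket(capacity=2, rate=1, now=0.0)
print(b.consume_if_available(1, now=0.0))  # → True
print(b.consume_if_available(2, now=0.0))  # → False (only 1 token left)
print(b.consume_if_available(2, now=1.0))  # → True (1 second refilled it to 2)
```

Because refill is derived from elapsed time, no dedicated fill process is needed, and because check-and-consume happens in one atomic step, no cross-machine lock is needed either.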
If you run into lock contention, CAS might help; I've already made a start there. Also, I need to understand the issue better. Is it that you have many concurrent requests on the same bucket, all of which would be eligible for consumption, but are effectively throttled by lock contention?
That was fixed with #12. |
It is rather trivial to exhaust an API's resources using the BlockingConsumer: https://github.com/bandwidth-throttle/token-bucket/blob/master/classes/BlockingConsumer.php
You might consider using a Request Queue with Memcache or Redis if you wanted to implement a better version of BlockingConsumer.
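The exhaustion concern comes from the fact that a blocking consumer ties up one worker for every waiting request. A non-blocking alternative can instead compute how long the caller should wait and report it (for example as an HTTP 429 with a Retry-After header), freeing the worker immediately. A minimal sketch of that calculation, with invented names:

```python
def seconds_until_available(tokens: float, needed: float, rate: float) -> float:
    """How long until `needed` tokens exist, at `rate` tokens per second.
    Returns 0.0 when the request could already be served."""
    deficit = needed - tokens
    return max(0.0, deficit / rate)

# With 0.5 tokens in the bucket, 1 needed, refilling at 2 tokens/second:
print(seconds_until_available(0.5, 1.0, 2.0))  # → 0.25
```

Returning this value to the client shifts the waiting out of the server process entirely, which is the same goal the request-queue suggestion pursues.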