
Concerns about resource exhaustion #2

Open
geggleto opened this issue Aug 19, 2015 · 7 comments

@geggleto

It is rather trivial to exhaust an API's resources using https://github.com/bandwidth-throttle/token-bucket/blob/master/classes/BlockingConsumer.php: every throttled request sleeps, tying up a worker while it waits.

You might consider using a request queue backed by Memcache or Redis if you wanted to implement a better version of BlockingConsumer.

@malkusch (Contributor)

Thank you for your thoughts. Could you please elaborate? What do you mean by "request queue", and why would that be better than sleeping?

@geggleto (Author)

I gave it some further thought. A request queue would only work if you modified a lot of things to pull the result on a different HTTP request after it has been queued.

I just wish there were a better, more async way... sleeping just seems like a bad idea.

@malkusch (Contributor)

A request queue would only work if you modified a lot of things to pull the result on a different HTTP request after it has been queued.

Now I get you. You could basically do that with the (non-blocking) TokenBucket alone and just return a 429. The only issue here is that, since there's no queue, there's a chance of starvation.
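A minimal sketch of that idea, in Python for illustration (the class and handler names are hypothetical, not the bandwidth-throttle/token-bucket API): the consumer never sleeps, and when no token is available the handler answers 429 immediately.

```python
import time

class TokenBucket:
    """Toy in-process token bucket; names are hypothetical."""

    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def consume(self, n: int = 1) -> bool:
        """Non-blocking: take n tokens if available, else return False."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_second)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

def handle_request(bucket: TokenBucket) -> int:
    # Return an HTTP status instead of sleeping the worker.
    return 200 if bucket.consume() else 429
```

As noted above, whichever request happens to find a token wins; without a queue there is no fairness, so a persistent client can starve the others.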

I'll give the queue a thought.

@geggleto (Author)

Actually, running this on ReactPHP, the queue would make sense:
instead of sleeping, you add the request to a queue.

In a separate thread you run through the request queue and process the entries.
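The queue idea above could be sketched like this (Python for illustration; all names are hypothetical and the actual rate limiting is elided): handlers enqueue work instead of sleeping, and a single worker thread drains the queue in order, which also avoids starvation.

```python
import queue
import threading

# Shared request queue: each entry is (payload, reply channel).
requests: "queue.Queue[tuple]" = queue.Queue()

def enqueue(payload: str) -> "queue.Queue":
    """Called by a request handler: queue the work and return
    a one-slot reply channel to wait on (or poll later)."""
    reply: "queue.Queue[str]" = queue.Queue(maxsize=1)
    requests.put((payload, reply))
    return reply

def worker() -> None:
    """Separate thread: drain the queue in FIFO order."""
    while True:
        payload, reply = requests.get()
        if payload is None:  # sentinel to stop the worker
            break
        # A real worker would consume a token here before processing.
        reply.put(f"processed {payload}")

t = threading.Thread(target=worker, daemon=True)
t.start()
```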

@lvanderree

I also have some concerns regarding the scalability of this implementation.

At https://github.com/bandwidth-throttle/token-bucket/blob/master/classes/TokenBucket.php#L155 you lock the storage, which makes all other machines/threads wait while the PHP code in the function runs to read the token count and store the tokens back.

We have had issues with a similar implementation: the load on our systems got so high that the PHP machines became slower, and this lock magnified the problem, causing a giant queue even though our storage (Redis) wasn't busy at all.

Our solution was to add a getTokenIfAvailable function to our storage (with the help of a Lua script in Redis). That way we only need a single call to the storage, which makes the lock superfluous.
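A hedged sketch of that design, in Python for illustration: the storage itself exposes a single atomic check-and-decrement, so the caller needs no external lock. In Redis the atomicity would come from a small Lua script; here a threading.Lock stands in for it, and the method name is taken from the comment above.

```python
import threading

class TokenStorage:
    """Storage whose check-and-decrement is one atomic operation
    (a stand-in for a Redis Lua script; names are hypothetical)."""

    def __init__(self, tokens: int):
        self._tokens = tokens
        self._lock = threading.Lock()  # models the script's atomicity

    def get_token_if_available(self) -> bool:
        """One round trip: decrement only if a token is present."""
        with self._lock:
            if self._tokens > 0:
                self._tokens -= 1
                return True
            return False
```

Because the read-modify-write happens inside the storage, callers never hold a lock across a network round trip, which is what caused the giant queue described above.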

However, our implementation required a dedicated process to fill the bucket with tokens. Combining it with this library's approach of converting tokens to seconds would be the best of both worlds, I guess.

Another solution we are thinking of right now, to avoid the fill job, is "inverting" the bucket: putting tokens into our storage with a Time To Live, and letting the storage calculate how many tokens are in the bucket at a given moment, which may not exceed a given maximum. However, this might be too much for the storage engine, so I prefer the combined solution.
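The "inverted" bucket above could be sketched as follows (Python for illustration; a plain in-process list stands in for keys with TTLs in Redis, and all names are hypothetical): each consumed token is stored with a time-to-live, the fill level is the number of unexpired tokens, and consumption is refused at the maximum.

```python
import time

class InvertedBucket:
    """Inverted bucket: store consumed tokens with a TTL instead of
    refilling. Hypothetical names; not the library's API."""

    def __init__(self, maximum: int, ttl_seconds: float):
        self.maximum = maximum
        self.ttl = ttl_seconds
        self._stamps: list = []  # timestamps of consumed tokens

    def consume(self) -> bool:
        now = time.monotonic()
        # Expired entries no longer count toward the fill level,
        # which is how the bucket "refills" without a fill job.
        self._stamps = [t for t in self._stamps if now - t < self.ttl]
        if len(self._stamps) >= self.maximum:
            return False
        self._stamps.append(now)
        return True
```

The concern raised above is visible here: every consume has to count the live tokens, which in a real storage engine means per-request expiry bookkeeping.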

@malkusch (Contributor)

malkusch commented Sep 28, 2016

If you run into lock contention, CAS might help. I've already made a CASMutex, but the token bucket is not designed for it. I might consider an integration in the future. But then again, you would exchange locking for CPU burning; the contention point is still there.
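For illustration, a toy compare-and-swap retry loop in Python (hypothetical names, not the actual CASMutex): contended consumers re-read the count and retry instead of waiting on a lock, which is exactly the CPU-burning trade-off mentioned above.

```python
import threading

class CASStorage:
    """Toy storage exposing a compare-and-swap primitive; the internal
    lock only models the storage's atomic CAS, callers never hold it."""

    def __init__(self, tokens: int):
        self._tokens = tokens
        self._guard = threading.Lock()

    def load(self) -> int:
        return self._tokens

    def compare_and_swap(self, expected: int, new: int) -> bool:
        """Write `new` only if the value is still `expected`."""
        with self._guard:
            if self._tokens == expected:
                self._tokens = new
                return True
            return False

def consume(storage: CASStorage) -> bool:
    # Optimistic retry loop: losers of a race re-read and try again,
    # spinning (burning CPU) rather than blocking on a lock.
    while True:
        current = storage.load()
        if current <= 0:
            return False
        if storage.compare_and_swap(current, current - 1):
            return True
```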

Also, I need to understand the issue better. Is it that you have many concurrent requests on the same bucket, all of which would be eligible for consumption, but they are effectively throttled by lock contention?

@malkusch (Contributor)

If you run into lock contention, CAS might help. I've already made a CASMutex, but the token bucket is not designed for it.

That was fixed with #12.
