feat: rate_limiting
token bucket sub module
#14
Conversation
crates/rate_limit/src/lib.rs (outdated)
}

#[tokio::test]
async fn test_token_bucket_many() {
This test has significantly less coverage than the one from Notify Server. Can we copy those cases directly so we can keep the level of coverage?
Missing:
- refill_in calculation
- checking that tokens recover 1 at a time rather than all at once
- test for separate keys
Why did you remove the ability to mock the clock? That helps ensure that tests aren't flaky as well as I think making the refill_in test case possible.
Makes sense to update the test coverage with the missing cases, thanks for pointing this out!
Why did you remove the ability to mock the clock? That helps ensure that tests aren't flaky as well as I think making the refill_in test case possible.
Maybe it's better to pass the timestamp into the function instead of hacking the clock for the test (that looks a bit hacky to me): we can just pass Utc::now().timestamp_millis() to the function instead of reading it inside.
I have updated the tests and the function in 729ec96.
The test coverage now has:
- refill_in calculation assertions,
- checking that tokens recover 1 at a time,
- test for multiple keys running in parallel.
Are you planning to replace the Notify Server implementation with this standard one? If so, the mocked clock was necessary to be able to test that rate limits are applied to regular endpoints.
It's common practice to mock clocks, I don't see what's "hacky" about that.
I want to apply this unified implementation to the Blockchain API and Echo Server first, and then probably to the Notify Server (not on the table yet).
@@ -6,39 +6,24 @@ local refillRate = tonumber(ARGV[3]) -- how many tokens are refilled after each
 local now = tonumber(ARGV[4]) -- current timestamp in milliseconds

 local results = {}

-for i, key in ipairs(keys) do
+for i, key in ipairs(KEYS) do
`local`s and I don't think one-time compilation or hashing speed really has an effect.
Reverted it in b9ecde9, since the minified version of the Lua script is hard to debug and it wasn't a huge optimization anyway. Let's stick to the initial one and move forward with it.
This reverts commit 85530b0.
/// Rate limit check using a token bucket algorithm for one key and in-memory
/// cache for rate-limited keys. `mem_cache` TTL must be set to the same value
/// as the refill interval.
pub async fn token_bucket(
"`mem_cache` TTL must be set to the same value as the refill interval."

What does this mean? Does `Cache` support different keys with different refill intervals, e.g. for what Notify Server needs?
The TTL applies to all records. If different cases require different refill rates, this can be achieved with a single (lowest) refill interval and a per-key refill rate tuned to that interval, without adding an additional TTL per key.
Description
The rate-limiting token bucket mechanism (based on #231) uses Redis and the Moka in-memory cache.
The initial implementation was updated to use the local in-app Moka cache and skip Redis RTT calls for single-key checks: when a key is rate-limited, we already know the TTL until its token will be refilled, so we don't need to call Redis until that TTL expires and can answer from the local Moka cache instead. This should significantly reduce Redis RTT calls in the case of DoS/flood. The initial Lua script was also slightly optimized and minified.
How Has This Been Tested?
Due Diligence