This repository has been archived by the owner on Apr 19, 2024. It is now read-only.
## Purpose

To change how `GLOBAL` behavior operates. Previously, the owner of the rate limit would broadcast its computed result to the other peers, and that computed result was returned to clients who submitted hits to a non-owning peer. However, after some great feedback on #218 and #216, it has become clear that we should instead allow the local peers to compute the result of the request based on the hits broadcast by the owning peer.

In the new behavior, a peer computes the result of a rate limit request and immediately returns that computed result, using the `Remaining` value currently in its cache. To put it another way, the peer cache will no longer hold the computed result from the owner.

To facilitate this change, I've added many more tests around global functionality, which should help ensure we don't break this behavior going forward.
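The local-computation idea above can be sketched as follows. This is a simplified illustration, not gubernator's actual code: the type and function names (`checkLocally`, `CacheItem`, and the string statuses) are invented for this example, and the real implementation handles many more concerns (algorithms, expiry, concurrency).

```go
package main

import "fmt"

// Hypothetical, simplified types; the real gubernator structs differ.
type RateLimitReq struct {
	Hits  int64
	Limit int64
}

type CacheItem struct {
	Remaining int64
}

type RateLimitResp struct {
	Status    string // "UNDER_LIMIT" or "OVER_LIMIT"
	Remaining int64
}

// checkLocally sketches the new behavior: a non-owning peer answers
// immediately from the Remaining value in its own cache instead of
// returning the owner's precomputed result.
func checkLocally(item *CacheItem, req *RateLimitReq) *RateLimitResp {
	if req.Hits > item.Remaining {
		return &RateLimitResp{Status: "OVER_LIMIT", Remaining: item.Remaining}
	}
	item.Remaining -= req.Hits
	return &RateLimitResp{Status: "UNDER_LIMIT", Remaining: item.Remaining}
}

func main() {
	item := &CacheItem{Remaining: 10}
	resp := checkLocally(item, &RateLimitReq{Hits: 3, Limit: 10})
	fmt.Println(resp.Status, resp.Remaining)
}
```

The cached `Remaining` is then reconciled asynchronously when the owner broadcasts the hits it has accepted, so a peer's answer may briefly lag the cluster-wide truth.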
## Implementation

- Added `DRAIN_OVER_LIMIT` behavior when accepting peer rate limits. This allows many peers to send a burst of hits to the owner while still updating the remaining count, even if the requested hits are more than what is available. See Fix global mode #218 (comment).
- Added `cluster.ListNonOwningDaemons()` to assist in global testing.
- Added `cluster.FindOwningDaemon()` to assist in global testing.
- Added `waitForBroadcast()` to the functional testing suite tools so we can remove the sleep statements.
- Added `newStaticBuilder()` to daemon.go.
- Added `MustClient()` and `Client()` to `gubernator.Daemon` to make it easier to find and send RPC requests to a daemon instance.
- `RESET_REMAINING` now works correctly with global behavior.
- Added `Daemon.InstanceID` and added `InstanceID` to the `Instance`.
- `TestLeakyBucketGregorian` should no longer occasionally fail the test suite.

This PR replaces #218 and #216 but keeps the commits from those PRs.
Thank you to these wonderful people for giving great feedback and working on this!