New ways to prevent starvation of throttled calls #32
The desired throttling rate is NOT necessarily related to the "resolution" rate - there may well be back-end operations that occur AFTER the request is resolved, which STILL require the requests to be throttled (Firebase Firestore document writes are a great example of this). The consequence of this is that you'd need knowledge OTHER THAN the fact that the request resolved - which is exactly what the throttling rate is assumed to be. The "full" system is NOT idle - the code using p-throttle might be, but you have no way to know the other end is. This is an attempt at solving a non-problem.
Thanks @LeadDreamer, appreciate your response! My take is that I'm not trying to control the whole system (throttled client and rate-limited server), just to allow more flexibility on the client side to deal with real-life issues. Totally open to discussing the viability of the problem itself - the best solutions involve 0 lines of code :)

I'll give more details as to the issue I'm facing. Let's say I'm developing a service that provides temperature data based on geo-location, and I have access to two public services to fulfill these requests. The first service, A, is fast to respond and doesn't have rate limiting, but its data is very rough: given a geo-location, it responds with a range of max and min temperatures based on historic data. There is no computation involved, except looking up the data in a fixed-size dataset. The second service, B, is slower to respond, and rate-limits requests to 1 per second. It constantly computes weather data, and provides very accurate temperature readings.

For every request, my service would then fire two API calls to services A and B. If B responds quickly enough, its response is sent to my service's client. If it takes longer, my service responds with A's reply. Once B's reply comes in, it is cached locally for some time, to be used next time. I'll use

Now if my service gains popularity, and starts seeing 2 requests per second, the implementation above causes starvation. As time goes by, my clients are less and less likely to benefit from service B's accuracy. There are simply too many pending requests in the throttling mechanism, which becomes a queue in my service with no observability or control (under my naive implementation).

Thanks for reading this far, any thoughts on this? Perhaps I'm barking up the wrong tree even, and
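The fallback scheme described in this comment could be sketched roughly like this. Everything here is a hypothetical stand-in (the service stubs, the cache, and the `getTemperature` helper are invented for illustration, and the real version would throttle the calls to B):

```javascript
// Race an accurate but rate-limited service B against a rough but fast
// service A; answer with B's reply if it arrives within the deadline,
// otherwise fall back to A. All names below are illustrative stubs.
const cache = new Map();

const fetchRough = async (location) => ({ min: 10, max: 25, source: 'A' });
const fetchAccurate = async (location) => ({ temp: 18.4, source: 'B' });

async function getTemperature(location, deadlineMs = 200) {
  if (cache.has(location)) return cache.get(location);

  const accurate = fetchAccurate(location).then((reply) => {
    cache.set(location, reply); // keep B's reply for later requests
    return reply;
  });

  // A deadline that resolves to null, signalling "B was too slow".
  const deadline = new Promise((resolve) =>
    setTimeout(() => resolve(null), deadlineMs)
  );

  const winner = await Promise.race([accurate, deadline]);
  return winner ?? fetchRough(location);
}

getTemperature('52.52,13.40').then((reply) => console.log(reply.source)); // logs "B"
```

Note that even when A's reply wins the race, the pending call to B still completes and populates the cache, which is exactly the part that piles up once requests arrive faster than B's rate limit allows.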
p-throttle is definitely not the right solution to what you describe. Your issue is trying to "scale" using a service that is SPECIFICALLY rate limited to 1 per second - which is CLEARLY trying to tell you that it is NOT a scalable solution. I'm guessing it's a "free" tier that is so limited? Either way, if your intent is to scale then you need to find or pay for a scalable source.
Thanks. I'd love to get additional opinions here, will leave the issue open for now. |
Continuing #2, I am concerned with the simple scenario where the rate at which executions are being made exceeds the rate at which they are resolved due to the throttling config. This time, the concern is not around memory exhaustion, but around the ever-growing delay in executing the next call being made.
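The "ever-growing delay" can be made concrete with a back-of-the-envelope sketch (the function and the rates below are illustrative, not part of p-throttle):

```javascript
// If calls arrive at `arrivalRate` per second but the throttle only
// releases `throttleRate` per second, the queue grows without bound and
// the delay seen by the n-th call grows linearly with n.
function queuedDelaySeconds(n, arrivalRate, throttleRate) {
  const arrival = n / arrivalRate; // when the n-th call is made
  const start = n / throttleRate;  // earliest it can run, behind n predecessors
  return Math.max(0, start - arrival);
}

// 2 calls/s arriving, 1 call/s allowed: the 120th call (one minute in)
// already waits a full extra minute before executing.
console.log(queuedDelaySeconds(120, 2, 1)); // → 60
```

So with any sustained arrival rate above the throttle rate, a late-arriving call is delayed far beyond the point where its result is still useful, which is the starvation described above.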
Using the `abort` function is a way to deal with this situation, with two drawbacks as I see them; among them, aborted executions reject with `pThrottle.AbortError`. I'd love to have an option to have aborted executions resolve with a value that depends on the execution arguments.

As a solution, I'm basically thinking of adding a way to peek into the throttledFn object, and extending the `abort` function with the following `options` argument:

Would appreciate feedback on relevance, direction, and anything else.
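The proposed `options` snippet did not survive extraction, so here is one hypothetical reading of the idea as a toy throttler - NOT p-throttle's actual API - where the throttled function exposes its pending queue and `abort` can resolve pending calls from their arguments instead of rejecting them:

```javascript
// Toy throttler illustrating the proposal: `pendingCount` is the "peek",
// and `abort({resolveWith})` resolves queued calls from their arguments.
function createThrottled(fn, intervalMs) {
  const pending = [];
  let timer = null;

  const runNext = () => {
    const next = pending.shift();
    if (!next) {
      clearInterval(timer); // queue drained, stop ticking
      timer = null;
      return;
    }
    Promise.resolve(fn(...next.args)).then(next.resolve, next.reject);
  };

  const throttled = (...args) =>
    new Promise((resolve, reject) => {
      pending.push({ args, resolve, reject });
      if (timer === null) {
        runNext(); // first call in an idle window runs immediately
        timer = setInterval(runNext, intervalMs);
      }
    });

  // Peek into the throttled function's queue (the "observability" part).
  throttled.pendingCount = () => pending.length;

  // Hypothetical `options` argument: `resolveWith` maps a pending call's
  // arguments to a resolution value instead of rejecting the call.
  throttled.abort = ({ resolveWith } = {}) => {
    for (const { args, resolve, reject } of pending.splice(0)) {
      if (resolveWith) {
        resolve(resolveWith(...args));
      } else {
        // Real p-throttle rejects with `pThrottle.AbortError`; a plain
        // Error stands in for it in this sketch.
        reject(new Error('aborted'));
      }
    }
  };

  return throttled;
}

const double = createThrottled((x) => x * 2, 1000);
double(1).then(console.log);              // runs immediately → 2
const queued = double(2);                 // waits for the next slot
double.abort({ resolveWith: (x) => -x }); // queued call resolves instead
queued.then(console.log);                 // → -2
```

In the temperature-service scenario above, `resolveWith` would let a starved call to service B resolve with service A's rough answer for the same geo-location, rather than rejecting and forcing the caller to catch an abort error.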