
New ways to prevent starvation of throttled calls #32

Open
adrukh opened this issue Aug 14, 2021 · 4 comments

@adrukh

adrukh commented Aug 14, 2021

Continuing #2, I am concerned with the simple scenario where the rate at which executions are made exceeds the rate at which they are resolved, due to the throttling config. This time the concern is not memory exhaustion, but the ever-growing delay before each newly made call gets executed.

Using the abort function is a way to deal with this situation, with two drawbacks as I see them:

  • it is hard to employ this 'flushing' with precision. For example, periodically checking the throttler for the number of pending executions, 'flushing' only some of them, or choosing whether to 'flush' in FIFO or LIFO order.
  • when pending executions are aborted, they reject with a pThrottle.AbortError. I'd love to have an option to have aborted executions resolve with a value that depends on the execution arguments.

As a solution, I'm basically thinking of adding a way to peek into the throttledFn object, and extending the abort function with the following options argument:

throttledFn.countPending(): Number

throttledFn.abort({
  count: Number, // default undefined, meaning abort all pending executions
  applyToLastInvocations: Boolean, // default false, meaning apply to the earliest invocations
  resolveWithValue: Function // a non-async function that receives the `Fn` arguments of each execution; the aborted execution resolves with `resolveWithValue(...args)`. Default undefined, meaning reject with `pThrottle.AbortError`
})
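To make the proposal concrete, here is a hypothetical sketch of that API built on a minimal interval-based queue. The names (countPending, count, applyToLastInvocations, resolveWithValue) mirror the proposal above; none of this is p-throttle's actual implementation, and the plain Error stands in for pThrottle.AbortError.

```javascript
// Minimal interval-based throttler with the proposed inspection/abort API.
// This is an illustrative sketch, not p-throttle's real code.
function createThrottled(fn, intervalMs) {
	const queue = []; // pending executions: {args, resolve, reject}
	let timer = null;

	function drain() {
		const next = queue.shift();
		if (next === undefined) {
			clearInterval(timer); // queue drained; stop ticking
			timer = null;
			return;
		}
		next.resolve(fn(...next.args));
	}

	const throttled = (...args) =>
		new Promise((resolve, reject) => {
			queue.push({args, resolve, reject});
			if (timer === null) {
				drain(); // run the first call immediately
				timer = setInterval(drain, intervalMs);
			}
		});

	// Proposed: peek at the backlog.
	throttled.countPending = () => queue.length;

	// Proposed: abort some or all pending executions.
	throttled.abort = ({count, applyToLastInvocations = false, resolveWithValue} = {}) => {
		const n = count === undefined ? queue.length : Math.min(count, queue.length);
		// Flush from the back (latest invocations) or the front (earliest).
		const aborted = applyToLastInvocations
			? queue.splice(queue.length - n, n)
			: queue.splice(0, n);
		for (const {args, resolve, reject} of aborted) {
			if (resolveWithValue === undefined) {
				reject(new Error('AbortError')); // stand-in for pThrottle.AbortError
			} else {
				resolve(resolveWithValue(...args));
			}
		}
	};

	return throttled;
}
```

With something like this, a caller could keep the backlog bounded by periodically checking countPending() and flushing the oldest entries to a fallback value instead of a rejection.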

Would appreciate feedback on relevance, direction, and anything else.

@LeadDreamer

The desired throttling rate is NOT necessarily related to the "resolution" rate - there may well be back-end operations that occur AFTER the request is resolved, which STILL require the requests to be throttled (Firebase Firestore document writes are a great example of this). The consequence of this is that you'd need knowledge OTHER THAN the fact that the request resolved - which is exactly what the throttling rate is assumed to stand in for. The "full" system is NOT idle - the code using p-throttle might be, but you have no way to know the other end is.

This is an attempt at solving a non-problem.

@adrukh
Author

adrukh commented Aug 15, 2021

Thanks @LeadDreamer, appreciate your response!

My take is that I'm not trying to control the whole system (throttled client and rate-limited server), just to allow more flexibility on the client side to deal with real-life issues. Totally open to discussing the viability of the problem itself - the best solutions involve 0 lines of code :)

I'll give more details as to the issue I'm facing. Let's say I'm developing a service that provides temperature data based on geo-location, and I have access to two public services to fulfill these requests. The first service, A, is fast to respond, doesn't have rate-limiting, but its data is very rough. Given a geo-location, it responds with a range of max and min temperatures, based on historic data. There is no computation involved beyond looking up the value in a fixed-size dataset. The second service, B, is slower to respond and rate-limits requests to 1 per second. It continuously computes weather data and provides very accurate temperature readings.

For every request, my service fires two API calls, to services A and B. If B responds quickly enough, its response is sent to my service's client; if it takes longer, my service responds with A's reply. Once B's reply comes in, it is cached locally for some time, to be used for the next request. I'll use p-throttle to avoid overloading service B, limiting my request rate to at most 1 per second.
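The fallback pattern described here can be sketched with Promise.race. Everything below is a simulated stand-in: serviceA, serviceB, the cache, and all timings/values are invented for illustration; in practice the serviceB call would be wrapped with p-throttle.

```javascript
// Hypothetical stand-ins for the two services described above,
// simulated with timers (A: fast and rough; B: slow, accurate, rate-limited).
const delay = (ms, value) => new Promise(resolve => setTimeout(() => resolve(value), ms));

const serviceA = location => delay(10, {source: 'A', range: [12, 24]});
const serviceB = location => delay(200, {source: 'B', temperature: 18.4});

const cache = new Map(); // B's late replies are kept for future requests

async function getTemperature(location, budgetMs = 50) {
	if (cache.has(location)) return cache.get(location);
	const b = serviceB(location).then(reply => {
		cache.set(location, reply); // cache B's reply even if it loses the race
		return reply;
	});
	// Race B against a deadline; fall back to A if B is too slow.
	const winner = await Promise.race([b, delay(budgetMs, null)]);
	return winner !== null ? winner : serviceA(location);
}
```

Note that B's in-flight promise is never dropped: even when A's reply is served, B's answer still lands in the cache for the next request, which is exactly the behaviour described above.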

Now if my service gains popularity, and starts seeing 2 requests per second, the implementation above causes starvation. As time goes by, my clients are less and less likely to benefit from service B's accuracy. There are simply too many pending requests in the throttling mechanism, which becomes a queue in my service with no observability or control (under my naive implementation).
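The growth of that invisible queue is easy to quantify with a back-of-the-envelope model (my numbers, purely illustrative): with requests arriving faster than the throttle releases them, the backlog, and hence the wait before B is even asked, grows linearly with time.

```javascript
// Illustrative model of the starvation above: backlog size after `seconds`,
// given requests arriving at `arrivalRate`/s and a throttle releasing
// `throttleRate`/s. Defaults match the 2 req/s vs 1 req/s scenario.
function backlogAfter(seconds, arrivalRate = 2, throttleRate = 1) {
	return Math.max(0, Math.floor((arrivalRate - throttleRate) * seconds));
}
```

After just one minute at 2 req/s, a newly throttled call sits behind 60 others, i.e. a 60-second wait before its request to B even starts, so B's accurate answer arrives far too late to be useful.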

Thanks for reading this far, any thoughts on this? Perhaps I'm barking up the wrong tree entirely, and p-throttle is not the right solution to this challenge?

@LeadDreamer

p-throttle is definitely not the right solution to what you describe. Your issue is trying to "scale" using a service that is SPECIFICALLY rate limited to 1 per second - which is CLEARLY trying to tell you that it is NOT a scalable solution. I'm guessing it's a "free" tier that is so limited? Either way, if your intent is to scale then you need to find or pay for a scalable source.

@adrukh
Author

adrukh commented Aug 16, 2021

Thanks. I'd love to get additional opinions here, will leave the issue open for now.
