add handling for HTTP 429 (rate limiting) #92

Closed

wasinix wants to merge 1 commit into fuzeman:develop from wasinix:develop

Conversation

@wasinix commented Mar 13, 2021

As of October 2020, trakt.tv has started enforcing rate limits.
If the rate limit is exceeded, an HTTP 429 status is returned.
Additional information: trakt/api-help#220
Related issue: #86

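
For illustration, a minimal sketch of the behaviour this PR describes (catch a 429, sleep, retry). The names `request_with_retry`, `retry_sleep`, and `max_retries` are used here for clarity only and are not the actual trakt.py code or options:

```python
# Illustrative sketch only - not the actual trakt.py implementation.
import time
import requests

def request_with_retry(method, url, retry_sleep=5, max_retries=3, **kwargs):
    """Send a request; if the API answers with HTTP 429, wait and try again."""
    response = requests.request(method, url, **kwargs)

    for _ in range(max_retries):
        if response.status_code != 429:
            break
        time.sleep(retry_sleep)  # back off before retrying
        response = requests.request(method, url, **kwargs)

    return response
```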
@coveralls

Coverage Status

Coverage remained the same at 82.607% when pulling 6053759 on wasinix:develop into af75894 on fuzeman:develop.

4 similar comments

@razzeee (Contributor) commented Mar 13, 2021

Why would you want to retry if you exceed the rate limit? That's only going to make it worse.

@wasinix (Author) commented Mar 13, 2021

They have two kinds of limits:
GET: 1,000 calls every 5 minutes
POST, PUT, DELETE: 1 call per second

The GET limit is something you are unlikely to hit, but when syncing your history the POST limit is easy to hit. Since it is 1 call/sec, waiting a few seconds and then retrying works; at least it did for me when I had issues syncing my Kodi collection with trakt.
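
As an aside, the write limit could also be respected proactively by spacing out the calls instead of reacting to 429s. A rough sketch, assuming a fixed 1-call-per-second limit (the class and helper names here are made up):

```python
import time

class WriteThrottle:
    """Keep at least `min_interval` seconds between write requests."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._last_call = 0.0

    def wait(self):
        elapsed = time.monotonic() - self._last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last_call = time.monotonic()

# throttle = WriteThrottle()
# for item in history_items:        # hypothetical sync loop
#     throttle.wait()
#     post_history_item(item)       # hypothetical helper, one POST per item
```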

@razzeee (Contributor) commented Mar 13, 2021

Hrm, we usually create chunks of elements in the Kodi addon. Do you know how many items you were sending?

I actually think you ran into an oversight: https://github.com/trakt/script.trakt/blob/master/resources/lib/syncEpisodes.py#L299

I probably used that to test when I implemented the chunking for collections in trakt/script.trakt@2329cd0, but forgot to change it back (it should be set to 50 or more).
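
For readers unfamiliar with the addon: the chunking mentioned here amounts to splitting the payload into fixed-size batches so each sync request carries many items instead of one. A sketch only, not the script.trakt code:

```python
def chunks(items, size=50):
    """Yield successive `size`-item batches from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# for batch in chunks(episodes, 50):
#     send_to_trakt(batch)  # hypothetical helper; one POST per batch
```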

@fuzeman (Owner) commented Mar 19, 2021

I don't believe this would be the best way to handle 429 responses. While the GET limit is unlikely to be hit by most users, a rogue application that does hit this limit would amplify its requests to the Trakt API until max_retries is reached for each request.

Ideally, requests should be queued once a rate limit is hit and processed at the allowed rate for the endpoint. I'll take a look at this when I have some time available.
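
A rough illustration of that queueing idea (not an actual design for trakt.py): pending requests are drained by a worker at a fixed rate, here assumed to be one call per second.

```python
import queue
import threading
import time

class RateLimitedQueue:
    """Run submitted calls one at a time, at most one per `min_interval` seconds."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._queue = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, func, *args, **kwargs):
        self._queue.put((func, args, kwargs))

    def _worker(self):
        while True:
            func, args, kwargs = self._queue.get()
            func(*args, **kwargs)
            time.sleep(self.min_interval)  # respect the endpoint's allowed rate
```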

@fuzeman closed this Mar 19, 2021
@wasinix (Author) commented Mar 19, 2021

I agree that it is not the best possible way to deal with the request limit. But it is a solution that worked for me as a workaround, and I believe it is an improvement over not dealing with the limit at all.

With the implementation as it was, I was not able to sync the "seen" state with the script.trakt plugin (which makes use of trakt.py) from trakt.tv to a freshly set up Kodi with a larger TV-show collection. With my small change, as soon as you hit the limit and get a 429 we wait retry_sleep seconds (I changed it to 2 seconds for myself), by which time we are below the limits again, the retry succeeds, and the sync works. Because of the sleeps, the sync now more or less respects the request limit instead of hammering trakt.tv.

The implementation as it stands does not respect the limit in any way and immediately sends fresh requests after hitting the limit, which will most likely fail too.

So, yes, it may be that this should be solved in the script.trakt plugin, or that a queueing mechanism in trakt.py is the better solution. But I am confident that having the small workaround is better than nothing until a more professional solution gets implemented.

Nevertheless, I am thankful for your work and happy that I could raise awareness of the 429 issue.
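
One possible refinement of the workaround, assuming the 429 response carries a Retry-After header (worth checking against the Trakt API docs; the helper below is only a sketch): sleep for however long the server asks, falling back to a fixed delay.

```python
import time

def sleep_for_rate_limit(response, default_sleep=2):
    """If `response` is a 429, sleep for Retry-After (when numeric) or a default."""
    if response.status_code != 429:
        return False
    retry_after = response.headers.get("Retry-After")
    time.sleep(int(retry_after) if retry_after and retry_after.isdigit() else default_sleep)
    return True
```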

@wasinix deleted the develop branch March 19, 2021 07:55
@razzeee (Contributor) commented Mar 20, 2021

For future reference, I fixed the chunk size as a start: trakt/script.trakt@5f23288
