
/web-scrape consumes ALL our embedding tokens which could break core chat functionality for users #24

Open · KastanDay opened this issue Jul 26, 2023 · 0 comments
Fundamentally, we need to make this loop token-aware so it stays under roughly 200k tokens per minute, because multiple people could be uploading at once.

Really, we need a global database that keeps track of our token usage per minute so we can rate-limit ourselves and not break our own service.

```
2023-07-26 18:18:50,974:INFO - error_code=rate_limit_exceeded error_message='Rate limit reached for default-text-embedding-ada-002 in organization org-UBbqlRTzKdhq7mpz97JI0deV on tokens per min. Limit: 1000000 / min. Current: 956531 / min. Contact us through our help center at help.openai.com if you continue to have issues.' error_param=None error_type=tokens message='OpenAI API error received' stream_error=False
2023-07-26 18:18:50,976:WARNING - Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization org-UBbqlRTzKdhq7mpz97JI0deV on tokens per min. Limit: 1000000 / min. Current: 956531 / min. Contact us through our help center at help.openai.com if you continue to have issues..
```
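
A minimal sketch of what a shared, token-aware limiter could look like, assuming a Redis instance is available as the "global database" to coordinate workers. The `embedding_tokens:{window}` key name, the 200k tokens/min budget, and the `embed_batch()` callback are illustrative assumptions, not existing code in this repo.

```python
import time

import redis
import tiktoken

REDIS = redis.Redis(host="localhost", port=6379, db=0)  # assumed shared instance
TOKEN_BUDGET_PER_MIN = 200_000  # stay well under the org-wide 1,000,000 tokens/min limit
ENCODER = tiktoken.encoding_for_model("text-embedding-ada-002")


def count_tokens(texts: list[str]) -> int:
    """Count embedding tokens for a batch of texts."""
    return sum(len(ENCODER.encode(t)) for t in texts)


def reserve_tokens(n_tokens: int) -> None:
    """Block until n_tokens fit inside the current one-minute window.

    A per-minute Redis counter means every worker shares one budget
    instead of each process racing toward the org-wide limit on its own.
    """
    while True:
        window = int(time.time() // 60)
        key = f"embedding_tokens:{window}"
        used = REDIS.incrby(key, n_tokens)
        REDIS.expire(key, 120)  # stale windows clean themselves up
        if used <= TOKEN_BUDGET_PER_MIN:
            return
        # Over budget: undo our reservation and wait for the next window.
        REDIS.decrby(key, n_tokens)
        time.sleep(60 - time.time() % 60)


def embed_batches(batches: list[list[str]], embed_batch) -> None:
    """Token-aware upload loop; embed_batch is whatever function currently
    calls the embeddings API (hypothetical here)."""
    for batch in batches:
        reserve_tokens(count_tokens(batch))
        embed_batch(batch)
```

With something like this in front of the `/web-scrape` embedding loop, a burst of concurrent uploads would queue up behind the shared budget instead of tripping the OpenAI rate limit for everyone.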