This repository has been archived by the owner on Mar 1, 2023. It is now read-only.

Sometimes a return prompt from RAVEN appears to exceed the context window of GPT-3 #1

Open
Adrian-1234 opened this issue Feb 26, 2023 · 0 comments


I receive the occasional error:

GPT3 error: This model's maximum context length is 4097 tokens, however you requested 5513 tokens (4513 in your prompt; 1000 for the completion). Please reduce your prompt; or completion length.
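One way to work around this (a sketch, not part of RAVEN itself) is to cap the prompt length before the API call so that prompt tokens plus the requested completion stay under the 4097-token limit from the error. Exact counts require the model's tokenizer (e.g. tiktoken); the sketch below uses the rough ~4-characters-per-token heuristic as an approximation:

```python
# Sketch: trim a prompt so prompt + completion fit within the context window.
# The 4-chars-per-token ratio is a rough heuristic, NOT an exact count;
# for real use, measure with the model's tokenizer (e.g. tiktoken).

MAX_CONTEXT = 4097       # context limit cited in the error message
CHARS_PER_TOKEN = 4      # crude approximation

def trim_prompt(prompt: str, completion_tokens: int = 1000) -> str:
    """Keep the tail of the prompt so it fits alongside the completion budget."""
    budget_tokens = MAX_CONTEXT - completion_tokens
    budget_chars = budget_tokens * CHARS_PER_TOKEN
    # Keep the end of the prompt, which usually carries the most recent context.
    return prompt[-budget_chars:] if len(prompt) > budget_chars else prompt
```

Truncating from the front is an assumption that the latest context matters most; summarizing or dropping whole earlier turns would be alternatives.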
