feat: GPT4Generator
#5744
Conversation
In general, it looks good to me.
I am only a bit worried about the e2e test error:
openai.error.RateLimitError: Rate limit reached for default-gpt-4 ... on tokens per min. Limit: 40000 / min.
Review suggestion on test/preview/components/generators/openai/test_gpt4_generator.py (outdated, resolved)
Co-authored-by: Stefano Fiorucci <[email protected]>
@anakin87 I tried to lower the amount of tokens sent, but I can't tell if that will help or if OpenAI is just unreliable today. On my local machine it fails once every 3-4 executions; on CI it seems to fail a lot more often. The error message also makes no sense, because our rate is always well below the stated limit 😅
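For what it's worth, a common mitigation for flaky rate limits in e2e tests (not necessarily what this PR ended up doing) is to wrap the OpenAI call in a retry with exponential backoff, for example via tenacity. A minimal sketch, assuming the pre-1.0 openai SDK that raises openai.error.RateLimitError as in the CI log above:

```python
# Sketch only: retry the OpenAI call with exponential backoff when the e2e test
# hits a rate limit. Assumes the pre-1.0 openai SDK (openai.error.RateLimitError),
# matching the error seen in CI; not the fix actually applied in this PR.
import openai
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_random_exponential


@retry(
    retry=retry_if_exception_type(openai.error.RateLimitError),
    wait=wait_random_exponential(min=1, max=60),  # jittered backoff between 1 and 60 seconds
    stop=stop_after_attempt(6),                   # give up after 6 attempts
)
def chat_completion_with_backoff(**kwargs):
    return openai.ChatCompletion.create(**kwargs)


# Hypothetical usage mirroring the failing test:
# reply = chat_completion_with_backoff(
#     model="gpt-4",
#     messages=[{"role": "user", "content": "What's the capital of France?"}],
# )
```

Backoff spreads requests out in time, which can help when several CI jobs share the same API key and hit the per-minute token limit together.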
Pull Request Test Coverage Report for Build 6162263615
💛 - Coveralls
LGTM!
Related Issues
Proposed Changes:
- GPT4Generator, a small subclass of GPT35Generator that sets a different default model (see the sketch below).
- Extended GPT35Generator's e2e tests to also test the subclass.

How did you test it?
Local tests run
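For reviewers skimming the diff, the proposed change is roughly the following shape. This is an illustrative sketch, not the merged Haystack code: the stand-in GPT35Generator below and its constructor signature are assumptions.

```python
# Illustrative sketch of the idea behind this PR: GPT4Generator as a thin subclass
# of GPT35Generator that only changes the default model name. The GPT35Generator
# stand-in and its signature are assumptions, not the real Haystack class.
from typing import Any


class GPT35Generator:
    """Stand-in for the existing generator (the real one lives in haystack.preview)."""

    def __init__(self, api_key: str, model_name: str = "gpt-3.5-turbo", **kwargs: Any):
        self.api_key = api_key
        self.model_name = model_name
        self.kwargs = kwargs


class GPT4Generator(GPT35Generator):
    """Identical behaviour to GPT35Generator, but defaults to a GPT-4 model."""

    def __init__(self, api_key: str, model_name: str = "gpt-4", **kwargs: Any):
        super().__init__(api_key=api_key, model_name=model_name, **kwargs)
```

With that shape, the e2e tests written for GPT35Generator can be reused almost verbatim for the subclass, which matches the second bullet under Proposed Changes.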
Notes for the reviewer
Checklist
- The PR title uses one of the conventional commit types: fix:, feat:, build:, chore:, ci:, docs:, style:, refactor:, perf:, test:.