Adding support for Anthropic, Cohere, TogetherAI, Aleph Alpha, Huggingface Inference Endpoints, etc. #324
Conversation
Thanks a lot for the contribution! The overall idea of expanding our coverage is great, of course, but we'll have to hash out the details.
- We'll need to document how users can use other API-based services, and also how to set the API keys appropriately (see the sketch after this comment). Here are the places where we use or mention API keys: https://github.com/search?q=repo%3Aneulab%2Fprompt2model%20openai_api_key&type=code
- CI is currently failing, partly on linting and partly on tests; both will have to be fixed. I'd suggest following the contributing directions here to set up pre-commit checks, which should fix the linting issues automatically: https://github.com/neulab/prompt2model/blob/main/CONTRIBUTING.md#contribution-guide
If any of the next steps aren't clear, we'll be happy to help clarify things, and thanks again!
Tagging @saum7800, who is currently working on this as well!
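For context, a minimal sketch of what that documentation might show, assuming litellm's convention of reading provider credentials from environment variables (the variable names follow each provider's usual convention):

```python
import os

# litellm reads provider credentials from environment variables, so a user
# only needs to export the key for whichever provider they call.
os.environ["OPENAI_API_KEY"] = "sk-..."         # OpenAI
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # Anthropic
os.environ["COHERE_API_KEY"] = "..."            # Cohere
os.environ["TOGETHERAI_API_KEY"] = "..."        # TogetherAI
```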
Thank you for the PR @krrishdholakia! Excited to see a contribution from you. Feel free to join the Discord and the open-source-llms channel, where we are discussing how to make this change. A few more things to consider in addition to what @neubig mentioned:
Hey @saum7800, thanks for the feedback.
Happy to make any changes and increase coverage on our end as required.
Can confirm this works for OpenAI. Will run through testing on other models/providers now.
Tested on Anthropic; can confirm this works when you set `ANTHROPIC_API_KEY` in the .env.
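A minimal sketch of what that test looks like, assuming the key is loaded from the .env file with python-dotenv and using "claude-2" as an illustrative model name:

```python
from dotenv import load_dotenv  # pip install python-dotenv
from litellm import completion

load_dotenv()  # picks up ANTHROPIC_API_KEY from the local .env file

# "claude-2" is an example; litellm routes claude-* model names to Anthropic.
response = completion(
    model="claude-2",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response["choices"][0]["message"]["content"])
```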
Regarding errors: we currently provide support for 3 exceptions across all providers (here's our implementation logic: https://github.com/BerriAI/litellm/blob/cc4be8dd73497b02f4a9443b12c514737a657cae/litellm/utils.py#L1420).
I'll add coverage for the remaining 3 tomorrow, after reviewing the relevant HTTP status codes and docs, as well as running tests on our end.
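For callers, the practical upshot is that provider-specific failures surface as familiar OpenAI-style exception classes, so one try/except can cover every provider. A minimal sketch; the specific classes caught below are illustrative assumptions, and the linked utils.py defines which ones are actually mapped:

```python
import openai.error  # OpenAI-style exception classes (openai-python < 1.0)
from litellm import completion

try:
    response = completion(
        model="claude-2",
        messages=[{"role": "user", "content": "Hello!"}],
    )
except openai.error.AuthenticationError:
    pass  # bad or missing provider API key
except openai.error.RateLimitError:
    pass  # provider-side rate limiting
except openai.error.InvalidRequestError:
    pass  # e.g. malformed request or context window exceeded
```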
@neubig There is no change in the way API keys need to be set, unless I'm missing something. However, it would make sense to add documentation explaining how users can use different providers; I'm happy to add that too.
Yes, could you please add just a brief mention and possibly a link to the litellm docs regarding API keys? No need to list all of them on prompt2model.
This looks great, thank you for the contribution!
Just a note: a binary file was checked in that should not have been, so I reverted it in #329.
Other than that, this looks good to go.
Awesome, thank you for the contribution @krrishdholakia!
Hi @neubig,
Thanks for your comment on another PR. I came across prompt2model and would love to help out if possible using LiteLLM (https://github.com/BerriAI/litellm).
I added support for the providers listed above by replacing `ChatCompletion.create` with litellm's `completion` and `ChatCompletion.acreate` with `acompletion`. The code is pretty similar to the OpenAI class, as litellm follows the same pattern as the openai-python SDK.
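To make the swap concrete, a minimal sketch of the before/after pattern (model names are examples; the call signature and response shape match the openai-python SDK):

```python
import asyncio
from litellm import acompletion, completion

messages = [{"role": "user", "content": "Hello!"}]

# Synchronous: drop-in for openai.ChatCompletion.create
response = completion(model="gpt-3.5-turbo", messages=messages)

# Asynchronous: drop-in for openai.ChatCompletion.acreate
async def main():
    return await acompletion(model="gpt-3.5-turbo", messages=messages)

response = asyncio.run(main())
print(response["choices"][0]["message"]["content"])
```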
Would love to know if this helps.
Happy to add additional tests / update documentation if the initial PR looks good to you.