
Adding support for Anthropic, Cohere, TogetherAI, Aleph Alpha, Huggingface Inference Endpoints, etc. #324

Merged: 6 commits into neulab:main on Sep 6, 2023

Conversation

@krrishdholakia (Contributor) commented Sep 5, 2023

Hi @neubig

Thanks for your comment on another PR. I came across prompt2model and would love to help out if possible using LiteLLM (https://github.com/BerriAI/litellm).

I added support for the providers listed above by replacing ChatCompletion.create with completion and ChatCompletion.acreate with acompletion. The code is very similar to the OpenAI class, as litellm follows the same pattern as the openai-python SDK.

Would love to know if this helps.

Happy to add additional tests / update documentation, if the initial PR looks good to you.
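To make the swap concrete, here is a minimal sketch (assuming litellm's `completion` API as described in this PR; `generate` is a hypothetical helper for illustration, not prompt2model's actual code):

```python
# Sketch of the ChatCompletion.create -> litellm.completion swap.
# litellm mirrors the openai-python SDK call shape, so only the call
# target changes; the messages payload and response format stay the same.

def generate(prompt, model="gpt-3.5-turbo", completion_fn=None):
    """Return the assistant reply; completion_fn is injectable for testing."""
    if completion_fn is None:
        # Default to litellm's provider-agnostic completion call.
        from litellm import completion as completion_fn
    messages = [{"role": "user", "content": prompt}]
    response = completion_fn(model=model, messages=messages)
    return response["choices"][0]["message"]["content"]
```

The async path is analogous: `ChatCompletion.acreate` becomes `litellm.acompletion` with the same arguments.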

@neubig (Collaborator) left a comment:

Thanks a lot for the contribution! The overall idea of expanding our coverage is great of course, but we'll have to hash out the details.

  1. We'll need to document how users can use other API-based services, and how to set the API keys appropriately. Here are the places where we use or mention API keys: https://github.com/search?q=repo%3Aneulab%2Fprompt2model%20openai_api_key&type=code
  2. Currently CI is failing, partly on linting, and partly on tests. These will have to be fixed. I'd suggest following the contributing directions here to set up pre-commit checks, which should fix the linting stuff automatically: https://github.com/neulab/prompt2model/blob/main/CONTRIBUTING.md#contribution-guide

If any of the next steps aren't clear we'll be happy to help clarify things, and thanks again!

@viswavi (Collaborator) commented Sep 5, 2023

Tagging @saum7800 who is currently working on this as well!

@saum7800 (Collaborator) commented Sep 5, 2023

Thank you for the PR @krrishdholakia! Excited to see a contribution from you. Feel free to join the Discord and the open-source-llms channel, where we are discussing how to make this change. A few more things to consider in addition to what @neubig mentioned:

  1. Your current changes apply only to the ChatGPT agent, whereas we should probably have a unified interface from which generation for any specified model can happen.

  2. The instruction parser, model_retriever, and dataset_generator files also handle exceptions specific to OpenAI. I am not sure how your code changes might react to error codes from different base LLMs.
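One way those OpenAI-specific handlers could stay provider-agnostic, sketched under the assumption that all providers raise a shared family of exception types (the function and names here are hypothetical, not prompt2model code):

```python
import time

# Illustrative retry wrapper: the caller only needs to know one family of
# exception types, regardless of which provider backs the completion call.

def call_with_retry(fn, retryable_exceptions, max_attempts=3, delay=0.0):
    """Call fn(), retrying on the given exception types up to max_attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retryable_exceptions:
            if attempt == max_attempts:
                raise  # out of attempts; surface the error to the caller
            time.sleep(delay)
```

If every provider's rate-limit error surfaces as the same exception type, the except clause never needs per-provider branches.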

@krrishdholakia (Contributor, Author) commented:

Hey @saum7800,

Thanks for the feedback.

Happy to make any changes and increase coverage on our end as required.

@krrishdholakia (Contributor, Author) commented:

I can confirm this works for OpenAI. I'll run through testing on other models/providers now.

@krrishdholakia (Contributor, Author) commented:

Tested on Anthropic; I can confirm this works when you set ANTHROPIC_API_KEY in the .env file.
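For reference, litellm reads provider credentials from environment variables (e.g. ANTHROPIC_API_KEY, OPENAI_API_KEY), typically loaded from a .env file. A minimal presence check might look like this (`provider_key_present` is a hypothetical helper name):

```python
import os

# Check whether a provider credential is available in the environment.
# A .env loader (e.g. python-dotenv) would populate os.environ before this runs.

def provider_key_present(env_var: str) -> bool:
    """True if the given provider key is set and non-empty."""
    return bool(os.environ.get(env_var))
```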

@krrishdholakia (Contributor, Author) commented:

Regarding errors:

We currently provide support for three exceptions across all providers (here's our implementation logic: https://github.com/BerriAI/litellm/blob/cc4be8dd73497b02f4a9443b12c514737a657cae/litellm/utils.py#L1420):

  • InvalidRequestError
  • RateLimitError
  • ServiceUnavailableError

I'll add coverage for the remaining three tomorrow, after reviewing the relevant HTTP status codes and docs and running tests on our end:

  • APIConnectionError
  • Timeout
  • APIError

@krrishdholakia (Contributor, Author) commented:

@neubig There is no change in the way API keys need to be set, unless I'm missing something?

However, it would make sense to add documentation explaining how users can use different providers. I'm happy to add that too.

@neubig (Collaborator) commented Sep 6, 2023

Yes, could you please add just a brief mention and possibly a link to the litellm docs regarding API keys? No need to list all of them on prompt2model.

@krrishdholakia (Contributor, Author) commented:

Hi @saum7800 @neubig,

Support for all OpenAI exceptions is now added (if there's no exact match, we default to the OpenAI APIError type).

I've also updated the readme with a link to the list of supported providers.

Please let me know if there's anything else required for this PR.
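The mapping described in this thread could be sketched as follows. This is a simplified stand-in for illustration only, not litellm's actual implementation (see the utils.py link above for the real logic); the status-code pairings shown are the standard HTTP meanings:

```python
# Illustrative mapping from provider HTTP status codes to OpenAI-style
# exception names, falling back to APIError when nothing matches exactly.

STATUS_TO_EXCEPTION = {
    400: "InvalidRequestError",      # malformed / invalid request
    429: "RateLimitError",           # too many requests
    503: "ServiceUnavailableError",  # provider temporarily down
}

def map_status_to_exception(status_code: int) -> str:
    """Return the OpenAI-style exception name for a provider status code."""
    return STATUS_TO_EXCEPTION.get(status_code, "APIError")
```

With a fallback like this, downstream code can always catch one known family of exceptions even for providers with unusual error codes.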

@neubig requested a review from @saum7800 on September 6, 2023 at 19:46
@neubig (Collaborator) left a comment:

This looks great, thank you for the contribution!

Just a note: a binary file was checked in that should not have been, so I reverted it in #329.

Other than that looks good to go.

@neubig enabled auto-merge (squash) on September 6, 2023 at 22:12
@neubig merged commit e368960 into neulab:main on Sep 6, 2023
8 checks passed
@saum7800 (Collaborator) commented Sep 7, 2023

Awesome, thank you for the contribution @krrishdholakia !
