Add support for other base LLM models #288
Comments
There are two ways to do this:
I think the answer is "both", and I have specific suggestions about how we could do so:
@neubig thanks for mentioning LiteLLM - I'm one of the maintainers of LiteLLM. Happy to make a PR for integrating 😊. Will make one in the next 48 hrs.
Hey @ishaan-jaff! Thank you so much for your comment. I had started working on this a day ago. Happy to see you would like to contribute. Will you be integrating the LiteLLM part of the solution? Then I can focus on structuring the generation from Hugging Face models on top of that. Let me know, thanks!
PR here for tracking: #324. Thanks @krrishdholakia!
Looks like there's a merged PR for this ticket; is this issue still open for integrating more recent base LLMs?
Hey @bilal-aamer! We merged in LiteLLM to take care of this. Should we close the issue @neubig?
Yes, I think this can be closed!
Currently prompt2model only supports using OpenAI as the base LLM that is used for distillation, etc. It would be good to allow other models, including open models, as the base LLM.
One way we might be able to do this is by re-using general purpose inference code in libraries such as Zeno Build or litellm.
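As a rough sketch of what routing through litellm could look like: litellm exposes a single `completion()` call that accepts OpenAI-style model names as well as `provider/model` strings for other backends. The wrapper and helper names below are illustrative assumptions, not prompt2model's actual API.

```python
# Sketch of a unified base-LLM wrapper built on litellm (hypothetical names;
# prompt2model's real interface may differ).

def provider_of(model: str) -> str:
    """Guess the provider from a litellm-style model string.

    litellm conventionally uses 'provider/model' strings for non-OpenAI
    backends (e.g. 'huggingface/bigcode/starcoder'); bare names like
    'gpt-3.5-turbo' route to OpenAI.
    """
    return model.split("/", 1)[0] if "/" in model else "openai"


def complete(model: str, prompt: str) -> str:
    """Run a chat completion through litellm's unified API.

    Requires `pip install litellm` and the relevant provider API key,
    so the import is deferred until the call is actually made.
    """
    import litellm

    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # litellm mirrors the OpenAI response shape.
    return response.choices[0].message.content


print(provider_of("gpt-3.5-turbo"))                  # openai
print(provider_of("huggingface/bigcode/starcoder"))  # huggingface
```

With a wrapper like this, swapping the base LLM becomes a matter of changing one model string rather than touching provider-specific code.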