How can we use Llama2 here? #10
Hello @shivprasad94, sorry for the late reply and thanks for reaching out! TabLLM is LLM-agnostic, so you can use whatever LLM you want. For instance, to use another HuggingFace model you could create a new json config in … You can then use this model configuration in the run configuration few-shot-pretrained-100k.sh in line 18 as … Let us know if you need any further help!
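As a rough illustration, such a model config might look like the sketch below. The file name, field names, and model identifier here are assumptions modeled on the t-few-style configs in the repo's configs/ directory, so compare against an existing config there for the exact keys; note that, per the discussion below, the substitute should be an encoder-decoder model for t-few to work unmodified.

```json
{
    "origin_model": "google/flan-t5-xl",
    "load_weight": ""
}
```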
There seems to be something wrong with t-few when fine-tuning, since LLaMA is not an encoder-decoder model.
Hello @RyanJJP, thanks for this additional comment. You are right, t-few might not work with LLaMA. However, other fine-tuning methods for LLaMA (e.g. QLoRA) should allow similar functionality. This would require larger changes to the codebase, but conceptually it should be similar.
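To make the QLoRA suggestion a bit more concrete: the core idea of LoRA-style adapters (which QLoRA combines with 4-bit quantization of the frozen weights) is to keep the pretrained weight matrix fixed and train only a low-rank update. A minimal numpy sketch of that update, with all dimensions chosen arbitrarily for illustration:

```python
import numpy as np

# LoRA-style low-rank update: instead of updating the full d_out x d_in
# weight matrix W, train two small matrices A (r x d_in) and B (d_out x r)
# and use W_eff = W + (alpha / r) * B @ A at inference time.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 512, 512, 8, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero init)

W_eff = W + (alpha / r) * B @ A             # effective weight

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} vs full: {full_params} "
      f"({lora_params / full_params:.1%})")
```

Because B is zero-initialized, the effective weight equals the pretrained weight before any training, and only about 3% of the parameters are trainable at this rank; in practice one would use a library such as HuggingFace PEFT rather than hand-rolling this.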
I see from the code repo that we are using OpenAI APIs. How can we make this work for open-source models like Llama 2?
Can someone give me details on this and the steps I need to follow?