Finetuning LoRA only vs finetuning LLaMA #158
Unanswered · changchuming asked this question in Q&A
Has anyone quantified this: if we fine-tune a LoRA adapter on information that the base LLaMA model has never seen, how well does the resulting combination (LLaMA + LoRA) understand and present that information, compared to fine-tuning the base LLaMA model directly on the same data?
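For reference, here is a minimal sketch (not from the discussion itself) of how the two conditions being compared could be set up with Hugging Face `transformers` and `peft`: (a) the base LLaMA weights frozen with only the LoRA matrices trainable, vs. (b) full fine-tuning of all base weights. The checkpoint name, target modules, and LoRA hyperparameters are illustrative assumptions, not a prescribed recipe.

```python
# Sketch of the two experimental conditions in the question:
# (a) LoRA-only fine-tuning (base weights frozen) vs. (b) full fine-tuning.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

MODEL_NAME = "path/to/llama-7b"  # placeholder; substitute your LLaMA checkpoint


def build_lora_model():
    """Condition (a): base model frozen, only low-rank adapters are trained."""
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    config = LoraConfig(
        r=8,                                  # rank of the update (assumption)
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically reports <1% trainable
    return model


def build_full_finetune_model():
    """Condition (b): every parameter of the base model is trainable."""
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    for p in model.parameters():
        p.requires_grad = True
    return model
```

Training both setups on the same unseen data and then comparing held-out accuracy on questions about that data would give the quantified comparison asked about here.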