After loading the base model for QLoRA, does it need to be wrapped with prepare_model_for_kbit_training()? #6928
Unanswered
xiaobingbuhuitou asked this question in Q&A
Replies: 1 comment 1 reply
I want to load a model that was originally pretrained in float32 as the base model and train it; I set up a quantization config at load time (config elided).
Do I still need to wrap the model with prepare_model_for_kbit_training() afterwards? 😰 It seems that after wrapping, bnb_4bit_compute_dtype no longer takes effect and the compute dtype reverts to float32.
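For context, a minimal sketch of the load order being asked about, assuming the usual transformers + peft + bitsandbytes stack (the model id and LoRA hyperparameters here are placeholders, not from the thread). Note that prepare_model_for_kbit_training() casts the *non-quantized* modules (e.g. LayerNorm and the lm_head) to float32 for training stability; the bnb_4bit_compute_dtype set in BitsAndBytesConfig still governs the 4-bit matmul compute dtype:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantization config; compute dtype applies to the quantized matmuls.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# "model-id" is a placeholder for the float32-pretrained base model.
model = AutoModelForCausalLM.from_pretrained(
    "model-id", quantization_config=bnb_config
)

# Freezes base weights, upcasts non-quantized layers (LayerNorm, lm_head)
# to float32 for stability, and prepares inputs for gradient checkpointing.
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters on top of the prepared model.
lora_config = LoraConfig(
    r=16, lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

The float32 parameters one observes after wrapping are expected for those auxiliary layers and are separate from the 4-bit compute dtype.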