Garbled output after fine-tuning the model #634
Comments
Does the training loss converge normally? Does it ever become NaN or suddenly increase? A quick check is sketched below.
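A minimal sketch of such a loss sanity check (the helper name and the blow-up threshold are hypothetical, and it assumes the training loop can call it with the scalar loss value at each step):

```python
import math

def check_loss(loss_value: float, step: int) -> None:
    """Flag NaN/Inf or a sudden blow-up in the training loss."""
    if math.isnan(loss_value) or math.isinf(loss_value):
        raise RuntimeError(f"loss is NaN/Inf at step {step}")
    if loss_value > 1e4:  # arbitrary threshold; adjust for your task
        print(f"warning: loss jumped to {loss_value} at step {step}")

# Inside the training loop (hypothetical):
#     check_loss(float(loss), global_step)
```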
Since training involves your data and training strategy, it is hard to say whether those are the cause, and a bug in the code cannot be ruled out either. Suggestions: first, run inference directly on your own data and check whether garbled output appears; second, fine-tune on a small subset of the data first and check again; and if GPU memory allows, try full-parameter fine-tuning on that small subset to rule out a bug in the LoRA parameter merging (see the sketch below).
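As a rough way to rule out a merging bug, one can diff the merged checkpoint against the base weights: if no tensor changed, the LoRA weights were never merged in; if any tensor contains NaN/Inf, the merge itself is suspect. A minimal sketch, assuming both checkpoints are single .pdparams state dicts (the file paths are hypothetical):

```python
import paddle

# Hypothetical paths; point these at your actual checkpoints.
base_state = paddle.load("llava-v1.6-vicuna-7b/model_state.pdparams")
merged_state = paddle.load("merged_lora_model/model_state.pdparams")

changed, broken = 0, 0
for name, base_w in base_state.items():
    if name not in merged_state:
        print(f"missing in merged checkpoint: {name}")
        continue
    merged_w = merged_state[name]
    if paddle.isnan(merged_w).any() or paddle.isinf(merged_w).any():
        broken += 1
        print(f"NaN/Inf in merged weight: {name}")
    diff = (merged_w.astype("float32") - base_w.astype("float32")).abs().max()
    if float(diff) > 1e-6:
        changed += 1  # this tensor was actually modified by the merge

print(f"{changed} tensors changed by the merge, {broken} tensors contain NaN/Inf")
```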
Could you provide a sample data item so we can try to reproduce this locally?
At inference time, try setting paddle.seed(0) to fix the random seed.
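For example, a minimal sketch of fixing the seeds before generation (only the seeding is shown; build the model and run generation as usual afterwards):

```python
import random
import numpy as np
import paddle

def set_seed(seed: int = 0) -> None:
    """Fix the random seeds so sampling-based generation is reproducible."""
    random.seed(seed)
    np.random.seed(seed)
    paddle.seed(seed)

set_seed(0)
# With the seeds fixed, repeated runs should give identical output, which makes
# it easier to tell whether the garbled text is deterministic or sampling noise.
```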
Original issue: I performed LoRA fine-tuning of the paddlemix/llava/llava-v1.6-vicuna-7b model on my own data. After training I merged the LoRA parameters, and at inference time the merged model produces garbled output.