
Unexpected behavior with LLaMA 3.1 8b instruct model after reinstallation of unsloth and libraries #1588

Open
andrijafactoryww opened this issue Jan 28, 2025 · 1 comment

Comments

@andrijafactoryww

I am using the unsloth library. I last installed unsloth on December 7, 2024; from that same commit, I trained a LoRA adapter, and everything worked fine.

However, my hard drive failed, and I had to reinstall all libraries. After reinstallation, the same model with the same LoRA adapter produces completely different outputs. For example, the responses are only:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

I am using LLaMA 3.1 8B Instruct in 4-bit mode. When I try to load the previously downloaded LLaMA 3.1 8B Instruct model, I encounter the following error:

File "/root/miniconda/envs/unsloth_env/lib/python3.10/site-packages/bitsandbytes/utils.py", line 197, in unpack_tensor_to_dict
    json_str = json_bytes.decode("utf-8")
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf2 in position 8: invalid continuation byte
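For context, the exception itself only says that `unpack_tensor_to_dict` received bytes that are not valid UTF-8; bitsandbytes stores quantization metadata as UTF-8 JSON packed into a tensor, so corrupted or version-mismatched weight files surface as exactly this decode failure. A minimal, self-contained reproduction (the payload here is made up, not real quantization metadata):

```python
# 0xf2 begins a 4-byte UTF-8 sequence, but the following 'z' (0x7a) is not a
# valid continuation byte, so decoding fails at the position of 0xf2.
payload = b"abcdefgh\xf2zz"

try:
    payload.decode("utf-8")
except UnicodeDecodeError as e:
    print(e)  # 'utf-8' codec can't decode byte 0xf2 in position 8: invalid continuation byte
```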

I suspect there might be a version mismatch or corruption in the downloaded model files. Could you please help me debug this issue?
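One way to test the file-corruption theory is to hash the cached model shards and compare the digests against the checksums shown on the model's "Files" page on the Hugging Face Hub (LFS files list a SHA-256 there). A small stdlib-only sketch; the cache path in the comment is an assumption, and the demo file stands in for a real shard:

```python
import hashlib
from pathlib import Path

def sha256sum(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so multi-GB safetensors shards fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway file; in practice, point this at the cached shards,
# e.g. files under ~/.cache/huggingface/hub/ (exact location may vary).
demo = Path("demo.bin")
demo.write_bytes(b"hello")
print(sha256sum(demo))
```

If any shard's digest disagrees with the Hub's, deleting that file and re-downloading the model would rule corruption in or out.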

@danielhanchen
Contributor

Could you try uninstalling bitsandbytes and reinstalling it?

```
pip uninstall bitsandbytes
pip install bitsandbytes
```
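After the reinstall, it can also help to confirm which versions actually ended up in the environment, since the original report suggests a version mismatch. A stdlib-only sketch (the package list is an assumption; it queries installed metadata without importing the heavy packages):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_versions(packages):
    """Return {package_name: version string, or None if not installed}."""
    out = {}
    for pkg in packages:
        try:
            out[pkg] = version(pkg)
        except PackageNotFoundError:
            out[pkg] = None
    return out

for pkg, ver in installed_versions(
    ["unsloth", "bitsandbytes", "transformers", "torch"]
).items():
    print(pkg, ver or "not installed")
```

Comparing this output against a `pip freeze` saved from the December environment (if one survived the disk failure) would pinpoint which dependency changed.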
