Does it support Qwen1.5 Model? #78
Comments
Same question here. llama.cpp's conversion script doesn't seem to convert it correctly.
You can try ollama; a single command is enough to try it out.
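For reference, here is a minimal sketch of querying a Qwen1.5 model through ollama's Python client, assuming the `ollama` package is installed and the model has already been pulled (e.g. with `ollama run qwen:0.5b`); the model tag shown is only an example and may differ for your variant.

```python
# Minimal sketch using the ollama Python client.
# Assumes `pip install ollama` and that the model has been pulled locally;
# "qwen:0.5b" is an example tag, substitute the Qwen1.5 size you actually use.
import ollama

response = ollama.chat(
    model="qwen:0.5b",
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
)
print(response["message"]["content"])
```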
How did your tests go? I tried qwen1.5-0.5b with ollama and the output quality was very poor.
0.5B is just too small. Getting good results at that size is very hard right now.
Did you ever get this sorted out?
There are GGUF versions on Hugging Face that you can use directly.
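If you just want the official pre-converted GGUF files, a minimal sketch using `huggingface_hub` to download one; the repo id and filename below are examples, so check the model page for the exact files available.

```python
# Minimal sketch: download a pre-converted GGUF from Hugging Face.
# The repo id and filename are examples; browse the repo's "Files" tab for the
# quantization you actually want.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="Qwen/Qwen1.5-7B-Chat-GGUF",
    filename="qwen1_5-7b-chat-q4_k_m.gguf",
)
print("GGUF saved to:", local_path)
```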
If it's a version I fine-tuned myself, how do I convert it to GGUF?
Hey, did you ever solve the problem of converting a self-fine-tuned model to GGUF?
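For a self-fine-tuned checkpoint, the usual route is llama.cpp's converter script. Below is a minimal sketch under the assumption that the fine-tuned weights are a full merged Hugging Face checkpoint (LoRA adapters would need to be merged into the base model first) and that llama.cpp is checked out locally; the paths, script name, and flags should be verified against your llama.cpp version.

```python
# Minimal sketch: convert a fine-tuned Qwen1.5 checkpoint (full merged HF format)
# to GGUF by calling llama.cpp's converter. Paths and flags are assumptions;
# check `python convert-hf-to-gguf.py --help` in your llama.cpp checkout, since
# the script and its options have changed across versions.
import subprocess
from pathlib import Path

LLAMA_CPP = Path("~/llama.cpp").expanduser()                  # assumed checkout path
MODEL_DIR = Path("~/qwen1.5-7b-chat-finetuned").expanduser()  # your merged fine-tuned model
OUT_FILE = MODEL_DIR / "qwen1.5-7b-chat-finetuned-f16.gguf"

subprocess.run(
    [
        "python",
        str(LLAMA_CPP / "convert-hf-to-gguf.py"),  # llama.cpp's HF-to-GGUF converter script
        str(MODEL_DIR),
        "--outfile", str(OUT_FILE),
        "--outtype", "f16",
    ],
    check=True,
)
print("Wrote", OUT_FILE)
```

The resulting f16 GGUF can then be quantized with llama.cpp's quantize tool or loaded into ollama through a Modelfile, though the exact steps depend on the tool versions you have.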