Hi!
I would like to ask whether llama.cpp can convert a multimodal model (e.g. Qwen2.5-VL-3B) to GGUF format and quantize it to Q4_0. After building llama.cpp, is there a corresponding tool to run such a model?
If not, are there any plans to develop this?
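For reference, here is a rough sketch of the workflow I have in mind, based on how text-only models are usually converted and quantized with llama.cpp. The last step (the multimodal CLI and the separate mmproj file for the vision encoder) is an assumption on my part; the exact tool name and flags may differ depending on the build and on whether Qwen2.5-VL's vision encoder is supported at all.

```bash
# Convert the Hugging Face checkpoint to an f16 GGUF
# (convert_hf_to_gguf.py ships with llama.cpp).
python convert_hf_to_gguf.py ./Qwen2.5-VL-3B-Instruct \
    --outtype f16 --outfile qwen2.5-vl-3b-f16.gguf

# Quantize to Q4_0 with the llama-quantize tool built alongside llama.cpp.
./llama-quantize qwen2.5-vl-3b-f16.gguf qwen2.5-vl-3b-q4_0.gguf Q4_0

# Running with images would presumably need a separate vision-projector GGUF
# (mmproj) and a multimodal CLI; the tool name and flags below are guesses.
./llama-mtmd-cli -m qwen2.5-vl-3b-q4_0.gguf \
    --mmproj mmproj-qwen2.5-vl-3b-f16.gguf \
    --image ./example.jpg -p "Describe this image."
```

Is something along these lines already supported, or does the vision part need model-specific work first?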