ValueError: Trying to export a qwen2-vl model, that is a custom or unsupported architecture, but no custom export configuration was passed as custom_export_configs
#1137
Comments
@azhuvath, which optimum-intel version do you use? Please try to update it according to the suggestion from:
It works now. Should I create a different issue for Qwen2.5 as opposed to Qwen2VL?

(ov_env) PS C:\Users\devcloud\Desktop\Scribbler\Qwen2VL> optimum-cli export openvino --model "Qwen/Qwen2.5-VL-7B-Instruct" --trust-remote-code --weight-format int4 "Qwen2.5-VL-7B-Instruct-OV-INT4"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
@azhuvath this error shows that your installed transformers version does not yet support the qwen2.5-vl model.
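The diagnosis above boils down to a version comparison. The sketch below illustrates it; the minimum versions are assumptions (qwen2-vl support is assumed to have landed in transformers 4.45.0 and qwen2.5-vl in 4.49.0; verify against the transformers release notes), and it handles plain X.Y.Z version strings only.

```python
# Hypothetical version check: is the installed transformers new enough
# for a given model family? Minimum versions below are assumptions.
def version_tuple(v: str) -> tuple:
    # Parse a plain "X.Y.Z" version string into a comparable tuple.
    return tuple(int(part) for part in v.split(".")[:3])

def supports_model(installed: str, minimum: str) -> bool:
    return version_tuple(installed) >= version_tuple(minimum)

print(supports_model("4.44.2", "4.49.0"))  # an older install cannot load qwen2.5-vl
print(supports_model("4.49.0", "4.49.0"))
```

Note that optimum-intel pins its own supported transformers range, so the newest transformers is not always the right target either, as the later comments in this thread show.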
@eaidova I have tried the latest transformers with the following command.
It looks like the optimum-intel package is not compatible with the latest transformers, because I get the error below when the latest transformers package is installed.
I am hitting the same error. Is there a quick way to convert the qwen2.5-vl model to OpenVINO format via optimum?
Trying to quantize the Qwen model using the below command.
optimum-cli export openvino --model "Qwen/Qwen2-VL-2B-Instruct" --trust-remote-code --weight-format fp16 "Qwen2-VL-2B-Instruct-OV-FP16"
Getting the following error. Do I need to use a different version of optimum-intel[openvino]?
(ov_env) PS C:\Users\devcloud\Desktop\Scribbler\Qwen2VL> optimum-cli export openvino --model "Qwen/Qwen2-VL-2B-Instruct" --trust-remote-code --weight-format fp16 "Qwen2-VL-2B-Instruct-OV-FP16"
C:\Python310\lib\importlib\util.py:247: DeprecationWarning: The openvino.runtime module is deprecated and will be removed in the 2026.0 release. Please replace openvino.runtime with openvino.
  self.spec.loader.exec_module(self)
Traceback (most recent call last):
  File "C:\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\Scripts\optimum-cli.exe\__main__.py", line 7, in <module>
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\optimum\commands\optimum_cli.py", line 208, in main
    service.run()
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\optimum\commands\export\openvino.py", line 390, in run
    main_export(
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\optimum\exporters\openvino\__main__.py", line 255, in main_export
    raise ValueError(
ValueError: Trying to export a qwen2-vl model, that is a custom or unsupported architecture, but no custom export configuration was passed as custom_export_configs. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum-intel/issues if you would like the model type qwen2-vl to be supported natively in the OpenVINO export.
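Since the maintainers' first question in this thread is which package versions are installed, a small standard-library-only diagnostic like the sketch below can shorten the back-and-forth. The distribution names are the ones used on PyPI; adjust as needed for your environment.

```python
# Quick environment report for issues like this one: print the installed
# versions of the packages that decide whether the export is supported.
from importlib.metadata import version, PackageNotFoundError

def installed_version(dist: str) -> str:
    # Look up the version of an installed distribution by its PyPI name.
    try:
        return version(dist)
    except PackageNotFoundError:
        return "not installed"

for dist in ("transformers", "optimum", "optimum-intel", "openvino"):
    print(dist, installed_version(dist))
```

Pasting this output into the issue lets maintainers check it against the transformers range the installed optimum-intel release was tested with.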