
ValueError: Trying to export a qwen2-vl model, that is a custom or unsupported architecture, but no custom export configuration was passed as custom_export_configs #1137

Open
azhuvath opened this issue Jan 31, 2025 · 5 comments · May be fixed by #1163

Comments

@azhuvath

I am trying to quantize the Qwen model using the command below.

optimum-cli export openvino --model "Qwen/Qwen2-VL-2B-Instruct" --trust-remote-code --weight-format fp16 "Qwen2-VL-2B-Instruct-OV-FP16"

I am getting the following error. Do I need to use a different version of optimum-intel[openvino]?

(ov_env) PS C:\Users\devcloud\Desktop\Scribbler\Qwen2VL> optimum-cli export openvino --model "Qwen/Qwen2-VL-2B-Instruct" --trust-remote-code --weight-format fp16 "Qwen2-VL-2B-Instruct-OV-FP16"
C:\Python310\lib\importlib\util.py:247: DeprecationWarning: The `openvino.runtime` module is deprecated and will be removed in the 2026.0 release. Please replace `openvino.runtime` with `openvino`.
  self.__spec__.loader.exec_module(self)
Traceback (most recent call last):
  File "C:\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\Scripts\optimum-cli.exe\__main__.py", line 7, in <module>
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\optimum\commands\optimum_cli.py", line 208, in main
    service.run()
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\optimum\commands\export\openvino.py", line 390, in run
    main_export(
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\optimum\exporters\openvino\__main__.py", line 255, in main_export
    raise ValueError(
ValueError: Trying to export a qwen2-vl model, that is a custom or unsupported architecture, but no custom export configuration was passed as custom_export_configs. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum-intel/issues if you would like the model type qwen2-vl to be supported natively in the OpenVINO export.

@eaidova
Collaborator

eaidova commented Feb 3, 2025

@azhuvath, which optimum-intel version are you using? Please try updating it according to the suggestion in:
openvinotoolkit/openvino.genai#1656 (comment)
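Before re-running the export, it can help to confirm which versions are actually installed in the environment. A minimal stdlib-only sketch (the distribution names below are assumptions based on the usual PyPI names; adjust if your environment differs):

```python
# Print the installed versions of the packages involved in the export
# (assumes these PyPI distribution names).
from importlib.metadata import PackageNotFoundError, version

for pkg in ("optimum", "optimum-intel", "transformers", "openvino"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```

If optimum-intel is older than the version suggested in the linked comment, upgrading it (e.g. `pip install -U optimum-intel[openvino]`) should pick up the qwen2-vl export support.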

@azhuvath
Author

azhuvath commented Feb 3, 2025

> @azhuvath, which optimum-intel version do you use? please try to update it according to suggestion from: openvinotoolkit/openvino.genai#1656 (comment)

It works now.

Should I create a separate issue for Qwen2.5-VL, as opposed to Qwen2-VL?
optimum-cli export openvino --model "Qwen/Qwen2.5-VL-7B-Instruct" --trust-remote-code --weight-format int4 "Qwen2.5-VL-7B-Instruct-OV-INT4"

(ov_env) PS C:\Users\devcloud\Desktop\Scribbler\Qwen2VL> optimum-cli export openvino --model "Qwen/Qwen2.5-VL-7B-Instruct" --trust-remote-code --weight-format int4 "Qwen2.5-VL-7B-Instruct-OV-INT4"
C:\Python310\lib\importlib\util.py:247: DeprecationWarning: The `openvino.runtime` module is deprecated and will be removed in the 2026.0 release. Please replace `openvino.runtime` with `openvino`.
  self.__spec__.loader.exec_module(self)
config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.37k/1.37k [00:00<?, ?B/s]
Traceback (most recent call last):
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 1034, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 736, in __getitem__
    raise KeyError(key)
KeyError: 'qwen2_5_vl'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\Scripts\optimum-cli.exe\__main__.py", line 7, in <module>
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\optimum\commands\optimum_cli.py", line 208, in main
    service.run()
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\optimum\commands\export\openvino.py", line 454, in run
    model = model_cls.from_pretrained(
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\optimum\intel\openvino\modeling_base.py", line 469, in from_pretrained
    return super().from_pretrained(
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\optimum\modeling_base.py", line 416, in from_pretrained
    config = cls._load_config(
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\optimum\modeling_base.py", line 247, in _load_config
    config = AutoConfig.from_pretrained(
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 1036, in from_pretrained
    raise ValueError(
ValueError: The checkpoint you are trying to load has model type qwen2_5_vl but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

@eaidova
Collaborator

eaidova commented Feb 3, 2025

@azhuvath, this error shows that your installed transformers version does not support the qwen2.5-vl model yet.
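One way to check whether an installed transformers build registers a given model type is to probe its auto-config registry. This is a hedged sketch: `CONFIG_MAPPING` is a transformers internal and its location may change between versions.

```python
# Check whether transformers recognizes a model_type string such as the one
# in a checkpoint's config.json ("qwen2_5_vl" for Qwen2.5-VL).
def supports_model_type(model_type: str) -> bool:
    try:
        from transformers.models.auto.configuration_auto import CONFIG_MAPPING
    except ImportError:
        return False  # transformers is not installed
    return model_type in CONFIG_MAPPING

print(supports_model_type("qwen2_5_vl"))
```

If this prints False, `AutoConfig.from_pretrained` will raise the "does not recognize this architecture" ValueError shown above, regardless of which optimum-intel version is installed.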

@nvsreerag

@eaidova I have tried the latest transformers, installed with pip install git+https://github.com/huggingface/transformers, which has support for the qwen2.5-vl model. But I am getting the error below, which is the same one @azhuvath reported.

(qwen_env) C:\Users\snvx\Environments>optimum-cli export openvino --model Qwen/Qwen2.5-VL-3B-Instruct Qwen2.5-VL-3B-Instruct\INT4 --weight-format int4
C:\Users\snvx\AppData\Local\miniforge3\lib\importlib\util.py:247: DeprecationWarning: The `openvino.runtime` module is deprecated and will be removed in the 2026.0 release. Please replace `openvino.runtime` with `openvino`.
  self.__spec__.loader.exec_module(self)
Traceback (most recent call last):
  File "C:\Users\snvx\AppData\Local\miniforge3\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\snvx\AppData\Local\miniforge3\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\snvx\Environments\qwen_env\Scripts\optimum-cli.exe\__main__.py", line 7, in <module>
  File "C:\Users\snvx\Environments\qwen_env\lib\site-packages\optimum\commands\optimum_cli.py", line 208, in main
    service.run()
  File "C:\Users\snvx\Environments\qwen_env\lib\site-packages\optimum\commands\export\openvino.py", line 454, in run
    model = model_cls.from_pretrained(
  File "C:\Users\snvx\Environments\qwen_env\lib\site-packages\optimum\intel\openvino\modeling_base.py", line 469, in from_pretrained
    return super().from_pretrained(
  File "C:\Users\snvx\Environments\qwen_env\lib\site-packages\optimum\modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "C:\Users\snvx\Environments\qwen_env\lib\site-packages\optimum\intel\openvino\modeling_visual_language.py", line 620, in _from_transformers
    main_export(
  File "C:\Users\snvx\Environments\qwen_env\lib\site-packages\optimum\exporters\openvino\__main__.py", line 264, in main_export
    raise ValueError(
ValueError: Trying to export a qwen2-5-vl model, that is a custom or unsupported architecture, but no custom export configuration was passed as `custom_export_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum-intel/issues if you would like the model type qwen2-5-vl to be supported natively in the OpenVINO export.

It looks like the optimum-intel package is not compatible with the latest transformers, because I get the following error when the latest transformers package is installed:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
optimum-intel 1.22.0.dev0+f601b8b requires transformers<4.49,>=4.36, but you have transformers 4.49.0.dev0 which is incompatible.
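The conflict above is a PEP 440 subtlety: a pre-release such as 4.49.0.dev0 does not satisfy `transformers<4.49,>=4.36`, because an exclusive upper bound rejects pre-releases of the excluded version. A sketch using the `packaging` library (the same version logic pip vendors):

```python
from packaging.specifiers import SpecifierSet

# optimum-intel's transformers pin from the error message above
spec = SpecifierSet("<4.49,>=4.36")

print(spec.contains("4.48.0"))       # stable release inside the range -> True
print(spec.contains("4.49.0.dev0"))  # pre-release of the excluded version -> False
```

So installing transformers from git (which was at 4.49.0.dev0) will keep tripping this pin until optimum-intel relaxes it, presumably in the linked pull request.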

@mnoiy

mnoiy commented Feb 13, 2025

> @eaidova I have tried latest Transformers with the following command pip install git+https://github.com/huggingface/transformers which have support for qwen2.5vl model. But iam getting the below error which is same as @azhuvath reported. [...]

I am hitting the same error. Is there a quick way to convert a Qwen2.5-VL model to OpenVINO format via optimum?

@eaidova eaidova linked a pull request Feb 17, 2025 that will close this issue

4 participants