
Add OpenVINO export CLI #437

Merged — 4 commits merged into main on Sep 28, 2023

Conversation

@echarlaix (Collaborator) commented Sep 27, 2023

Example:

```
optimum-cli export openvino --model distilbert-base-uncased-finetuned-sst-2-english distilbert_openvino
```

It could also make sense to add weight-only quantization in a follow-up PR. What do you think? @helena-intel @AlexKoff88
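For context, a minimal sketch of how the exported directory could then be consumed from Python via optimum-intel (the `distilbert_openvino` path and the example sentence are just illustrative):

```python
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

# Load the OpenVINO IR produced by the CLI export above.
model = OVModelForSequenceClassification.from_pretrained("distilbert_openvino")
tokenizer = AutoTokenizer.from_pretrained("distilbert_openvino")

# Run inference through the standard transformers pipeline API.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("OpenVINO export from the CLI works nicely."))
```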

@HuggingFaceDocBuilderDev commented Sep 27, 2023

The documentation is not available anymore as the PR was closed or merged.

@echarlaix marked this pull request as ready for review on September 27, 2023
@helena-intel (Collaborator) left a comment

Thanks @echarlaix, I love this! I also really like that it exports the tokenizer too. Agreed that weight compression for CausalLM models would be great to add as well. It would also be good to have an FP16 option; we could even consider exporting to FP16 by default, which is what OpenVINO now does when you convert/save a model, with an option to disable it.

The example worked without errors, but it exported the distilbert model as model.xml and model.bin instead of openvino_model.xml and openvino_model.bin. And for gpt2 it exported decoder_model.xml/.bin and decoder_model_with_past.xml/.bin.
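For reference, a sketch of the save-time FP16 compression behavior mentioned above, using the openvino Python API (the `model.onnx` input path is just a placeholder):

```python
import openvino as ov

# Convert a source model (e.g. an exported ONNX file) to OpenVINO IR.
model = ov.convert_model("model.onnx")

# compress_to_fp16 defaults to True in recent OpenVINO releases,
# so weights are stored as FP16 unless explicitly disabled.
ov.save_model(model, "openvino_model.xml", compress_to_fp16=True)
```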

@echarlaix (Collaborator, Author)

> Thanks @echarlaix, I love this! I also really like that it exports the tokenizer too. Agreed that weight compression for CausalLM models would be great to add as well. It would also be good to have an FP16 option; we could even consider exporting to FP16 by default, which is what OpenVINO now does when you convert/save a model, with an option to disable it.

Yes, I agree it would be good to have FP16 support directly in the CLI as well!

> The example worked without errors, but it exported the distilbert model as model.xml and model.bin instead of openvino_model.xml and openvino_model.bin. And for gpt2 it exported decoder_model.xml/.bin and decoder_model_with_past.xml/.bin.

Yes, I also noticed this and added a fix to main_export in #439. Thanks a lot for finding and reporting it!

@AlexKoff88 (Collaborator)

> Example:
>
> ```
> optimum-cli export openvino --model distilbert-base-uncased-finetuned-sst-2-english distilbert_openvino
> ```
>
> It could also make sense to add weight-only quantization in a follow-up PR. What do you think? @helena-intel @AlexKoff88

It totally makes sense.
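As a rough illustration of what such weight-only quantization could look like on an exported decoder, a sketch assuming NNCF's compress_weights API (file names are placeholders):

```python
import nncf
import openvino as ov

core = ov.Core()
# Read an exported decoder IR; the file name is just a placeholder.
model = core.read_model("decoder_model.xml")

# Weight-only compression: NNCF quantizes the weights (INT8 by default)
# while activations stay in floating point.
compressed = nncf.compress_weights(model)
ov.save_model(compressed, "decoder_model_int8.xml")
```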

@echarlaix merged commit 25fc757 into main on Sep 28, 2023 — 11 of 12 checks passed
@echarlaix deleted the add-ov-cli branch on September 28, 2023