
Request: Need models in onnx and .pt format (not just .bin and .config) #38

Open
311-code opened this issue Feb 5, 2024 · 6 comments

311-code commented Feb 5, 2024

I'm requesting the model in another format because I cannot convert it myself without the proper model configuration file (I've tried). I need the models in ONNX or .pt format, specifically for a Unity application called DepthViewer. Can we make this happen?

Here is a list of models and the formats they are available in; as you can see, Depth-Anything has ONNX, and I was hoping Marigold could provide this as well: https://airtable.com/appjWiS91OlaXXtf0/shrchKmROzpsq0HFw/tblviBOLphAw5Befd
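
For the .pt part of the request, one possible route is tracing individual diffusers components to TorchScript. A minimal, untested sketch for the UNet (the wrapper, shapes, and output file name below are illustrative assumptions, not code from this repo):

import torch
from diffusers import UNet2DConditionModel

# Load Marigold's UNet from the diffusers checkpoint
unet = UNet2DConditionModel.from_pretrained("Bingxin/Marigold", subfolder="unet")
unet.eval()

# Wrap the model so tracing sees plain tensors rather than an output dataclass
class UNetWrapper(torch.nn.Module):
    def __init__(self, unet):
        super().__init__()
        self.unet = unet

    def forward(self, sample, timestep, encoder_hidden_states):
        return self.unet(sample, timestep, encoder_hidden_states, return_dict=False)[0]

# Dummy inputs: Marigold's UNet takes the RGB and depth latents concatenated
# (8 channels); 96x96 latents correspond to a 768x768 input image
sample = torch.randn(1, 8, 96, 96)
timestep = torch.tensor([999])
encoder_hidden_states = torch.randn(1, 77, 1024)

with torch.no_grad():
    traced = torch.jit.trace(UNetWrapper(unet), (sample, timestep, encoder_hidden_states))
traced.save("marigold_unet.pt")

The VAE encoder and decoder could be traced the same way; the empty-prompt text embedding only needs to be computed once, so it could simply be saved as a tensor.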

@julienkay

I converted Marigold to ONNX and uploaded it to HF in case it's useful to you: https://huggingface.co/julienkay/Marigold

@alex-seville

would love to have it in CoreML too!

toshas (Collaborator) commented Jun 11, 2024

Please share scripts that automate these steps here; we will consider including these harness bits in the repository at a later stage.

toshas (Collaborator) commented Jun 11, 2024

@julienkay I see the folder structure is different: the VAE encoder and decoder were moved to the top level. Is there a way to keep the original structure? If this was done intentionally, how is this ONNX checkpoint used?

@julienkay

Essentially I've just used the conversion script from diffusers.

Here is the code (probably not all of these packages are required):

# Install the conversion dependencies (some of these are likely unnecessary)
!pip install wheel wget
!pip install git+https://github.com/huggingface/diffusers.git
!pip install transformers onnxruntime onnx torch ftfy spacy scipy accelerate
!pip install onnxruntime-directml --force-reinstall
# Pin protobuf to avoid onnx/onnxruntime compatibility issues
!pip install protobuf==3.20.2

# Fetch the Stable Diffusion -> ONNX conversion script from the diffusers repo
!python -m wget https://raw.githubusercontent.com/huggingface/diffusers/main/scripts/convert_stable_diffusion_checkpoint_to_onnx.py -o convert_stable_diffusion_checkpoint_to_onnx.py
!mkdir model

# Convert the Marigold checkpoint (diffusers format) to ONNX
!python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="Bingxin/Marigold" --output_path="model/marigold_onnx"
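
For anyone picking this up, here is a quick sanity check of the exported files, assuming the standard diffusers ONNX layout (unet/, vae_encoder/, vae_decoder/, and text_encoder/ folders, each containing a model.onnx):

import onnxruntime as ort

# Print each component's input/output tensors to confirm the export worked
for component in ["unet", "vae_encoder", "vae_decoder", "text_encoder"]:
    session = ort.InferenceSession(
        f"model/marigold_onnx/{component}/model.onnx",
        providers=["CPUExecutionProvider"],
    )
    print(component)
    for tensor in session.get_inputs():
        print("  input: ", tensor.name, tensor.shape, tensor.type)
    for tensor in session.get_outputs():
        print("  output:", tensor.name, tensor.shape, tensor.type)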


julienkay commented Jun 11, 2024

> @julienkay I see the folder structure is different: the VAE encoder and decoder were moved to the top level. Is there a way to keep the original structure? If this was done intentionally, how is this ONNX checkpoint used?

AFAIK, separate encoder/decoder folders seem to be the "standard" layout for ONNX-based pipelines in diffusers.
Like the OP, my interest was mostly in using the ONNX checkpoints in another inference framework (in this case Unity Sentis). I guess using them in Python would require adding a separate ONNX-specific pipeline, like the ones found in diffusers/optimum for the most common pipelines such as SD 1.5 / XL.
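
For reference, such an ONNX-specific pipeline would roughly chain the exported components as below. This is an untested sketch, not code from this repo: the tensor names ("sample", "timestep", "encoder_hidden_states", "latent_sample") follow the diffusers conversion script's exports, the 0.18215 latent scaling and the 8-channel UNet input follow Marigold's design, and the zero text embedding is a placeholder for the real empty-prompt embedding from the exported text encoder.

import numpy as np
import torch
import onnxruntime as ort
from diffusers import DDIMScheduler

root = "model/marigold_onnx"
vae_enc = ort.InferenceSession(f"{root}/vae_encoder/model.onnx")
vae_dec = ort.InferenceSession(f"{root}/vae_decoder/model.onnx")
unet = ort.InferenceSession(f"{root}/unet/model.onnx")

scheduler = DDIMScheduler.from_pretrained("Bingxin/Marigold", subfolder="scheduler")
scheduler.set_timesteps(10)

# Stand-in RGB input in [-1, 1]; a real pipeline would preprocess an image here
rgb = (np.random.rand(1, 3, 768, 768).astype(np.float32) * 2) - 1
rgb_latent = vae_enc.run(None, {"sample": rgb})[0] * 0.18215
depth_latent = np.random.randn(*rgb_latent.shape).astype(np.float32)

# Placeholder for the empty-prompt embedding (should come from the text encoder)
empty_embed = np.zeros((1, 77, 1024), dtype=np.float32)

for t in scheduler.timesteps:
    # Marigold's UNet takes the RGB and depth latents concatenated (8 channels)
    unet_in = np.concatenate([rgb_latent, depth_latent], axis=1)
    noise_pred = unet.run(None, {
        "sample": unet_in,
        "timestep": np.array([float(t)], dtype=np.float32),
        "encoder_hidden_states": empty_embed,
    })[0]
    depth_latent = scheduler.step(
        torch.from_numpy(noise_pred), t, torch.from_numpy(depth_latent)
    ).prev_sample.numpy()

depth = vae_dec.run(None, {"latent_sample": depth_latent / 0.18215})[0]
depth = depth.mean(axis=1)  # Marigold reads depth off as the mean over channels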
