
ONNX working in chaiNNer and not in vsmlrt? #119

Open
zelenooki87 opened this issue Dec 23, 2024 · 3 comments
zelenooki87 commented Dec 23, 2024

Hello. I have a problem when running a 1x model in vsmlrt. Specifically, the model works correctly in Chainner, but in the vsmlrt plugin, I get this error.
```
2024-12-23 12:14:30.482
Failed to evaluate the script:
Python exception: operator (): 'ortapi->CreateSessionFromArray( d->environment, std::data(onnx_data), std::size(onnx_data), session_options, &resource.session )' failed: Node (/G/down3_4/body/body.1/Reshape) Op (Reshape) [ShapeInferenceError] Dimension could not be inferred: incompatible shapes

Traceback (most recent call last):
  File "src\cython\vapoursynth.pyx", line 3365, in vapoursynth._vpy_evaluate
  File "src\cython\vapoursynth.pyx", line 3366, in vapoursynth._vpy_evaluate
  File "C:\Users\Miki\AppData\Local\Temp\tempPreviewVapoursynthFile12_14_29_559.vpy", line 49, in <module>
    clip = vsmlrt.inference([clip],network_path="C:/Users/Miki/Desktop/hex/1x.onnx", backend=Backend.ORT_DML(fp16=False,device_id=0,num_streams=1)) # 1280x960
  File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2765, in inference
    return inference_with_fallback(
  File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2739, in inference_with_fallback
    raise e
  File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2716, in inference_with_fallback
    ret = _inference(
  File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2466, in _inference
    ret = core.ort.Model(
  File "src\cython\vapoursynth.pyx", line 3101, in vapoursynth.Function.call
vapoursynth.Error: operator (): 'ortapi->CreateSessionFromArray( d->environment, std::data(onnx_data), std::size(onnx_data), session_options, &resource.session )' failed: Node (/G/down3_4/body/body.1/Reshape) Op (Reshape) [ShapeInferenceError] Dimension could not be inferred: incompatible shapes
```

Additionally, I have a problem with a 2x model because it only works in DML mode and fails to convert to TensorRT.
TRTEXEC 2x conversion error:

```
2024-12-23 12:16:50.315
Failed to evaluate the script:
Python exception: trtexec execution fails, log has been written to C:\Users\Miki\AppData\Local\Temp\trtexec_241223_121645.log

Traceback (most recent call last):
  File "src\cython\vapoursynth.pyx", line 3365, in vapoursynth._vpy_evaluate
  File "src\cython\vapoursynth.pyx", line 3366, in vapoursynth._vpy_evaluate
  File "C:\Users\Miki\AppData\Local\Temp\tempPreviewVapoursynthFile12_16_45_329.vpy", line 43, in <module>
    clip = vsmlrt.inference([clip],network_path="C:/Users/Miki/Desktop/hex/RADI/2x.onnx", backend=Backend.TRT(fp16=False,device_id=0,num_streams=1,verbose=True,use_cuda_graph=True,workspace=1073741824,builder_optimization_level=3,engine_folder="C:/Users/Miki/AppData/Local/Temp")) # 1280x960
  File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2765, in inference
    return inference_with_fallback(
  File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2739, in inference_with_fallback
    raise e
  File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2716, in inference_with_fallback
    ret = _inference(
  File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2582, in _inference
    engine_path = trtexec(
  File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2138, in trtexec
    raise RuntimeError(f"trtexec execution fails, log has been written to {log_filename}")
RuntimeError: trtexec execution fails, log has been written to C:\Users\Miki\AppData\Local\Temp\trtexec_241223_121645.log
```

Based on these errors, do you have any idea what the issue might be? These models are commercial, so I would prefer not to share them publicly. If you'd like, you can leave your email address, and I can send them to you. Thank you.

zelenooki87 (Author) commented

After reviewing the log, I renamed the model's input and output tensors to "input" and "output", and it worked.

zelenooki87 (Author) commented

I'm reopening this issue. Is it somehow possible to use the vsmlrt plugin with a BHWC ONNX model, instead of the standard BCHW? And how can I enable the flexible output option (if that would help) when building the TRT engine? I'm using Hybrid. Thank you very much in advance.

zelenooki87 reopened this Dec 28, 2024
zelenooki87 (Author) commented

I changed the channel order from BHWC to BCHW and converted the input tensor from static to dynamic dimensions. However, I still get the error below; how can I get rid of it? @WolframRhodium, any help is appreciated. Thanks a lot in advance.

```
[12/29/2024-20:33:39] [I] Set shape of input tensor input for optimization profile 0 to: MIN=1x3x240x320 OPT=1x3x240x320 MAX=1x3x240x320
[12/29/2024-20:33:39] [V] [TRT] Trying to set exclusive file lock C:/Users/Miki/AppData/Local/Temp\b62e5f38.engine.cache.lock
[12/29/2024-20:33:39] [W] [TRT] Could not read timing cache from: C:/Users/Miki/AppData/Local/Temp\b62e5f38.engine.cache. A new timing cache will be generated and written.
[12/29/2024-20:33:39] [E] Error[4]: IBuilder::buildSerializedNetwork: Error Code 4: API Usage Error (Dimension mismatch for tensor input and profile 0. At dimension axis 2, profile has min=240, opt=240, max=240 but tensor has 192.)
[12/29/2024-20:33:39] [E] Engine could not be created from network
[12/29/2024-20:33:39] [E] Building engine failed
[12/29/2024-20:33:39] [E] Failed to create engine from model or file.
[12/29/2024-20:33:39] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v100600] [b26] # C:/Program Files/Hybrid/64bit/vs-mlrt\vsmlrt-cuda\trtexec --onnx=C:/Users/Miki/Pictures/KONVERZIJA/modified_GPv2-4x-fp16-00-00-00_BCHW100%.onnx --timingCacheFile=C:/Users/Miki/AppData/Local/Temp\b62e5f38.engine.cache --device=0 --saveEngine=C:/Users/Miki/AppData/Local/Temp\b62e5f38.engine --memPoolSize=workspace:1073741824 --shapes=input:1x3x240x320 --fp16 --verbose --tacticSources=-CUBLAS,-CUBLAS_LT,-CUDNN,+EDGE_MASK_CONVOLUTIONS,+JIT_CONVOLUTIONS --useCudaGraph --noDataTransfers --noTF32 --inputIOFormats=fp16:chw --outputIOFormats=fp32:chw --builderOptimizationLevel=3
```
