ONNX working in Chainner but not in vsmlrt? #119
I’m reopening this issue. Is it somehow possible to use the vsmlrt addon with a BHWC ONNX model instead of the standard BCHW? How can I enable the flexible output option (if that would help) when creating the TRT model? I’m using Hybrid. Thank you very much in advance.
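For illustration, here is a minimal sketch of one possible workaround using the stock `onnx` Python API: grafting Transpose nodes onto a BHWC model so that it presents a BCHW interface to vsmlrt. The file names, tensor names, and the fixed 240x320 shape are assumptions for the example, not something confirmed in this thread.

```python
import onnx
from onnx import TensorProto, helper

# Hypothetical file name; H and W must match the BHWC model's fixed spatial dims.
model = onnx.load("model_bhwc.onnx")
graph = model.graph
H, W = 240, 320

old_in = graph.input[0]
old_out = graph.output[0]

# New BCHW endpoints that vsmlrt will see (1x model, so output dims match input).
new_in = helper.make_tensor_value_info("input_bchw", TensorProto.FLOAT, [1, 3, H, W])
new_out = helper.make_tensor_value_info("output_bchw", TensorProto.FLOAT, [1, 3, H, W])

# BCHW -> BHWC in front of the original graph input,
# BHWC -> BCHW behind the original graph output.
pre = helper.make_node("Transpose", ["input_bchw"], [old_in.name], perm=[0, 2, 3, 1])
post = helper.make_node("Transpose", [old_out.name], ["output_bchw"], perm=[0, 3, 1, 2])

graph.node.insert(0, pre)
graph.node.append(post)
del graph.input[:]
graph.input.append(new_in)
del graph.output[:]
graph.output.append(new_out)

onnx.checker.check_model(model)
onnx.save(model, "model_bchw.onnx")
```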
I changed the channel order from BHWC to BCHW and converted the input tensor from static to dynamic dimensions. However, can I get rid of this error? @WolframRhodium, any help is appreciated. Thanks a lot in advance.

```
[12/29/2024-20:33:39] [I] Set shape of input tensor input for optimization profile 0 to: MIN=1x3x240x320 OPT=1x3x240x320 MAX=1x3x240x320
[12/29/2024-20:33:39] [W] [TRT] Could not read timing cache from: C:/Users/Miki/AppData/Local/Temp\b62e5f38.engine.cache. A new timing cache will be generated and written.
```
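For reference, the static-to-dynamic change described above can be done with the stock `onnx` API. A minimal sketch, with a hypothetical model path:

```python
import onnx

model = onnx.load("1x_bchw.onnx")  # hypothetical path
dims = model.graph.input[0].type.tensor_type.shape.dim
# Replace the fixed H and W (dims 2 and 3 in BCHW) with symbolic names;
# setting dim_param automatically clears the previous fixed dim_value.
dims[2].dim_param = "height"
dims[3].dim_param = "width"
onnx.save(model, "1x_dynamic.onnx")
```

Note that this only relaxes the declared input shape; any Reshape node inside the graph with a hardcoded target shape still has to be fixed separately.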
Hello. I have a problem running a 1x model in vsmlrt: the model works correctly in Chainner, but with the vsmlrt plugin I get the following error:

```
2024-12-23 12:14:30.482
Failed to evaluate the script:
Python exception: operator (): 'ortapi->CreateSessionFromArray( d->environment, std::data(onnx_data), std::size(onnx_data), session_options, &resource.session )' failed: Node (/G/down3_4/body/body.1/Reshape) Op (Reshape) [ShapeInferenceError] Dimension could not be inferred: incompatible shapes
Traceback (most recent call last):
File "src\cython\vapoursynth.pyx", line 3365, in vapoursynth._vpy_evaluate
File "src\cython\vapoursynth.pyx", line 3366, in vapoursynth._vpy_evaluate
File "C:\Users\Miki\AppData\Local\Temp\tempPreviewVapoursynthFile12_14_29_559.vpy", line 49, in
clip = vsmlrt.inference([clip],network_path="C:/Users/Miki/Desktop/hex/1x.onnx", backend=Backend.ORT_DML(fp16=False,device_id=0,num_streams=1)) # 1280x960
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2765, in inference
return inference_with_fallback(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2739, in inference_with_fallback
raise e
File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2716, in inference_with_fallback
ret = _inference(
^^^^^^^^^^^
File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2466, in _inference
ret = core.ort.Model(
^^^^^^^^^^^^^^^
File "src\cython\vapoursynth.pyx", line 3101, in vapoursynth.Function.call
vapoursynth.Error: operator (): 'ortapi->CreateSessionFromArray( d->environment, std::data(onnx_data), std::size(onnx_data), session_options, &resource.session )' failed: Node (/G/down3_4/body/body.1/Reshape) Op (Reshape) [ShapeInferenceError] Dimension could not be inferred: incompatible shapes
```
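A minimal way to reproduce this kind of shape-inference failure outside vsmlrt, assuming only the `onnx` package and the local model path from the traceback above:

```python
import onnx
from onnx import shape_inference

# Path taken from the script above; any model with a fixed Reshape behaves the same.
model = onnx.load("C:/Users/Miki/Desktop/hex/1x.onnx")
onnx.checker.check_model(model)

# strict_mode=True makes onnx raise on the same kind of
# "incompatible shapes" error that onnxruntime reports at session creation.
inferred = shape_inference.infer_shapes(model, strict_mode=True)
print("shape inference passed")
```

One common cause, which would be consistent with the error on the Reshape node, is a hardcoded target shape that only validates at the resolution the model was exported with, so a frontend that feeds a different size (here 1280x960) fails where another succeeds.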
Additionally, I have a problem with a 2x model because it only works in DML mode and fails to convert to TensorRT.
TRTEXEC 2x conversion error:

```
2024-12-23 12:16:50.315
Failed to evaluate the script:
Python exception: trtexec execution fails, log has been written to C:\Users\Miki\AppData\Local\Temp\trtexec_241223_121645.log
Traceback (most recent call last):
File "src\cython\vapoursynth.pyx", line 3365, in vapoursynth._vpy_evaluate
File "src\cython\vapoursynth.pyx", line 3366, in vapoursynth._vpy_evaluate
File "C:\Users\Miki\AppData\Local\Temp\tempPreviewVapoursynthFile12_16_45_329.vpy", line 43, in
clip = vsmlrt.inference([clip],network_path="C:/Users/Miki/Desktop/hex/RADI/2x.onnx", backend=Backend.TRT(fp16=False,device_id=0,num_streams=1,verbose=True,use_cuda_graph=True,workspace=1073741824,builder_optimization_level=3,engine_folder="C:/Users/Miki/AppData/Local/Temp")) # 1280x960
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2765, in inference
return inference_with_fallback(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2739, in inference_with_fallback
raise e
File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2716, in inference_with_fallback
ret = _inference(
^^^^^^^^^^^
File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2582, in _inference
engine_path = trtexec(
^^^^^^^^
File "C:/Program Files/Hybrid/64bit/vs-mlrt/vsmlrt.py", line 2138, in trtexec
raise RuntimeError(f"trtexec execution fails, log has been written to {log_filename}")
RuntimeError: trtexec execution fails, log has been written to C:\Users\Miki\AppData\Local\Temp\trtexec_241223_121645.log
```
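To capture the full trtexec output outside VapourSynth, one could run trtexec directly against the ONNX file. A minimal sketch; the trtexec path is an assumption (it ships with the TensorRT package used by vs-mlrt), and the `--shapes` value assumes the input tensor is named `input`:

```python
import subprocess

# Hypothetical location of trtexec inside the vs-mlrt installation.
trtexec = r"C:\Program Files\Hybrid\64bit\vs-mlrt\vsmlrt-cuda\trtexec.exe"
cmd = [
    trtexec,
    "--onnx=C:/Users/Miki/Desktop/hex/RADI/2x.onnx",
    "--shapes=input:1x3x960x1280",  # assumes the input tensor is named "input"
    "--verbose",
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
print(result.stderr)
```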
Based on these errors, do you have any idea what the issue might be? These models are commercial, so I would prefer not to share them publicly. If you'd like, you can leave your email address, and I can send them to you. Thank you.