When I run python onnx2trt.py, an error occurs #1
Comments
Hi @xuhui1994, I have noted that the inference code using TRT currently returns wrong output. I don't have time to debug it right away, but will do so soon. Thanks.
Yes, what you say above is right. How can I get correct output when running infer_trt.py? Do you have a solution?
Yes, I have worked on it a lot; the problem is in the inference step on the GPU (loading the TRT engine, host<->device data movement, ...). I know that because when I put the same TRT engine converted from this repo into Triton Server's Model Repository, it works well.
It would be nice if you could have a look at the TensorRT inference examples and debug the code. If you have installed TensorRT, the examples are under /usr/src/tensorrt/samples/. If the code runs well, you can post a PR so that others can find it useful later. Thanks @xuhui1994
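For anyone debugging this, here is a minimal sketch of the host<->device data flow in a typical TensorRT Python inference loop (deserialize engine, allocate buffers, copy in, execute, copy out), modeled on the official samples. The engine filename, binding layout, and shapes are placeholders, not the repo's actual code, and it assumes static binding shapes and a TRT 7/8-era API:

```python
# Minimal TensorRT inference sketch with explicit host<->device copies.
# Assumes a serialized engine 'craft.trt' with binding 0 = input,
# binding 1 = output, both with static shapes (illustrative only).
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("craft.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Allocate pagelocked host buffers and device buffers for every binding.
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    shape = tuple(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(shape), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    host_bufs.append(host)
    dev_bufs.append(dev)

stream = cuda.Stream()

# Fill the input host buffer (dummy data here; use a real image in practice).
np.copyto(host_bufs[0], np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype))

# host -> device, execute, device -> host; synchronize before reading output.
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
cuda.memcpy_dtoh_async(host_bufs[1], dev_bufs[1], stream)
stream.synchronize()

output = host_bufs[1].reshape(tuple(engine.get_binding_shape(1)))
```

A subtle bug in any of these steps (wrong binding index, missing synchronize, mismatched dtype) silently produces wrong output rather than an error, which matches the symptom described above.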
@xuhui1994 TensorRT also released a new library called Torch-TensorRT that flexibly builds and loads engines right in Python code; you can check this out. I'm going to update the repo with this method soon... https://developer.nvidia.com/blog/accelerating-inference-up-to-6x-faster-in-pytorch-with-torch-tensorrt/
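For reference, a minimal sketch of that Torch-TensorRT path: compile a PyTorch model straight to a TensorRT-backed module in Python and call it like a normal model. The checkpoint name and input shape are assumptions for illustration, not values from this repo:

```python
# Sketch of Torch-TensorRT usage, assuming a hypothetical CRAFT checkpoint
# 'craft_model.pth' and an illustrative 1x3x768x768 input shape.
import torch
import torch_tensorrt

model = torch.load("craft_model.pth").eval().cuda()

trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 768, 768), dtype=torch.float32)],
    enabled_precisions={torch.float32},  # add torch.half to enable FP16
)

x = torch.randn(1, 3, 768, 768, device="cuda")
y = trt_model(x)  # inference runs through the embedded TensorRT engine
```

This avoids hand-written engine deserialization and host<->device buffer management entirely, which is why it sidesteps the class of bugs discussed above.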
OK, I will try my best to make it work. Thank you for the reply.