This is a sample of the onnxparser working with a TensorRT user-defined plugin for TRT 7.1. It implements the grid sample op from torch, introduced in this paper.
This complementary sample serves several purposes. It demonstrates:
- How to export a custom op from torch via onnx
- How to parse an onnx model that contains a TensorRT custom op into TensorRT using the onnxparser
- How to enable dynamic shapes during this process
- How to write a plugin supporting explicit batch and dynamic shapes for this process
- How to write the python script that drives all of the above
This project depends on:
- TensorRT 7.1
- CUDA 10+
- TensorRT OSS project
(If the instructions below do not get the sample running successfully, please check out commit fc251f67cb56241daf30e7b7fb4b7be02c8d07e8.)
Follow the instructions of the TensorRT OSS project to prepare all environment requirements. Make sure you can build the TensorRT OSS project and run its samples.
In order to build this project successfully, remember to check out release 7.1 or commit 30bb96724c90ba5d88cfcf6809f4cfcad86c32af of TensorRT OSS and of TensorRT/parsers/onnx.
You can choose to prepare the environment based on the NGC docker image tensorrt 20.08.
Once you are able to run the built samples, you can patch the files here into the TensorRT OSS project:
- copy the TensorRT directory of this repo over the TensorRT OSS source tree
- copy symbolic_opset10.py to /path/to/python/dist-packages/torch/onnx/; within the NGC container, for example, that is /usr/local/lib/python3.6/dist-packages/torch/onnx/
After the patching, cd into TensorRT/parsers/onnx and follow its instructions to build the onnx parser.
Once the onnx parser is built, remember to run `make install` and `python setup.py build && python setup.py install` so it takes effect. (If it is not working, check whether /usr/local/lib is in your library path.)
After that, rebuild TensorRT OSS project.
Done. Run `LD_PRELOAD=/PATH/TO/YOUR/BUILT/libnvinfer_plugin.so python test_plugin_result.py` to see if it is working.
Currently, to get the torch-onnx-tensorrt flow working with a custom op, you have to:
- Let torch/onnx recognize and export the custom op that is not in the standard opset; here we choose opset10, so we need to hack into symbolic_opset10.py. This is now done within the test_plugin_result.py script by manually registering the op into opset10 and setting enable_onnx_checker=False during onnx export (see the first sketch after this list).
- Let the onnxparser understand how to translate the custom op in the onnx file into TensorRT layers, including plugins, which used to mean hacking into builtin_op_importers.cpp. Instead of hacking into the onnxparser's source code, from TensorRT 7.1 we can use onnx-graphsurgeon to modify the onnx model and replace the unknown GridSampler node with a TRT_PluginV2 node filled with the corresponding buffer; refer to the function modify_onnx() and the second sketch after this list.
- Let TensorRT know there is a new custom op named GridSampler. Here we implement it as a plugin. Additionally, since the onnxparser only works in full-dimension mode, we have to implement the TRT plugin using IPluginV2DynamicExt, introduced in TRT 6.
- Dynamic shapes can be used if you export the onnx model using dynamic_axes, as illustrated in test_plugin_result.py. This is processed in the onnx-tensorrt module, which then calls addInput with -1 as the dummy input dim. For how to build the engine and context for dynamic shapes, please refer to test_plugin_result.py and the third sketch after this list.
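As a rough illustration of the first point, the sketch below registers a grid_sampler symbolic into opset10 and exports with dynamic axes while the ONNX checker is disabled. The registration mechanism, the emitted attribute names, and the tensor shapes are assumptions for illustration; test_plugin_result.py contains the version the sample actually uses.

```python
# Sketch only: exact attribute names and the registration mechanism are assumptions.
import torch
import torch.nn.functional as F
import torch.onnx.symbolic_helper as sym_help
import torch.onnx.symbolic_opset10 as sym_opset10

def grid_sampler(g, input, grid, interpolation_mode, padding_mode, align_corners):
    # Pull the constant integer arguments out of the traced graph.
    interpolation_mode = sym_help._maybe_get_const(interpolation_mode, "i")
    padding_mode = sym_help._maybe_get_const(padding_mode, "i")
    align_corners = sym_help._maybe_get_const(align_corners, "i")
    # Emit a non-standard node; the attribute names must match what the plugin expects.
    return g.op("GridSampler", input, grid,
                interpolation_mode_i=interpolation_mode,
                padding_mode_i=padding_mode,
                align_corners_i=align_corners)

# Register the symbolic into opset10 (same effect as patching symbolic_opset10.py).
sym_opset10.grid_sampler = grid_sampler

class Net(torch.nn.Module):
    def forward(self, x, grid):
        return F.grid_sample(x, grid, mode="bilinear", padding_mode="zeros",
                             align_corners=False)

x, grid = torch.randn(1, 3, 32, 32), torch.randn(1, 32, 32, 2)
torch.onnx.export(
    Net(), (x, grid), "grid_sample.onnx",
    opset_version=10,
    enable_onnx_checker=False,  # the custom node would fail the standard checker
    input_names=["input", "grid"], output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "grid": {0: "batch"}, "output": {0: "batch"}})
```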
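For the second point, the graph surgery can be pictured roughly as below. It assumes the TRT_PluginV2 attribute convention of the onnx parser (name/version/namespace/data) and uses a made-up plugin field name; the authoritative logic is modify_onnx() in test_plugin_result.py.

```python
# Sketch only: plugin field names and the payload construction are assumptions.
import numpy as np
import onnx
import onnx_graphsurgeon as gs
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")  # the custom libnvinfer_plugin.so must be loaded

# Create and serialize a plugin instance so its configuration can be embedded in the node.
creator = trt.get_plugin_registry().get_plugin_creator("GridSampler", "1", "")
fields = trt.PluginFieldCollection([
    trt.PluginField("interpolation_mode", np.array([0], dtype=np.int32),
                    trt.PluginFieldType.INT32),
])
serialized = creator.create_plugin("GridSampler", fields).serialize()

graph = gs.import_onnx(onnx.load("grid_sample.onnx"))
for node in graph.nodes:
    if node.op == "GridSampler":
        node.op = "TRT_PluginV2"
        node.attrs = {"name": "GridSampler", "version": "1",
                      "namespace": "", "data": bytes(serialized)}
onnx.save(gs.export_onnx(graph), "grid_sample_gs.onnx")
```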
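And for the dynamic-shape part, building the engine and context with the TensorRT Python API might look like the following minimal sketch; tensor names and the min/opt/max ranges are placeholders, so refer to test_plugin_result.py for the actual build flow.

```python
# Sketch only: input names and shape ranges are placeholders.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("grid_sample_gs.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30
# One optimization profile covering the dynamic batch dimension.
profile = builder.create_optimization_profile()
profile.set_shape("input", (1, 3, 32, 32), (4, 3, 32, 32), (8, 3, 32, 32))
profile.set_shape("grid", (1, 32, 32, 2), (4, 32, 32, 2), (8, 32, 32, 2))
config.add_optimization_profile(profile)

engine = builder.build_engine(network, config)
context = engine.create_execution_context()
# Concrete input shapes must be set on the context before execution.
context.set_binding_shape(0, (4, 3, 32, 32))
context.set_binding_shape(1, (4, 32, 32, 2))
```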
- This is not officially supported and is therefore an experimental sample.
- Plugin namespace is not working.
- FP16 support
- dynamic shape support in c++ sample
- dynamic shape support in python sample working with onnxparser
- FP16 support in python sample
- More tests to assure correctness
- 3D support in grid sample