From a849370fe1db138458ab352d2d8ee6977eb344fe Mon Sep 17 00:00:00 2001
From: abdulazizab2
Date: Sat, 15 Apr 2023 20:55:15 +0300
Subject: [PATCH] fix links to unavailable docs and pages

1. The TensorRT documentation link pointed to an unavailable page
2. The trtexec link pointed to an unavailable page

Signed-off-by: abdulazizab2
---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 952b789e..e32d87cb 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@
 
 Parses ONNX models for execution with [TensorRT](https://developer.nvidia.com/tensorrt).
 
-See also the [TensorRT documentation](https://docs.nvidia.com/deeplearning/sdk/#inference).
+See also the [TensorRT documentation](https://docs.nvidia.com/deeplearning/tensorrt/api/index.html).
 
 For the list of recent changes, see the [changelog](docs/Changelog.md).
 
@@ -74,7 +74,7 @@ All experimental operators will be considered unsupported by the ONNX-TRT's `sup
 
 There are currently two officially supported tools for users to quickly check if an ONNX model can parse and build into a TensorRT engine from an ONNX file.
 
-For C++ users, there is the [trtexec](https://github.com/NVIDIA/TensorRT/tree/main/samples/opensource/trtexec) binary that is typically found in the `/bin` directory. The basic command of running an ONNX model is:
+For C++ users, there is the [trtexec](https://github.com/NVIDIA/TensorRT/tree/release/8.6/samples/trtexec) binary that can be compiled from the README in the link. The basic command of running an ONNX model is:
 
 `trtexec --onnx=model.onnx`