README.md: 2 additions, 2 deletions
```diff
@@ -4,7 +4,7 @@
 
 Parses ONNX models for execution with [TensorRT](https://developer.nvidia.com/tensorrt).
 
-See also the [TensorRT documentation](https://docs.nvidia.com/deeplearning/sdk/#inference).
+See also the [TensorRT documentation](https://docs.nvidia.com/deeplearning/tensorrt/api/index.html).
 
 For the list of recent changes, see the [changelog](docs/Changelog.md).
 
```
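As context for the tagline in the hunk above (not part of this diff): the parsing step it describes is exposed through TensorRT's `OnnxParser`. Below is a minimal, illustrative Python sketch, assuming the `tensorrt` bindings are installed and using the placeholder file name `model.onnx`.

```python
import tensorrt as trt

# Create a logger, a builder, and an explicit-batch network definition.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(flags)

# Hand the network to the ONNX parser and parse a serialized model.
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    ok = parser.parse(f.read())

# On failure, the parser records errors that can be inspected one by one.
if not ok:
    for i in range(parser.num_errors):
        print(parser.get_error(i))
```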
```diff
@@ -74,7 +74,7 @@ All experimental operators will be considered unsupported by the ONNX-TRT's `sup
 
 There are currently two officially supported tools for users to quickly check if an ONNX model can parse and build into a TensorRT engine from an ONNX file.
 
-For C++ users, there is the [trtexec](https://github.com/NVIDIA/TensorRT/tree/main/samples/opensource/trtexec) binary that is typically found in the `<tensorrt_root_dir>/bin` directory. The basic command of running an ONNX model is:
+For C++ users, there is the [trtexec](https://github.com/NVIDIA/TensorRT/tree/release/8.6/samples/trtexec) binary, which can be built by following the README at that link. The basic command for running an ONNX model is:
```
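The command itself falls just outside this hunk. For reference, a typical minimal `trtexec` invocation, assuming the binary is on your `PATH` and `model.onnx` is a placeholder model file:

```sh
# Parse model.onnx and build a TensorRT engine with default settings.
# "model.onnx" is a placeholder; point it at a real model file.
trtexec --onnx=model.onnx
```

Because trtexec reports whether both parsing and engine building succeed, this single command serves as the quick compatibility check the paragraph above describes.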