Commit 9808550 (1 parent: 6fbd679): cherry pick jetpack6.1 build into release/2.5 branch (#3285)
Showing 8 changed files with 216 additions and 3 deletions.
.. _Torch_TensorRT_in_JetPack_6.1:

Overview
##################

JetPack 6.1
---------------------
NVIDIA JetPack 6.1 is the latest production release of JetPack 6. This release incorporates:

* CUDA 12.6
* TensorRT 10.3
* cuDNN 9.3
* DLFW 24.09

You can find more details on JetPack 6.1 here:

* https://docs.nvidia.com/jetson/jetpack/release-notes/index.html
* https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html

Prerequisites
~~~~~~~~~~~~~~

Ensure your Jetson developer kit has been flashed with the latest JetPack 6.1. You can find more details on how to flash a Jetson board via SDK Manager:

* https://developer.nvidia.com/sdk-manager


Check the current JetPack version using:

.. code-block:: sh

   apt show nvidia-jetpack

Ensure you have installed the JetPack Dev components. This step is required if you need to build on the Jetson board.

You can install only the dev components that you require: for example, tensorrt-dev is the meta-package for all TensorRT development, or you can simply install everything.

.. code-block:: sh

   # install all the nvidia-jetpack dev components
   sudo apt-get update
   sudo apt-get install nvidia-jetpack

Ensure you have CUDA 12.6 installed (this should be installed automatically from nvidia-jetpack):

.. code-block:: sh

   # check the cuda version
   nvcc --version
   # if not installed or the version is not 12.6, install via:
   sudo apt-get update
   sudo apt-get install cuda-toolkit-12-6

Ensure libcusparseLt.so exists at /usr/local/cuda/lib64/:

.. code-block:: sh

   # if it does not exist, download and copy it to the directory
   wget https://developer.download.nvidia.com/compute/cusparselt/redist/libcusparse_lt/linux-sbsa/libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
   tar xf libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
   sudo cp -a libcusparse_lt-linux-sbsa-0.5.2.1-archive/include/* /usr/local/cuda/include/
   sudo cp -a libcusparse_lt-linux-sbsa-0.5.2.1-archive/lib/* /usr/local/cuda/lib64/
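Before downloading, it can be worth confirming whether the library is already present. A minimal sketch (the path assumes the default CUDA install location used above):

.. code-block:: sh

   # sanity check: confirm libcusparseLt.so is present before downloading
   if [ -f /usr/local/cuda/lib64/libcusparseLt.so ]; then
       echo "libcusparseLt.so found, the download steps can be skipped"
   else
       echo "libcusparseLt.so missing, follow the download steps above"
   fi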
Build torch_tensorrt
~~~~~~~~~~~~~~~~~~~~

Install bazel:

.. code-block:: sh

   wget -v https://github.com/bazelbuild/bazelisk/releases/download/v1.20.0/bazelisk-linux-arm64
   sudo mv bazelisk-linux-arm64 /usr/bin/bazel
   chmod +x /usr/bin/bazel
Install pip and the required Python packages:

* https://pip.pypa.io/en/stable/installation/

.. code-block:: sh

   # install pip
   wget https://bootstrap.pypa.io/get-pip.py
   python get-pip.py

.. code-block:: sh

   # install pytorch from the nvidia jetson distribution: https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch
   python -m pip install torch https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch/torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl

.. code-block:: sh

   # install the required python packages
   python -m pip install -r toolchains/jp_workspaces/requirements.txt
   # if you want to run the test cases, also install the test packages
   python -m pip install -r toolchains/jp_workspaces/test_requirements.txt
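Note that the torch wheel above carries the ``cp310`` tag, i.e. it targets CPython 3.10. As an illustration (the ``awk`` split is a sketch, not part of the official instructions), the Python tag can be read straight from the wheel filename:

.. code-block:: sh

   # extract the python tag (e.g. cp310) from the wheel filename
   wheel=torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl
   pytag=$(echo "$wheel" | awk -F'-' '{print $(NF-2)}')
   echo "$pytag"   # cp310

Your ``python`` interpreter must match this tag for the install to succeed.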
Build and install the torch_tensorrt wheel file.

The torch_tensorrt version depends on the torch version, and the torch version supported by JetPack 6.1 comes from DLFW 24.08/24.09 (torch 2.5.0). Please make sure to build the torch_tensorrt wheel file from the source release/2.5 branch.
(TODO: lanl to update the branch name once release/ngc branch is available)

.. code-block:: sh

   cuda_version=$(nvcc --version | grep Cuda | grep release | cut -d ',' -f 2 | sed -e 's/ release //g')
   export TORCH_INSTALL_PATH=$(python -c "import torch, os; print(os.path.dirname(torch.__file__))")
   export SITE_PACKAGE_PATH=${TORCH_INSTALL_PATH::-6}
   export CUDA_HOME=/usr/local/cuda-${cuda_version}/
   # replace the MODULE.bazel with the jetpack one
   cat toolchains/jp_workspaces/MODULE.bazel.tmpl | envsubst > MODULE.bazel
   # build and install torch_tensorrt wheel file
   python setup.py --use-cxx11-abi install --user
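The ``${TORCH_INSTALL_PATH::-6}`` expansion above is a bash substring operation that drops the last six characters (``/torch``), so ``SITE_PACKAGE_PATH`` ends up pointing at the enclosing ``site-packages`` directory. A small illustration with a hypothetical install path:

.. code-block:: sh

   # hypothetical torch install path, for illustration only
   TORCH_INSTALL_PATH=/home/user/.local/lib/python3.10/site-packages/torch
   # bash substring expansion: drop the last 6 characters ("/torch")
   echo "${TORCH_INSTALL_PATH::-6}"   # /home/user/.local/lib/python3.10/site-packages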
module(
    name = "torch_tensorrt",
    repo_name = "org_pytorch_tensorrt",
    version = "${BUILD_VERSION}"
)

bazel_dep(name = "googletest", version = "1.14.0")
bazel_dep(name = "platforms", version = "0.0.10")
bazel_dep(name = "rules_cc", version = "0.0.9")
bazel_dep(name = "rules_python", version = "0.34.0")

python = use_extension("@rules_python//python/extensions:python.bzl", "python")
python.toolchain(
    ignore_root_user_error = True,
    python_version = "3.11",
)

bazel_dep(name = "rules_pkg", version = "1.0.1")
git_override(
    module_name = "rules_pkg",
    commit = "17c57f4",
    remote = "https://github.com/narendasan/rules_pkg",
)

local_repository = use_repo_rule("@bazel_tools//tools/build_defs/repo:local.bzl", "local_repository")

# External dependency for torch_tensorrt if you already have precompiled binaries.
local_repository(
    name = "torch_tensorrt",
    path = "${SITE_PACKAGE_PATH}/torch_tensorrt",
)

new_local_repository = use_repo_rule("@bazel_tools//tools/build_defs/repo:local.bzl", "new_local_repository")

# CUDA should be installed on the system locally
new_local_repository(
    name = "cuda",
    build_file = "@//third_party/cuda:BUILD",
    path = "${CUDA_HOME}",
)

new_local_repository(
    name = "libtorch",
    path = "${TORCH_INSTALL_PATH}",
    build_file = "third_party/libtorch/BUILD",
)

new_local_repository(
    name = "libtorch_pre_cxx11_abi",
    path = "${TORCH_INSTALL_PATH}",
    build_file = "third_party/libtorch/BUILD"
)

new_local_repository(
    name = "tensorrt",
    path = "/usr/",
    build_file = "@//third_party/tensorrt/local:BUILD"
)
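The ``${...}`` placeholders in this template are filled in by the ``envsubst`` step shown earlier, which replaces each reference with the value of the exported environment variable of the same name. A minimal sketch (hypothetical value, assuming GNU gettext's ``envsubst`` is installed):

.. code-block:: sh

   # envsubst replaces ${VAR} references with exported environment values
   export CUDA_HOME=/usr/local/cuda-12.6/
   echo 'path = "${CUDA_HOME}"' | envsubst
   # prints: path = "/usr/local/cuda-12.6/"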
setuptools==70.2.0
numpy<2.0.0
packaging
pyyaml
expecttest==0.1.6
networkx==2.8.8
numpy<2.0.0
parameterized>=0.2.0
pytest>=8.2.1
pytest-xdist>=3.6.1
pyyaml
transformers
# TODO: currently timm, torchvision, and nvidia-modelopt do not have distributions for jetson