NVIDIA Deepstream 6.1 Python boilerplate

Boilerplate for building NVIDIA Deepstream 6.1 pipelines in Python. 📖 Related blog post

Repository structure

This repository contains one base Pipeline class and a number of custom pipeline subclasses (deepstream/app/pipelines/) that perform various video analytics tasks. A minimal subclassing sketch follows the directory tree below.

The files are structured as follows:

  • app: contains the base pipeline implementation as well as custom pipeline subclasses for various use cases.
  • configs: contains the Deepstream configuration files
  • data: contains data such as model weights and videos
deepstream
├── app
│   ├── pipelines
│   └── utils
├── configs
│   ├── pgies
│   └── trackers
└── data
    ├── pgies
    │   ├── yolov4
    │   └── yolov4-tiny
    └── videos
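
A custom pipeline is created by subclassing the base Pipeline class in app. The sketch below is illustrative only: the import path, class name, and overridden hook are assumptions, not the repository's exact API.

# Illustrative sketch: import path, class name, and hook are
# hypothetical; see deepstream/app/ for the real base class.
from app.pipeline import Pipeline

class MyPipeline(Pipeline):
    # Extend base-class behaviour here, e.g. attach extra pad
    # probes to read batch metadata or swap the inference config.
    ...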

Development setup

This project is based on the Deepstream 6.1 SDK and was tested on an Ubuntu 20.04 VM with an NVIDIA T4 or P100 GPU (8 vCPUs, 30 GB RAM). Minor changes might be required for Jetson devices.

Prerequisites

Install NVIDIA driver version 510.47.03:

sudo apt update
sudo apt install gcc make
curl -O https://us.download.nvidia.com/XFree86/Linux-x86_64/510.47.03/NVIDIA-Linux-x86_64-510.47.03.run
chmod 755 NVIDIA-Linux-x86_64-510.47.03.run
sudo ./NVIDIA-Linux-x86_64-510.47.03.run

Verify the installation with:

nvidia-smi

Setup Docker and the NVIDIA Container Toolkit following the NVIDIA container toolkit install guide.
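
To verify that containers can access the GPU, run a CUDA base image (the image tag below is just an example; any recent CUDA base image works):

sudo docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi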

Models (optional)

The instructions below describe how to obtain the model files. These steps are optional, as the model files are already included in the repository via Git LFS.

YOLOv4 (Object detection)

YOLOv4 is now part of the TAO (Train, Adapt and Optimize) toolkit and can be used in Deepstream directly with the .etlt file.
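
In practice this means the nvinfer (pgie) configuration points at the encoded model and its decryption key. The snippet below is a hedged sketch with a placeholder path and key; the actual values live in deepstream/configs/pgies/:

[property]
tlt-encoded-model=../../data/pgies/yolov4/yolov4.etlt
tlt-model-key=nvidia_tlt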

OSNet (Re-identification)

Download the desired .pth model weights from the deep-person-reid model zoo. Convert the PyTorch model to ONNX using deepstream/scripts/pytorch_to_onnx.py.
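
The repository script handles this conversion; a rough, self-contained equivalent is sketched below. The num_classes value is a dummy (only the feature extractor matters for re-ID export), and the weight/output filenames are examples:

import torch
import torchreid

# Build the OSNet architecture and load the downloaded weights.
model = torchreid.models.build_model(name='osnet_x0_25', num_classes=1000)
torchreid.utils.load_pretrained_weights(model, 'osnet_x0_25_msmt17.pth')
model.eval()

# Export with a dynamic batch dimension; 3x256x128 is the standard
# person re-ID input resolution. The tensor name 'input' matches
# the trtexec shape flags used below.
dummy = torch.randn(1, 3, 256, 128)
torch.onnx.export(
    model, dummy, 'osnet_x0_25_msmt17.onnx',
    input_names=['input'], output_names=['output'],
    dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}},
    opset_version=11,
)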

Convert the OSNet ONNX file to TensorRT (TRT) on your target GPU:

Run the tensorrt container with access to the target GPU. Be sure to select the tensorrt container version that matches your desired TRT version (see the release notes).

docker run --gpus all -it -v ~/deepstream-python/deepstream/data/sgies/osnet:/trt nvcr.io/nvidia/tensorrt:20.12-py3

Inside the container:

bash /opt/tensorrt/install_opensource.sh -b 21.06
cd /workspace/tensorrt/bin
./trtexec --onnx=/trt/osnet_x0_25_msmt17.onnx --shapes=input:8x3x256x128 --minShapes=input:1x3x256x128 --maxShapes=input:32x3x256x128 --optShapes=input:8x3x256x128 --saveEngine=/trt/osnet_x0_25_msmt17.trt
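
Optionally, sanity-check the generated engine from the same container before using it in Deepstream:

./trtexec --loadEngine=/trt/osnet_x0_25_msmt17.trt --shapes=input:8x3x256x128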

Get started

Clone the repository to a local directory, e.g. ~/deepstream-python:

git clone https://github.com/ml6team/deepstream-python.git

Download the data folder from Google Drive and add it to the root of the repository. For example using gdown (inside the deepstream/ directory):

pip3 install gdown
gdown 'https://drive.google.com/u/0/uc?id=1xu22FTWw0Cx4bOgrKUVtZU1VMHt3Oml8&export=download'
unzip data.zip

Build the container image by running the following command inside the deepstream/ directory (where the Dockerfile is located):

docker build -t deepstream .

Next, run the container with:

docker run -it --gpus all -v ~/deepstream-python/output:/app/output deepstream python3 run.py <URI>

Where <URI> is a file path (file://...) or an RTSP URL (rtsp://...) to a video stream.

For example:

docker run -it --gpus all -v ~/deepstream-python/output:/app/output deepstream python3 run.py 'file:///app/data/videos/sample_720p.h264'

The -v option mounts the output directory to the container. After the pipeline is run, deepstream-python/output will contain the results. For the base pipeline, this is a video (out.mp4) with bounding boxes drawn.
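
An RTSP source works the same way; the credentials and address below are placeholders:

docker run -it --gpus all -v ~/deepstream-python/output:/app/output deepstream python3 run.py 'rtsp://user:password@camera-ip:554/stream'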

Debugging

The Deepstream Docker container already contains gdb. You can use it as follows inside the container:

gdb -ex r --args python3 run.py <URI>
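
When the process crashes, gdb drops to its prompt, and a backtrace of the native frames can be printed with:

(gdb) bt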

References

Caveats

Contact