################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
Prerequisites:
- DeepStreamSDK 6.0
- NVIDIA Triton Inference Server
- Python 3.6
- Gst-python
- NumPy
To set up Triton Inference Server:
For x86_64 and Jetson Docker:
1. Use the provided docker container and follow directions for
Triton Inference Server in the SDK README --
be sure to prepare the detector models.
2. Run the Docker container with this Python bindings directory mapped into it
3. Install required Python packages inside the container:
$ apt update
$ apt install python3-gi python3-dev python3-gst-1.0 python3-numpy -y
For Jetson without Docker:
1. Install NumPy:
$ apt update
$ apt install python3-numpy
2. Follow instructions in the DeepStream SDK README to set up
Triton Inference Server:
2.1 Compile and install the nvdsinfer_customparser
2.2 Prepare at least the Triton detector models
3. Add to LD_PRELOAD (see the example after this list):
/usr/lib/aarch64-linux-gnu/libgomp.so.1
This works around the following TLS usage limitation:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91938
4. Clear the GStreamer cache if pipeline creation fails:
rm ~/.cache/gstreamer-1.0/*
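For example, the libgomp preload from step 3 can be set in the shell used to
launch the app (the exact launch command may differ on your setup):
$ export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1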
To run the test app:
$ python3 deepstream_ssd_parser.py <h264_elementary_stream>
This document describes the sample deepstream-ssd-parser application.
It demonstrates how to write a custom neural network output parser and
use it in the pipeline to extract meaningful insights from a video stream.
This example:
- Uses SSD neural network running on Triton Inference Server
- Selects custom post-processing in the Triton Inference Server config file
- Parses the inference output into bounding boxes
- Performs post-processing on the generated boxes with NMS (Non-maximum Suppression)
- Adds detected objects into the pipeline metadata for downstream processing (see the sketch after this list)
- Encodes the OSD output and saves it to an MP4 file. Note that there is no visual output on screen.
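The last three steps boil down to a GStreamer pad probe that reads the raw
tensor output attached by nvinferserver, converts it into boxes, and writes
NvDsObjectMeta back into the frame metadata. The snippet below is a minimal
sketch of the metadata-insertion step only; the probe placement, the "box"
dictionary, and the LABELS table are illustrative assumptions and not
verbatim code from deepstream_ssd_parser.py.

import pyds

UNTRACKED_OBJECT_ID = 0xffffffffffffffff  # assumption: ID used before a tracker assigns one
LABELS = ["unlabeled", "person", "car"]   # assumption: placeholder label table

def add_box_to_frame(batch_meta, frame_meta, box):
    # Acquire an object meta slot from the batch pool and fill it in so
    # downstream elements (tracker, nvdsosd, sinks) can consume it.
    obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)

    # Bounding box in frame coordinates ("box" is an assumed container
    # produced by the custom parser + NMS step).
    rect = obj_meta.rect_params
    rect.left = int(box["left"])
    rect.top = int(box["top"])
    rect.width = int(box["width"])
    rect.height = int(box["height"])
    rect.border_width = 3
    rect.border_color.set(1.0, 0.0, 0.0, 1.0)  # red border for the OSD

    # Classification result from the parsed tensor output.
    obj_meta.confidence = box["score"]
    obj_meta.class_id = box["class_id"]
    obj_meta.object_id = UNTRACKED_OBJECT_ID
    obj_meta.obj_label = LABELS[box["class_id"]]

    # Attach the object to the frame so it flows downstream.
    pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)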
Known Issues:
1. On Jetson, if libgomp is not preloaded, this error may occur:
(python3:21041): GStreamer-WARNING **: 14:35:44.113: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstlibav.so': /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
Unable to create Encoder
2. On Jetson Nano, the ssd_inception_v2 model is not expected to run with a GPU instance.
Switch to a CPU instance when running on Nano by updating the config.pbtxt files
in samples/trtis_model_repo:
# Switch to CPU instance for Nano since memory might not be enough for
# certain Models.
# Specify CPU instance.
instance_group {
  count: 1
  kind: KIND_CPU
}
# Specify GPU instance.
#instance_group {
#  kind: KIND_GPU
#  count: 1
#  gpus: 0
#}