
The goal is to utilize the poseclassificationnet that I trained in deepstream. #95

sangheonEN opened this issue May 20, 2024

After training the pose classification model with the TAO Toolkit PyTorch backend (https://github.com/NVIDIA/tao_pytorch_backend/tree/main/nvidia_tao_pytorch/cv/pose_classification), I exported an ONNX file for use in DeepStream.

To run the pose DeepStream app, I built it with make under https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/apps/tao_others/deepstream-pose-classification and launched it with the config file configs/app/deepstream_pose_classification_config.yaml, as shown below.
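A rough sketch of the exact commands (the CUDA_VER value is an assumption; the repo's Makefile expects it to match the locally installed CUDA version):

cd /home/ubuntu/deepstream/deepstream-6.4/sources/apps/deepstream_tao_apps/apps/tao_others/deepstream-pose-classification
export CUDA_VER=12.2   # assumption: set this to the CUDA version on your machine
make
./deepstream-pose-classification-app ../../../configs/app/deepstream_pose_classification_config.yaml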

After that, I changed the two model paths below in the config_infer_third_bodypose_classification.txt file that deepstream_pose_classification_config.yaml points to.

Before the change (the pretrained model works fine):
#model-engine-file=../../../models/poseclassificationnet/st-gcn_3dbp_nvidia.etlt_b2_gpu0_fp16.engine
#tlt-encoded-model=../../../models/poseclassificationnet/st-gcn_3dbp_nvidia.etlt

After the change (the model I trained does not work):
model-engine-file=../../../models/poseclassificationnet/pc_model.onnx
tlt-encoded-model=../../../models/poseclassificationnet/pc_model.tlt
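For reference, nvinfer normally consumes an ONNX model through the onnx-file key, with model-engine-file pointing at the serialized TensorRT engine that nvinfer generates on first run, not at the .onnx itself (tlt-encoded-model is only for .etlt models). A minimal sketch of what the [property] entries might look like instead; the engine file name follows nvinfer's usual naming pattern but is an assumption until the engine has actually been built:

onnx-file=../../../models/poseclassificationnet/pc_model.onnx
# generated by nvinfer on the first run; the exact name below is an assumption
model-engine-file=../../../models/poseclassificationnet/pc_model.onnx_b2_gpu0_fp16.engine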

Error output:

root@cfb9dc0c32a9:/home/ubuntu/deepstream/deepstream-6.4/sources/apps/deepstream_tao_apps/apps/tao_others/deepstream-pose-classification# ./deepstream-pose-classification-app ../../../configs/app/deepstream_pose_classification_config.yaml
width 1920 height 1080
video rtsp://user1:[email protected]:554/profile2/media.smp
video rtsp://user1:[email protected]:554/profile2/media.smp
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
config_file_path:/home/ubuntu/deepstream/deepstream-6.4/sources/apps/deepstream_tao_apps/configs/nvinfer/bodypose_classification_tao/config_preprocess_bodypose_classification.txt
Unknown or legacy key specified 'is-classifier' for group [property]

*** nv-rtspsink: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

Now playing!
ERROR: [TRT]: 1: [runtime.cpp::parsePlan::314] Error Code 1: Serialization (Serialization assertion plan->header.magicTag == rt::kPLAN_MAGIC_TAG failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1533 Deserialize engine failed from file: /home/ubuntu/deepstream/deepstream-6.4/sources/apps/deepstream_tao_apps/models/poseclassificationnet/pc_model.onnx
0:00:07.399315288 437034 0x55647c102f50 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 4]: deserialize engine from file :/home/ubuntu/deepstream/deepstream-6.4/sources/apps/deepstream_tao_apps/models/poseclassificationnet/pc_model.onnx failed
0:00:07.643171261 437034 0x55647c102f50 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::generateBackendContext() [UID = 4]: deserialize backend context from engine from file :/home/ubuntu/deepstream/deepstream-6.4/sources/apps/deepstream_tao_apps/models/poseclassificationnet/pc_model.onnx failed, try rebuild
0:00:07.643193788 437034 0x55647c102f50 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() [UID = 4]: Trying to create engine from model files
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:179 Uff input blob name is empty
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:728 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:794 Failed to get cuda engine from custom library API
0:00:14.298495681 437034 0x55647c102f50 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 4]: build engine file failed
free(): double free detected in tcache 2
Aborted (core dumped)

  1. The TensorRT version on the PC running the DeepStream app and the TensorRT version in the TAO Toolkit PyTorch backend environment are the same. When converting to ONNX in the TAO Toolkit PyTorch backend, the ONNX file is created with the export.py code.

(screenshot attached)
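The magicTag serialization error above suggests nvinfer tried to deserialize pc_model.onnx as if it were already a serialized TensorRT engine, which is what model-engine-file is expected to point at. One way to separate the two problems is to build an engine from the ONNX offline with trtexec and hand that engine to model-engine-file. A sketch, assuming the ST-GCN input layout (N, C, T, V, M) = (N, 3, 300, 34, 1) and an input tensor named "input"; both are assumptions to verify against the exported model, e.g. with Netron:

# build an FP16 engine for batch size up to 2; input name and shapes are assumptions
# drop the shape flags if the exported model has a static batch dimension
/usr/src/tensorrt/bin/trtexec \
    --onnx=pc_model.onnx \
    --saveEngine=pc_model.onnx_b2_gpu0_fp16.engine \
    --fp16 \
    --minShapes=input:1x3x300x34x1 \
    --optShapes=input:2x3x300x34x1 \
    --maxShapes=input:2x3x300x34x1

If trtexec builds and deserializes the engine cleanly, the ONNX file and the local TensorRT version are compatible, and the remaining problem is in the nvinfer config keys.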
