Joint Detection and Embedding for fast multi-object tracking

# Towards Realtime Multi-Object Tracking

This repo is a codebase of the Joint Detection and Embedding (JDE) model. JDE is a fast and high-performance multiple-object tracker that learns the object detection task and the appearance embedding task simultaneously in a shared neural network. Technical details are described in our ECCV 2020 paper. Using this repo, you can simply achieve MOTA 64%+ on the "private" protocol of the MOT-16 challenge, at a near-real-time speed of 22~38 FPS (note that this speed is for the entire system, including the detection step!).
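
At inference time, a JDE-style tracker associates current-frame detections with existing tracks by comparing their appearance embeddings. The sketch below illustrates that idea with cosine similarity and greedy matching; it is a hypothetical, simplified helper, not code from this repo (the actual tracker also fuses motion cues from a Kalman filter before assignment).

```python
# Minimal sketch of appearance-based association, as used conceptually in JDE.
# Hypothetical helper, not this repo's implementation: the real tracker also
# fuses motion predictions (Kalman filter) into the assignment cost.
import math


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def associate(track_embs, det_embs, threshold=0.5):
    """Greedily match tracks to detections by embedding similarity.

    Returns a list of (track_idx, det_idx) pairs whose cosine similarity
    exceeds `threshold`; unmatched detections would spawn new tracks.
    """
    scored = []
    for t, te in enumerate(track_embs):
        for d, de in enumerate(det_embs):
            scored.append((cosine_similarity(te, de), t, d))
    scored.sort(reverse=True)  # highest-similarity pairs first
    matches, used_t, used_d = [], set(), set()
    for sim, t, d in scored:
        if sim < threshold:
            break
        if t not in used_t and d not in used_d:
            matches.append((t, d))
            used_t.add(t)
            used_d.add(d)
    return matches
```

In the full pipeline, the Hungarian algorithm is typically used instead of greedy matching, but the greedy version keeps the sketch dependency-free.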

A faster object-tracking pipeline for real-time tracking (https://arxiv.org/abs/2011.03910v1) is introduced in this repository.

We hope this repo will help researchers/engineers develop more practical MOT systems. For algorithm development, we provide training data, baseline models and evaluation methods to make a level playing field. For application usage, we also provide a small video demo that takes raw videos as input without any bells and whistles.

## Requirements

* Python 3.6
* PyTorch >= 1.2.0
* python-opencv
* py-motmetrics (`pip install motmetrics`)
* cython-bbox (`pip install cython_bbox`)
* (Optional) ffmpeg (used in the video demo)
* (Optional) syncbn (compile and place it under `utils/syncbn`, or simply replace it with `nn.BatchNorm` here)
* Apex
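
Before running the demo or training, it can help to verify the dependencies above are importable. The snippet below is a small, optional check; the import names are assumptions (e.g. python-opencv imports as `cv2`, cython-bbox as `cython_bbox`), so adjust them if your installation differs.

```python
# Optional dependency check for the requirements listed above.
# Import names are assumptions (python-opencv -> cv2, cython-bbox ->
# cython_bbox); adjust if your environment differs.
from importlib.util import find_spec

REQUIRED = ["torch", "cv2", "motmetrics", "cython_bbox"]
OPTIONAL = ["apex"]


def check(names):
    """Map each module name to True if it can be imported."""
    return {name: find_spec(name) is not None for name in names}


if __name__ == "__main__":
    for name, ok in {**check(REQUIRED), **check(OPTIONAL)}.items():
        print(f"{name}: {'found' if ok else 'MISSING'}")
```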

## Video Demo

Usage:

python demo.py --input-video path/to/your/input/video --weights path/to/model/weights --output-format video --output-root path/to/output/root --batch-size n


## Pretrained model and baseline models
Darknet-53 ImageNet pretrained model: [[DarkNet Official]](https://pjreddie.com/media/files/darknet53.conv.74)


## Test on MOT-16 Challenge

python track.py --cfg ./cfg/yolov3_1088x608.cfg --weights /path/to/model/weights

By default the script runs evaluation on the MOT-16 training set. If you want to evaluate on the test set, please add `--test-mot16` to the command line.
Results are saved as text files in `$DATASET_ROOT/results/*.txt`. You can also add the `--save-images` or `--save-videos` flag to obtain visualized results, which are saved in `$DATASET_ROOT/outputs/`.
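
The result files follow the standard MOT Challenge convention: each comma-separated line gives the frame number, track id, bounding box (left, top, width, height) and a confidence value, with `-1` placeholders for unused fields. A small parsing sketch (a hypothetical helper, not part of this repo):

```python
# Sketch of parsing the MOT-format result files written to
# $DATASET_ROOT/results/*.txt. Line layout (MOT Challenge convention):
# frame, id, bb_left, bb_top, bb_width, bb_height, conf, -1, -1, -1
def parse_mot_line(line):
    """Parse one comma-separated MOT result line into a dict."""
    fields = line.strip().split(",")
    frame, track_id = int(fields[0]), int(fields[1])
    left, top, width, height = map(float, fields[2:6])
    conf = float(fields[6]) if len(fields) > 6 else 1.0
    return {
        "frame": frame,
        "id": track_id,
        "box": (left, top, width, height),
        "conf": conf,
    }
```

Files in this format can be fed directly to py-motmetrics for MOTA/IDF1 evaluation.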

