- The paper is under submission...
- 2023-2-17: We have uploaded the experiment results along with some code; the full code will be released soon!
- 2023-8-26: We have uploaded part of the code; the full code will be released soon!
- We use BDD100K as our dataset, and experiments are run on an NVIDIA Tesla V100.
- The model is trained on the BDD100K train set and tested on the BDD100K val set.
- Note: the raw video comes from HybridNets.
- The results of our experiments are as follows.

Results on the BDD100K val set: we compare YOLOPv3 with YOLOP and HybridNets on an NVIDIA RTX 3080.
Model | Backbone | Params | FLOPs | Speed (fps)
---|---|---|---|---
YOLOP | CSPDarknet | 7.9M | 9.3G | 39
HybridNets | EfficientNet | 12.8M | 7.8G | 17
YOLOPv3 (no MRP) | ELAN-Net | 30.9M | 36.0G | 26
YOLOPv3 (MRP) | ELAN-Net | 30.2M | 35.4G | 37
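The speed column is single-image inference throughput. As a rough guide, here is a minimal sketch of how such GPU FPS numbers are typically measured; `model` is a placeholder for the constructed YOLOPv3 network, and this is not the repo's own benchmarking code:

```python
# Minimal FPS-measurement sketch (assumes a CUDA GPU and a loaded `model`).
import time
import torch

def measure_fps(model, img_size=640, n_warmup=10, n_runs=100, device="cuda"):
    model = model.to(device).eval()
    dummy = torch.randn(1, 3, img_size, img_size, device=device)
    with torch.no_grad():
        for _ in range(n_warmup):              # warm-up: stabilize CUDA kernels/clocks
            model(dummy)
        torch.cuda.synchronize()               # flush pending GPU work before timing
        start = time.time()
        for _ in range(n_runs):
            model(dummy)
        torch.cuda.synchronize()               # wait for the GPU before stopping the clock
    return n_runs / (time.time() - start)      # frames per second
```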
(Qualitative results: the result/visualization images are omitted here.)
├─inference
│ ├─image                 # inference images
│ ├─image_output          # inference results
├─lib
│ ├─config/default        # configuration of training and validation
│ ├─core
│ │ ├─activations.py      # activation functions
│ │ ├─evaluate.py         # metric calculation
│ │ ├─function.py         # training and validation of the model
│ │ ├─general.py          # metric calculation, NMS, data-format conversion, visualization
│ │ ├─loss.py             # loss functions
│ │ ├─yolov7_loss.py      # YOLOv7's loss function
│ │ ├─yolov7_general.py   # YOLOv7's general functions
│ │ ├─postprocess.py      # post-processing (refines da-seg and ll-seg; unrelated to the paper)
│ ├─dataset
│ │ ├─AutoDriveDataset.py # superclass dataset, general functions
│ │ ├─bdd.py              # subclass dataset, BDD100K-specific functions
│ │ ├─convect.py
│ │ ├─DemoDataset.py      # demo dataset (image, video, and stream)
│ ├─models
│ │ ├─YOLOP.py            # model setup and configuration
│ │ ├─commom.py           # calculation modules
│ ├─utils
│ │ ├─augmentations.py    # data augmentation
│ │ ├─autoanchor.py       # auto anchor (k-means)
│ │ ├─split_dataset.py    # (campus scene, unrelated to the paper)
│ │ ├─plot.py             # plot boxes and masks
│ │ ├─utils.py            # logging, device selection, time measurement, optimizer selection, model saving & initialization, distributed training
│ ├─run
│ │ ├─dataset/training time # visualization, logging, and model saving
├─tools
│ ├─demo.py               # demo (folder, camera)
│ ├─test.py
│ ├─train.py
├─weights                 # pre-trained models
This codebase has been developed with Python 3.7, PyTorch 1.12+, and torchvision 0.13+.
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
or
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
See requirements.txt for additional dependencies and version requirements.
pip install -r requirements.txt
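After installing, you can sanity-check that the environment matches the versions above with a small standalone snippet (not part of the repo):

```python
# Quick environment check: verify torch/torchvision versions and CUDA.
import torch
import torchvision

print("torch:", torch.__version__)               # expect 1.12.x
print("torchvision:", torchvision.__version__)   # expect 0.13.x
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```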
You can get the pre-trained model from here. Extraction code: jbty
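If you want to verify the download before running anything, here is a hedged sketch for inspecting the checkpoint; the key layout (a bare state dict vs. a wrapping dict) is an assumption, so adjust to what `torch.load` actually returns:

```python
# Inspect the downloaded checkpoint on CPU without building the model.
import torch

ckpt = torch.load("weights/epoch-189.pth", map_location="cpu")
# Checkpoints are often either a raw state_dict or a dict containing one.
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(type(ckpt))
print("number of entries:", len(state) if hasattr(state, "__len__") else "n/a")
```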
For BDD100K: imgs, det_annot, da_seg_annot, ll_seg_annot
We recommend the dataset directory structure to be the following:
# Matching file ids establish the correspondence between images and annotations
├─dataset root
│ ├─images
│ │ ├─train
│ │ ├─val
│ ├─det_annotations
│ │ ├─train
│ │ ├─val
│ ├─da_seg_annotations
│ │ ├─train
│ │ ├─val
│ ├─ll_seg_annotations
│ │ ├─train
│ │ ├─val
Update your dataset path in ./lib/config/default.py.
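As a hedged illustration of what that edit might look like, assuming a yacs-style config as in YOLOP, the option names below are illustrative; use the ones actually defined in lib/config/default.py:

```python
# Illustrative only -- the real option names live in lib/config/default.py.
from yacs.config import CfgNode as CN

_C = CN()
_C.DATASET = CN()
_C.DATASET.DATAROOT = "/path/to/dataset/images"              # train/val images
_C.DATASET.LABELROOT = "/path/to/dataset/det_annotations"    # detection labels
_C.DATASET.MASKROOT = "/path/to/dataset/da_seg_annotations"  # drivable-area masks
_C.DATASET.LANEROOT = "/path/to/dataset/ll_seg_annotations"  # lane-line masks
```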
Coming soon...
python tools/test.py --weights weights/epoch-189.pth
You can store images or videos in --source, and the inference results will be saved to --save-dir.

python tools/demo.py --weights weights/epoch-189.pth --source inference/image --save-dir inference/image_output --conf-thres 0.3 --iou-thres 0.45
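For intuition: --conf-thres discards low-confidence boxes, and --iou-thres sets the overlap threshold for non-maximum suppression. A minimal sketch of that post-processing step follows; it is not the repo's actual implementation (see lib/core/general.py for that):

```python
# Sketch of confidence filtering + NMS, as controlled by the two flags above.
import torch
from torchvision.ops import nms

def filter_detections(boxes, scores, conf_thres=0.3, iou_thres=0.45):
    # boxes: (N, 4) tensor in (x1, y1, x2, y2); scores: (N,) tensor.
    keep = scores > conf_thres            # drop low-confidence boxes first
    boxes, scores = boxes[keep], scores[keep]
    idx = nms(boxes, scores, iou_thres)   # suppress overlapping duplicates
    return boxes[idx], scores[idx]
```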
YOLOPv3 is released under the MIT License.
Our work would not be complete without the wonderful work of the following authors: