
YOLOPv3

Paper

  • The paper is under submission...

News

  • 2023-2-17: We've uploaded the experiment results along with some code, and the full code will be released soon!

  • 2023-8-26: We have uploaded part of the code and the full code will be released soon!

Results

  • We use the BDD100K dataset; experiments are run on an NVIDIA Tesla V100.
  • The model is trained on the BDD100K train set and tested on the BDD100K val set.

Video Visualization Results

  • Note: The raw video comes from HybridNets
  • The results of our experiments are as follows:

Image Visualization Results

  • Results on the BDD100K val set.

Model Parameters and Inference Speed

We compare YOLOPv3 with YOLOP and HybridNets on an NVIDIA RTX 3080.

| Model            | Backbone     | Params | FLOPs | Speed (fps) |
|------------------|--------------|--------|-------|-------------|
| YOLOP            | CSPDarknet   | 7.9M   | 9.3G  | 39          |
| HybridNets       | EfficientNet | 12.8M  | 7.8G  | 17          |
| YOLOPv3 (no MRP) | ELAN-Net     | 30.9M  | 36.0G | 26          |
| YOLOPv3 (MRP)    | ELAN-Net     | 30.2M  | 35.4G | 37          |
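
For reference, below is a minimal sketch of how inference speed (fps) is commonly measured in PyTorch. The stand-in model and the 640x640 input size are illustrative assumptions, not the exact benchmark script behind the table:

import time
import torch

# Stand-in network; replace with the actual YOLOPv3 model loaded from a checkpoint.
model = torch.nn.Conv2d(3, 16, 3).eval()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)

x = torch.randn(1, 3, 640, 640, device=device)  # assumed input resolution

with torch.no_grad():
    for _ in range(10):               # warm-up iterations
        model(x)
    if device == 'cuda':
        torch.cuda.synchronize()      # flush pending GPU work before timing
    start = time.time()
    n = 100
    for _ in range(n):
        model(x)
    if device == 'cuda':
        torch.cuda.synchronize()
    print(f'{n / (time.time() - start):.1f} fps')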

Traffic Object Detection Results

Result Visualization
| Model        | Recall (%) | mAP@0.5 (%) |
|--------------|------------|-------------|
| MultiNet     | 81.3       | 60.2        |
| DLT-Net      | 89.4       | 68.4        |
| Faster R-CNN | 77.2       | 55.6        |
| YOLOv5s      | 86.8       | 77.2        |
| YOLOP        | 89.2       | 76.5        |
| HybridNets   | 92.8       | 77.3        |
| YOLOPv2      | 91.1       | 83.4        |
| YOLOPv3      | 96.9       | 84.3        |

Drivable Area Segmentation

Result Visualization
| Model      | Drivable mIoU (%) |
|------------|-------------------|
| MultiNet   | 71.6              |
| DLT-Net    | 71.3              |
| PSPNet     | 89.6              |
| YOLOP      | 91.5              |
| HybridNets | 90.5              |
| YOLOPv2    | 93.2              |
| YOLOPv3    | 93.2              |

Lane Line Detection

Result Visualization
| Model      | Accuracy (%) | Lane Line IoU (%) |
|------------|--------------|-------------------|
| ENet       | 34.12        | 14.64             |
| SCNN       | 35.79        | 15.84             |
| ENet-SAD   | 36.56        | 16.02             |
| YOLOP      | 70.5         | 26.2              |
| HybridNets | 85.4         | 31.6              |
| YOLOPv2    | 87.3         | 27.2              |
| YOLOPv3    | 88.3         | 28.0              |

Project Structure

├─inference
│ ├─image   # inference images
│ ├─image_output   # inference results
├─lib
│ ├─config/default   # configuration of training and validation
│ ├─core
│ │ ├─activations.py   # activation functions
│ │ ├─evaluate.py   # metric calculation
│ │ ├─function.py   # training and validation of the model
│ │ ├─general.py   # metric calculation, NMS, data-format conversion, visualization
│ │ ├─loss.py   # loss functions
│ │ ├─yolov7_loss.py   # YOLOv7's loss function
│ │ ├─yolov7_general.py   # YOLOv7's general functions
│ │ ├─postprocess.py   # post-processing (refines da-seg and ll-seg; unrelated to the paper)
│ ├─dataset
│ │ ├─AutoDriveDataset.py   # superclass dataset, general functions
│ │ ├─bdd.py   # subclass dataset, BDD100K-specific functions
│ │ ├─convect.py
│ │ ├─DemoDataset.py   # demo dataset (image, video and stream)
│ ├─models
│ │ ├─YOLOP.py   # setup and configuration of the model
│ │ ├─commom.py   # calculation modules
│ ├─utils
│ │ ├─augmentations.py   # data augmentation
│ │ ├─autoanchor.py   # auto anchor (k-means)
│ │ ├─split_dataset.py   # (campus scene, unrelated to the paper)
│ │ ├─plot.py   # plot boxes and masks
│ │ ├─utils.py   # logging, device selection, time measurement, optimizer selection, model saving & initialization, distributed training
│ ├─run
│ │ ├─dataset/training time   # visualization, logging and model saving
├─tools
│ ├─demo.py   # demo (folder, camera)
│ ├─test.py
│ ├─train.py
├─weights   # pre-trained models

Requirement

This codebase was developed with Python 3.7, PyTorch 1.12+ and torchvision 0.13+.

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113

or

conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch

See requirements.txt for additional dependencies and version requirements.

pip install -r requirements.txt
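
You can then sanity-check the environment (a quick optional verification, not part of the original instructions):

import torch
import torchvision

print(torch.__version__, torchvision.__version__)  # expect 1.12.1 / 0.13.1
print(torch.cuda.is_available())                   # True if the cu113 build sees a GPU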

Pre-trained Model

You can get the pre-trained model from here. Extraction code: jbty
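
Once downloaded, the checkpoint can be inspected with standard PyTorch calls. A minimal sketch; the checkpoint's internal key layout is an assumption, not documented by the repo:

import torch

# Load onto the CPU first; move weights to the GPU only after the model is built.
ckpt = torch.load('weights/epoch-189.pth', map_location='cpu')

# Checkpoints are often dicts wrapping the weights; inspect the top-level keys.
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))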

Dataset

For BDD100K: imgs, det_annot, da_seg_annot, ll_seg_annot

We recommend organizing the dataset directory as follows:

# Matching file ids establish the correspondence between images and annotations
├─dataset root
│ ├─images
│ │ ├─train
│ │ ├─val
│ ├─det_annotations
│ │ ├─train
│ │ ├─val
│ ├─da_seg_annotations
│ │ ├─train
│ │ ├─val
│ ├─ll_seg_annotations
│ │ ├─train
│ │ ├─val

Update your dataset path in ./lib/config/default.py.
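
The edit typically looks like the following. This is a sketch only; the attribute names are assumptions modeled on YOLOP-style configs, so check the actual contents of default.py:

# inside ./lib/config/default.py -- hypothetical attribute names
_C.DATASET.DATAROOT  = '/path/to/dataset_root/images'              # images
_C.DATASET.LABELROOT = '/path/to/dataset_root/det_annotations'     # detection labels
_C.DATASET.MASKROOT  = '/path/to/dataset_root/da_seg_annotations'  # drivable-area masks
_C.DATASET.LANEROOT  = '/path/to/dataset_root/ll_seg_annotations'  # lane-line masks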

Training

Coming soon...

Evaluation

python tools/test.py --weights weights/epoch-189.pth

Demo

Point --source at a folder of images or a video; inference results are saved to --save-dir.

python tools/demo.py --weights weights/epoch-189.pth \
                     --source inference/image \
                     --save-dir inference/image_output \
                     --conf-thres 0.3 \
                     --iou-thres 0.45
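
Per DemoDataset.py, the demo also handles video and camera streams. For example (the camera-index convention below is an assumption, mirroring common YOLO-style demos):

python tools/demo.py --weights weights/epoch-189.pth --source inference/video.mp4   # video file
python tools/demo.py --weights weights/epoch-189.pth --source 0                     # webcam (hypothetical)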

License

YOLOPv3 is released under the MIT License.

Acknowledgements

Our work would not be complete without the wonderful work of the following authors:
