From 4aecaae94cf59148d6cc09301803328f2c5321ca Mon Sep 17 00:00:00 2001
From: MaxJa4 <74194322+MaxJa4@users.noreply.github.com>
Date: Sun, 12 Nov 2023 16:10:53 +0100
Subject: [PATCH 1/3] Update Readme.md

---
 doc/03_research/02_perception/Readme.md | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/doc/03_research/02_perception/Readme.md b/doc/03_research/02_perception/Readme.md
index 36d22f9f..157efa2d 100644
--- a/doc/03_research/02_perception/Readme.md
+++ b/doc/03_research/02_perception/Readme.md
@@ -1,5 +1,9 @@
 # Perception
 
-This folder contains all the results of our research on perception:
+This folder contains all the results of research on perception:
 
-* [Basics](./02_basics.md)
+* **PAF22**
+  * [Basics](./02_basics.md)
+  * [First implementation plan](./03_first_implementation_plan.md)
+* **PAF23**
+  * [Pylot](./04_pylot.md)

From 73b26a0eab19a65b06c4e4e8f842fa928dc3f37f Mon Sep 17 00:00:00 2001
From: MaxJa4 <74194322+MaxJa4@users.noreply.github.com>
Date: Sun, 12 Nov 2023 16:11:24 +0100
Subject: [PATCH 2/3] Create pylot docs

---
 doc/03_research/02_perception/04_pylot.md | 54 +++++++++++++++++++++++
 1 file changed, 54 insertions(+)
 create mode 100644 doc/03_research/02_perception/04_pylot.md

diff --git a/doc/03_research/02_perception/04_pylot.md b/doc/03_research/02_perception/04_pylot.md
new file mode 100644
index 00000000..6673f9c2
--- /dev/null
+++ b/doc/03_research/02_perception/04_pylot.md
@@ -0,0 +1,54 @@
# Pylot - Perception

**Authors:** Maximilian Jannack

**Date:** 2.11.2023

---

## [Detection](https://pylot.readthedocs.io/en/latest/perception.detection.html)

### Obstacle detection

Pylot provides two options for obstacle detection:

1. 
Obstacle detection operator that can use any model that adheres to the TensorFlow `object detection model zoo` interface
   - By default, three models that were trained on 1080p CARLA images (`faster-rcnn`, `ssd-mobilenet-fpn-640`, and `ssdlite-mobilenet-v2`) are provided
   - Models that have been trained on other datasets can easily be plugged in
2. Operator that can run inference with any of the EfficientDet models (not trained on CARLA data, but on the COCO dataset)

### Traffic light detection

Uses `Faster R-CNN` weights (trained on 1080p CARLA images).

### Lane detection

Uses the `Lanenet` model ([repo](https://github.com/MaybeShewill-CV/lanenet-lane-detection)) or a Canny edge detector.

---

## [Obstacle Tracking](https://pylot.readthedocs.io/en/latest/perception.tracking.html)

Tracks obstacles across frames.
Uses the `DaSiamRPN` model ([repo](https://github.com/foolwood/DaSiamRPN)), run serially per obstacle, to track multiple obstacles.

---

## [Depth Estimation](https://pylot.readthedocs.io/en/latest/perception.depth_estimation.html)

Uses stereo cameras to estimate depth with the `AnyNet` model ([repo](https://github.com/mileyan/AnyNet)).
The distance between the left and right camera is configurable.

---

## [Segmentation](https://pylot.readthedocs.io/en/latest/perception.segmentation.html)

Offers different approaches, such as the `DRN` model ([repo](https://github.com/ICGog/drn)), for segmenting camera images.
No model trained on CARLA data is available, and the output of the segmentation component is currently not used anywhere in Pylot.

---

## [Lidar](https://github.com/erdos-project/pylot/blob/master/pylot/perception/point_cloud.py)

Pylot contains a few helpful functions for handling point clouds from the Lidar sensor.
It can, for example, merge two point clouds or map the points onto a camera image.
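The two Lidar helpers mentioned above can be sketched as follows. This is a minimal NumPy illustration of the underlying idea (merging is array concatenation; mapping to an image is a pinhole-camera projection), not Pylot's actual implementation — the function names and the intrinsic matrix values are made up for a hypothetical 1920x1080 camera.

```python
import numpy as np

def merge_point_clouds(cloud_a, cloud_b):
    """Merging two (N, 3) point clouds is just concatenating their point arrays."""
    return np.vstack([cloud_a, cloud_b])

def project_to_image(points, intrinsic):
    """Project (N, 3) points given in camera coordinates onto the image plane.

    points:    rows of (x, y, z), with z > 0 pointing away from the camera
    intrinsic: 3x3 pinhole camera matrix K
    Returns (N, 2) pixel coordinates.
    """
    projected = (intrinsic @ points.T).T         # apply K to every point
    return projected[:, :2] / projected[:, 2:3]  # divide by depth z

# Intrinsics for a hypothetical 1920x1080 camera with a 90° horizontal FOV:
# focal length f = width / (2 * tan(fov / 2)) = 960, principal point (960, 540).
K = np.array([[960.0,   0.0, 960.0],
              [  0.0, 960.0, 540.0],
              [  0.0,   0.0,   1.0]])

# A point straight ahead of the camera lands at the image centre.
print(project_to_image(np.array([[0.0, 0.0, 10.0]]), K))  # [[960. 540.]]
```

Note that a real sensor setup additionally needs the extrinsic transform from the Lidar frame into the camera frame before this projection step.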
From ca1224116ca6bba9c34e5a5ff45bb3e9ca4c7327 Mon Sep 17 00:00:00 2001
From: MaxJa4 <74194322+MaxJa4@users.noreply.github.com>
Date: Sun, 12 Nov 2023 16:31:07 +0100
Subject: [PATCH 3/3] Correct date

---
 doc/03_research/02_perception/04_pylot.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/03_research/02_perception/04_pylot.md b/doc/03_research/02_perception/04_pylot.md
index 6673f9c2..3b82e29e 100644
--- a/doc/03_research/02_perception/04_pylot.md
+++ b/doc/03_research/02_perception/04_pylot.md
@@ -2,7 +2,7 @@ # Pylot - Perception
 
 **Authors:** Maximilian Jannack
 
-**Date:** 2.11.2023
+**Date:** 12.11.2023
 
 ---