diff --git a/doc/research/paf24/segmentation.md b/doc/research/paf24/segmentation.md
index 4f81577e..674f749b 100644
--- a/doc/research/paf24/segmentation.md
+++ b/doc/research/paf24/segmentation.md
@@ -11,16 +11,16 @@
 
 ## Already implemented solutions
 
-https://github.com/una-auxme/paf/blob/c3011ee70039e199e106c54aa162a8f52be241a6/code/perception/launch/perception.launch?plain=1#L59-L61
+<https://github.com/una-auxme/paf/blob/c3011ee70039e199e106c54aa162a8f52be241a6/code/perception/launch/perception.launch?plain=1#L59-L61>
 
 probably trained with the generated dataset:
-https://github.com/una-auxme/paf/blob/8e8f9a1a03ae09d5ac763c1a11b398fc1ce144b0/code/perception/src/dataset_generator.py#L109-L110
+<https://github.com/una-auxme/paf/blob/8e8f9a1a03ae09d5ac763c1a11b398fc1ce144b0/code/perception/src/dataset_generator.py#L109-L110>
 
-## Implemented but dropped:
+## Implemented but dropped
 
-https://github.com/una-auxme/paf/blob/8c968fb5c6c44c15b2733c5a181c496eb9b244be/doc/perception/efficientps.md#efficientps
+<https://github.com/una-auxme/paf/blob/8c968fb5c6c44c15b2733c5a181c496eb9b244be/doc/perception/efficientps.md#efficientps>
 
-## Carla Sensors:
+## Carla Sensors
 
 ![Alt text](https://carla.readthedocs.io/en/0.9.14/img/ref_sensors_semantic.jpg)
 ![Alt text](https://carla.readthedocs.io/en/0.9.14/img/tuto_sem.jpg)
@@ -30,7 +30,7 @@
 the pedestrian crosswalks will be labeled as road lines
 
 example:
-```
+```Python
 
 # --------------
 # Add a new semantic segmentation camera to my ego
@@ -50,10 +50,10 @@
 sem_cam.listen(lambda image: image.save_to_disk('tutorial/new_sem_output/%.6d.jpg'))
 ```
 For more information:
-https://carla.readthedocs.io/en/0.9.14/ref_sensors/#semantic-segmentation-camera:~:text=the%20object%20it.-,Semantic%20segmentation%20camera,-Blueprint%3A%20sensor
+<https://carla.readthedocs.io/en/0.9.14/ref_sensors/#semantic-segmentation-camera:~:text=the%20object%20it.-,Semantic%20segmentation%20camera,-Blueprint%3A%20sensor>
 
-https://carla.readthedocs.io/en/0.9.14/tuto_G_retrieve_data/#semantic-segmentation-camera:~:text=on%20the%20right.-,Semantic%20segmentation%20camera,-The%20semantic%20segmentation
+<https://carla.readthedocs.io/en/0.9.14/tuto_G_retrieve_data/#semantic-segmentation-camera:~:text=on%20the%20right.-,Semantic%20segmentation%20camera,-The%20semantic%20segmentation>
 
-## Follow up Question:
+## Follow-up Question
 
 Why did the last group use bounding boxes instead of the segmentation model? Is it too slow, or not reliable enough?
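
The diff only shows fragments of the CARLA camera example around the changed lines. For orientation, here is a minimal sketch, assuming the CARLA 0.9.14 Python API and a simulator running on `localhost:2000`, of how a semantic segmentation camera is attached to an ego vehicle and its output saved with the CityScapes palette. The resolution, mounting transform, and the way `ego_vehicle` is looked up are illustrative assumptions, not the values from the linked tutorial.

```Python
import carla

# Connect to a running CARLA server (host/port are assumptions).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Assumption: an ego vehicle has already been spawned; grab the first vehicle actor.
ego_vehicle = world.get_actors().filter("vehicle.*")[0]

# Semantic segmentation camera blueprint (see the ref_sensors page linked above).
sem_bp = world.get_blueprint_library().find("sensor.camera.semantic_segmentation")
sem_bp.set_attribute("image_size_x", str(1280))
sem_bp.set_attribute("image_size_y", str(720))
sem_bp.set_attribute("fov", str(105))

# Mount the camera on the ego vehicle, slightly forward of and above the hood.
sem_transform = carla.Transform(carla.Location(x=2.0, z=1.0))
sem_cam = world.spawn_actor(sem_bp, sem_transform, attach_to=ego_vehicle)

# The raw output encodes the semantic tag per pixel; converting with the
# CityScapesPalette produces the colored segmentation view shown in the images above.
sem_cam.listen(
    lambda image: image.save_to_disk(
        "output/semantic/%06d.png", carla.ColorConverter.CityScapesPalette
    )
)
```

Without the `ColorConverter`, the callback's `image.raw_data` keeps the per-pixel class tags, which is the form a downstream perception node would actually consume rather than the colored visualization.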