# Segmentation
**Summary:** This page contains the research into the segmentation component of Autoware.

- [Segmentation](#segmentation)
- [Already implemented solutions](#already-implemented-solutions)
- [Implemented but dropped](#implemented-but-dropped)
- [Carla Sensors](#carla-sensors)
- [Follow-up Question](#follow-up-question)

## Already implemented solutions

https://github.com/una-auxme/paf/blob/8e8f9a1a03ae09d5ac763c1a11b398fc1ce144b0/c
https://github.com/una-auxme/paf/blob/8c968fb5c6c44c15b2733c5a181c496eb9b244be/doc/perception/efficientps.md#efficientps

## Carla Sensors

https://carla.readthedocs.io/en/0.8.4/cameras_and_sensors/

In the legacy 0.8.4 API, a semantic segmentation camera was configured like this:
```
camera = carla.sensor.Camera('MyCamera', PostProcessing='SemanticSegmentation')
camera.set(FOV=90.0)
camera.set_image_size(800, 600)
camera.set_position(x=0.30, y=0, z=1.30)
camera.set_rotation(pitch=0, yaw=0, roll=0)

carla_settings.add_sensor(camera)
```

![Semantic segmentation camera reference](https://carla.readthedocs.io/en/0.9.14/img/ref_sensors_semantic.jpg)
![Semantic segmentation tutorial output](https://carla.readthedocs.io/en/0.9.14/img/tuto_sem.jpg)
![Carla segmentation sensor output](https://github.com/una-auxme/paf/blob/368-visionnode-and-segmentation/doc/assets/perception/Carla_Segmentation_Sensor.png)

For more context: pedestrian crosswalks are labeled as road lines.

There is another solution implemented by the CARLA simulator, using the 0.9.x Python API. Example:
```
# --------------
# Add a new semantic segmentation camera to my ego
# --------------
sem_bp = world.get_blueprint_library().find('sensor.camera.semantic_segmentation')
sem_bp.set_attribute("image_size_x", str(1920))
sem_bp.set_attribute("image_size_y", str(1080))
sem_bp.set_attribute("fov", str(105))
sem_location = carla.Location(2, 0, 1)
sem_rotation = carla.Rotation(0, 180, 0)
sem_transform = carla.Transform(sem_location, sem_rotation)
sem_cam = world.spawn_actor(sem_bp, sem_transform,
                            attach_to=ego_vehicle,
                            attachment_type=carla.AttachmentType.Rigid)
# A color converter is applied to each image to get the semantic segmentation view
sem_cam.listen(lambda image: image.save_to_disk(
    'tutorial/new_sem_output/%.6d.jpg' % image.frame,
    carla.ColorConverter.CityScapesPalette))
```
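
If the raw class labels are needed instead of the colorized view, the camera encodes the semantic tag of every pixel in the red channel of the raw BGRA image. Below is a minimal sketch of decoding such a mask, assuming `numpy` is available and `image` is the `carla.Image` delivered to the callback above; `ROAD_LINE_TAG` and `semantic_tag_mask` are illustrative names, and the tag ID should be verified against the semantic tags table in the documentation of the CARLA version in use:

```
import numpy as np

# Assumed tag ID for "RoadLine" (which, as noted above, also covers
# pedestrian crosswalks) in CARLA 0.9.14; verify against the semantic
# tags table of the installed CARLA version.
ROAD_LINE_TAG = 24

def semantic_tag_mask(image, tag_id):
    # carla.Image.raw_data is a flat BGRA byte buffer; the semantic
    # tag of each pixel is stored in the red channel (index 2).
    array = np.frombuffer(image.raw_data, dtype=np.uint8)
    array = array.reshape((image.height, image.width, 4))
    return array[:, :, 2] == tag_id
```

Such a mask could then be consumed directly in the listener, e.g. `sem_cam.listen(lambda image: handle_mask(semantic_tag_mask(image, ROAD_LINE_TAG)))`, where `handle_mask` stands in for whatever downstream processing is needed.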
For more information:

- https://carla.readthedocs.io/en/0.9.14/ref_sensors/#semantic-segmentation-camera:~:text=the%20object%20it.-,Semantic%20segmentation%20camera,-Blueprint%3A%20sensor
- https://carla.readthedocs.io/en/0.9.14/tuto_G_retrieve_data/#semantic-segmentation-camera:~:text=on%20the%20right.-,Semantic%20segmentation%20camera,-The%20semantic%20segmentation

## Follow-up Question

Why did the last group use bounding boxes instead of the segmentation model? Was the model too slow, or not reliable enough?
