diff --git a/doc/research/paf24/segmentation.md b/doc/research/paf24/segmentation.md
new file mode 100644
index 00000000..674f749b
--- /dev/null
+++ b/doc/research/paf24/segmentation.md
@@ -0,0 +1,59 @@

# Segmentation

**Summary:** This page contains the research into the segmentation component of Autoware.

- [Segmentation](#segmentation)
  - [Already implemented solutions](#already-implemented-solutions)
  - [Implemented but dropped](#implemented-but-dropped)
  - [Carla Sensors](#carla-sensors)
  - [Follow-up Question](#follow-up-question)
  - [Decoding the semantic tags](#decoding-the-semantic-tags)

## Already implemented solutions

Probably trained with the generated dataset:

## Implemented but dropped

## Carla Sensors

![Semantic segmentation camera view](https://carla.readthedocs.io/en/0.9.14/img/ref_sensors_semantic.jpg)
![Semantic segmentation tutorial output](https://carla.readthedocs.io/en/0.9.14/img/tuto_sem.jpg)

For more context: pedestrian crosswalks will be labelled as road lines.

Example:

```Python
# --------------
# Add a new semantic segmentation camera to my ego vehicle
# --------------
# `world` and `ego_vehicle` are assumed to exist already (standard CARLA client setup)
sem_bp = world.get_blueprint_library().find('sensor.camera.semantic_segmentation')
sem_bp.set_attribute("image_size_x", str(1920))
sem_bp.set_attribute("image_size_y", str(1080))
sem_bp.set_attribute("fov", str(105))
sem_location = carla.Location(2, 0, 1)
sem_rotation = carla.Rotation(0, 180, 0)
sem_transform = carla.Transform(sem_location, sem_rotation)
sem_cam = world.spawn_actor(sem_bp, sem_transform, attach_to=ego_vehicle,
                            attachment_type=carla.AttachmentType.Rigid)
# A color converter is applied to the image to get the semantic segmentation view
sem_cam.listen(lambda image: image.save_to_disk(
    'tutorial/new_sem_output/%.6d.jpg' % image.frame,
    carla.ColorConverter.CityScapesPalette))
```

Instead of saving the color-converted view, the per-pixel tags can also be read directly from the raw sensor output; see [Decoding the semantic tags](#decoding-the-semantic-tags) below.

For more information:

## Follow-up Question

Why did the last group use bounding boxes instead of the segmentation model? Was the model too slow, or not reliable enough?
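
## Decoding the semantic tags

As referenced in the Carla Sensors section above, the semantic segmentation camera encodes the tag of every pixel in the red channel of its raw BGRA output, so the class information can be used directly without the CityScapes color converter. The following is a minimal sketch of such a callback using NumPy; it assumes the `sem_cam` actor from the example above, and the function name `on_semantic_image` is only illustrative.

```Python
import numpy as np

def on_semantic_image(image):
    # image.raw_data is a flat BGRA buffer with 4 bytes per pixel
    array = np.frombuffer(image.raw_data, dtype=np.uint8)
    array = array.reshape((image.height, image.width, 4))
    # The semantic tag of each pixel is encoded in the red channel
    # (index 2, because the memory layout is BGRA)
    tags = array[:, :, 2]
    print("frame %d contains the tags %s" % (image.frame, np.unique(tags)))

# Register this callback instead of saving the converted images to disk
sem_cam.listen(on_semantic_image)
```

The tag values follow the class list in the CARLA documentation, so the mask for a single class (for example road lines, which also cover the pedestrian crossings) can be obtained with a simple comparison on the `tags` array.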