diff --git a/doc/06_perception/07_vision_node.md b/doc/06_perception/07_vision_node.md
index a21f9055..1b72ead4 100644
--- a/doc/06_perception/07_vision_node.md
+++ b/doc/06_perception/07_vision_node.md
@@ -12,18 +12,17 @@ The following code shows how the Vision-Node is specified in perception.launch
-
 `
-
 Depending on preferences and targets a different model can be used by replacing the value of the model parameter by one of the lines from the comment above.
@@ -32,7 +31,6 @@ The Vision-Node will automatically switch between object-detection, imagesegment
 For now the Vision-Node only supports pyTorch models. Within the next sprint it should be able to accept other frameworks aswell.
 It should also be possible to run object-detection and image-segmentation at the same time.
-
 ## How it works

 ### Initialization
@@ -61,7 +59,6 @@ This function is automatically triggered by the Camera-Subscriber of the Vision-
 5. Convert CV2-Image to ImageMsg
 6. Publish ImageMsg over ImagePublisher
-
 ## Visualization

 The Vision-Node implements an ImagePublisher under the topic: "/paf//Center/segmented_image"
@@ -72,10 +69,10 @@ The Configuartion File of RViz has been changed accordingly to display the publi
 ### Time

-First experiments showed that the handle_camera_image function is way to slow to be used reliably. It takes around 1.5 seconds to handle one image.
+First experiments showed that the handle_camera_image function is way too slow to be used reliably. It takes around 1.5 seconds to handle one image.

 Right now the Vision-Node is not using cuda due to cuda-memory-issues that couldn't be fixed right away. The performance is expected to rise quite a bit when using cuda.

-Also their is lots more room for testing different models inside the Vision-Node to evualte their accuracy and time-performance.
\ No newline at end of file
+Also there is lots more room for testing different models inside the Vision-Node to evaluate their accuracy and time-performance.