
Add overview diagram of the perception and rearrange README file #501

Open

wants to merge 4 commits into main

Conversation


@Toni2go Toni2go commented Nov 13, 2024

Description

A diagram providing an overview of the perception was added.
The README file was rearranged so that the documentation files of nodes that work together to provide a specific functionality are listed together.

Fixes #500

Type of change


  • New feature (non-breaking change which adds functionality)

Does this PR introduce a breaking change?

no

Most important changes

README.md and new file in assets/perception

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works (might be obsolete with CI later on)
  • New and existing unit tests pass locally with my changes (might be obsolete with CI later on)

Summary by CodeRabbit

  • Documentation
    • Updated and reorganized the README for perception components to enhance clarity and readability.
    • Introduced distinct sections for "Object Detection / Distance and Segmentation," "Localization," and "Unused files."
    • Added a diagram titled "Perception Overview" to illustrate the nodes and topics used in the perception system.
    • Expanded descriptions for key components and reformatted entries for consistency.

@Toni2go Toni2go linked an issue Nov 13, 2024 that may be closed by this pull request

coderabbitai bot commented Nov 13, 2024

Walkthrough

The changes in this pull request involve the reorganization and updating of the documentation in doc/perception/README.md. The introductory text has been modified for better readability, and the perception components have been categorized into distinct sections: "Object Detection / Distance and Segmentation," "Localization," and "Unused files." Each section has been reformatted for consistency, and notes regarding unused files have been added to clarify their status.

Changes

File Change Summary
doc/perception/README.md Reorganized documentation into sections; updated introductory text; reformatted entries for clarity; added a new diagram titled "Perception Overview."

Assessment against linked issues

Objective Addressed Explanation
Add an overview of the perception in the documentation (#500)

Suggested labels

documentation, general, system

Suggested reviewers

  • Lukasnol

🐇 "In the garden of knowledge, we hop and we play,
With sections and clarity, we brighten the day.
From objects to localization, all nicely aligned,
Unused files noted, for clarity defined.
So let’s leap through the pages, with joy we will sing,
For a well-organized README, oh what joy it will bring!" 🐇


@Toni2go Toni2go added this to the Sprint 02 2024-11-25 milestone Nov 13, 2024

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (3)
doc/perception/README.md (3)

3-8: Optimize the documentation structure and improve readability.

  1. The diagram path can be simplified as we're already in the doc directory.
  2. The introduction text could be more polished.

Consider these improvements:

-![Perception Overview](../../doc/assets/perception/perception_overview.png)
+![Perception Overview](../assets/perception/perception_overview.png)
-In the following diagram you can see all the nodes and topics used in the perception to show how they work together.
+In the following diagram, you can see all the nodes and topics used in the perception system and how they interact with each other.
🧰 Tools
🪛 LanguageTool

[uncategorized] ~5-~5: Possible missing comma found.
Context: ...erception components. In the following diagram you can see all the nodes and topics us...

(AI_HYDRA_LEO_MISSING_COMMA)


11-16: Maintain consistent documentation style across components.

Currently, only the Vision Node has a description. Consider adding brief descriptions for all components to maintain consistency and improve understanding.

Example structure:

- [Vision Node](./vision_node.md)
   - The Vision Node provides an adaptive interface that is able to perform object detection and/or image segmentation on multiple cameras at the same time.
- [Distance to Objects](./distance_to_objects.md)
   - Calculates the distance to detected objects using camera data.
- [Traffic Light Detection](./traffic_light_detection.md)
   - Handles the detection and classification of traffic lights.
🧰 Tools
🪛 LanguageTool

[style] ~12-~12: As a shorter alternative for ‘able to’, consider using “can”.
Context: ...ode provides an adaptive interface that is able to perform object detection and/or image s...

(BE_ABLE_TO)

🪛 Markdownlint

12-12: Expected: 2; Actual: 3
Unordered list indentation

(MD007, ul-indent)


29-32: Fix typo and consider documentation structure for deprecated components.

There's a typo in the word "since" and the indentation is inconsistent.

Apply this fix:

- - [Efficient PS](./efficientps.md)
-     - Not used scince paf22 and never successfully tested
+ - [Efficient PS](./efficientps.md)
+   - Not used since paf22 and never successfully tested

Consider moving deprecated components to a separate deprecated.md file to keep the main README focused on active components while maintaining historical information.

🧰 Tools
🪛 Markdownlint

30-30: Expected: 2; Actual: 4
Unordered list indentation

(MD007, ul-indent)


32-32: Expected: 2; Actual: 4
Unordered list indentation

(MD007, ul-indent)
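The ul-indent findings above (markdownlint rule MD007) could also be enforced automatically instead of being fixed by hand in each PR. A minimal sketch of a repository-level markdownlint configuration, assuming markdownlint-cli is used and picks up a `.markdownlint.yaml` at the repository root (file name and placement are illustrative):

```yaml
# .markdownlint.yaml — illustrative sketch, not part of this PR
default: true     # enable all rules by default
MD007:
  indent: 2       # require two-space indentation for nested unordered lists
```

With such a file in place, the indentation inconsistencies flagged in this review would fail lint locally and in CI rather than surfacing as review comments.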

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 92b0377 and b58e1b5.

⛔ Files ignored due to path filters (1)
  • doc/assets/perception/perception_overview.png is excluded by !**/*.png
📒 Files selected for processing (1)
  • doc/perception/README.md (1 hunks)

🔇 Additional comments (2)
doc/perception/README.md (2)

36-36: LGTM!

The experiments section provides a clear reference to the performance evaluations documentation.


18-25: Verify the localization documentation structure.

The section references a separate localization file. Let's ensure all referenced files exist and are properly linked.

✅ Verification successful

Localization documentation structure is verified.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify existence of all referenced markdown files
echo "Checking referenced documentation files..."
for file in localization.md kalman_filter.md position_heading_publisher_node.md position_heading_filter_debug_node.md coordinate_transformation.md; do
  if [ -f "doc/perception/$file" ]; then
    echo "$file exists"
  else
    echo "✗ Missing: $file"
  fi
done

Length of output: 1519
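The bash loop above checks a hard-coded list of files. A small Python sketch of the same idea that instead scans a README's own relative links and reports missing targets (the regex and helper name are illustrative, not part of this repository):

```python
import re
from pathlib import Path

# Matches markdown links like [Label](./target.md); captures the target
# up to a closing parenthesis or a '#' anchor.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)")

def missing_links(readme_text: str, base_dir: Path) -> list[str]:
    """Return relative link targets in readme_text that do not exist under base_dir."""
    missing = []
    for target in LINK_RE.findall(readme_text):
        if target.startswith(("http://", "https://", "mailto:")):
            continue  # only verify relative file links
        if not (base_dir / target).exists():
            missing.append(target)
    return missing
```

Run against `doc/perception/README.md` with `base_dir=Path("doc/perception")`, this would catch broken references like the ones the verification script above was checking for.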

@Toni2go Toni2go force-pushed the 500-general-add-an-overview-of-the-perception-in-the-documentation branch from b58e1b5 to 18318b4 Compare November 13, 2024 10:31

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (3)
doc/perception/README.md (3)

3-6: Polish the introduction text

Consider these grammatical improvements:

-This folder contains documentation of the perception components.
-
-In the following diagram you can see all the nodes and topics used in the perception to show how they work together.
+This folder contains documentation of the perception components.
+
+The following diagram shows all nodes and topics used in perception and illustrates how they work together.

11-16: Consider adding brief descriptions for all components

The Vision Node entry includes a helpful description, but other components lack descriptions. Consider adding brief descriptions for all components to maintain consistency and improve documentation clarity.

Example format:

 - [Vision Node](./vision_node.md)
   - The Vision Node provides an adaptive interface that is able to perform object detection and/or image segmentation on multiple cameras at the same time.
 - [Distance to Objects](./distance_to_objects.md)
+  - Calculates distances to detected objects using camera data.
 - [Traffic Light Detection](./traffic_light_detection.md)
+  - Processes images to detect and classify traffic light states.


29-32: Fix typo in version note

 - [Lidar Distance Utility](./lidar_distance_utility.md)
   - Not used since paf22
 - [Efficient PS](./efficientps.md)
-  - Not used scince paf22 and never successfully tested
+  - Not used since paf22 and never successfully tested
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between b58e1b5 and 18318b4.

⛔ Files ignored due to path filters (1)
  • doc/assets/perception/perception_overview.png is excluded by !**/*.png
📒 Files selected for processing (1)
  • doc/perception/README.md (1 hunks)

🔇 Additional comments (3)
doc/perception/README.md (3)

18-25: LGTM! Well-organized localization section

The section is well-structured with clear component separation and appropriate references. The helper functions annotation is particularly helpful.



1-36: Great job meeting PR objectives!

The changes successfully achieve the main goals:

  1. Added the perception overview diagram
  2. Improved documentation structure with clear sections
  3. Enhanced readability with better organization

The documentation is now more maintainable and user-friendly.



7-7: Verify the diagram file existence

The diagram is a crucial addition that meets the PR objectives. Let's verify its existence and location.

✅ Verification successful

✓ Diagram exists and is properly located

  • File: doc/assets/perception/perception_overview.png (744K)
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify the existence of the perception overview diagram
if [ -f "doc/assets/perception/perception_overview.png" ]; then
    echo "✓ Diagram exists"
    file "doc/assets/perception/perception_overview.png"
else
    echo "✗ Diagram not found at specified path"
fi

Length of output: 226


Script:

#!/bin/bash
# Verify the existence and details of the perception overview diagram
if [ -f "doc/assets/perception/perception_overview.png" ]; then
    echo "✓ Diagram exists"
    ls -lh "doc/assets/perception/perception_overview.png"
else
    echo "✗ Diagram not found at specified path"
fi

Length of output: 251


@asamluka asamluka left a comment


A good visualization of the perception sector.

Collaborator

It is not obvious where Center/image, Back/image, etc. come from. Maybe you can add the camera module to the graph.

Collaborator Author

Would you suggest a box similar to the one used for localization? I'm not sure if the graph would then still be consistent with the topics from the other sensors. Do you think the topics from all sensors should get their own color?

Collaborator

I was referring more to the source of the topics. For 'Center/dist_array' and co., you can trace back that the values come from 'LIDAR' (via the 'lidar_distance' node), which is a CARLA module. Right now it seems like 'Camera/image' and co. just come out of thin air. To solve this, I would just add "Camera" and four arrows to 'Center/image' and co.

## Overview Localization
This folder contains documentation of the perception components.

In the following diagram you can see all the nodes and topics used in the perception to show how they work together.
Collaborator

Suggested change
In the following diagram you can see all the nodes and topics used in the perception to show how they work together.
In the following diagram you can see all the nodes and topics used in the perception, to show how they work together.

## Object Detection / Distance and Segmentation

- [Vision Node](./vision_node.md)
- The Vision Node provides an adaptive interface that is able to perform object detection and/or image segmentation on multiple cameras at the same time.
Collaborator

What does "and/or" mean in this case? Are there restrictions on simultaneous execution?

Collaborator Author

I just copied this summary sentence from the "old" README file. The other nodes don't get a summary, so I wasn't sure if I should leave it in, but as the vision_node is quite important, I thought a short summary might be nice. I think some models that can be selected in the node can only do object detection or image segmentation while others can do both, right? I think that's why "and/or" was used.

Collaborator

As of now, the vision node is capable of providing object detection via the NN with YOLO/Ultralytics and image segmentation for the traffic lights, as far as I know. So I would write 'and'.
But at the same time, I would clarify that, in the current state, it is not possible to use multiple cameras (correct me if I'm wrong).


@asamluka asamluka left a comment


Just some more clarification needed.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (3)
doc/perception/README.md (3)

1-5: Minor grammatical improvements needed

Consider adding a comma after "perception components" to improve readability.

-This folder contains documentation of the perception components.
+This folder contains documentation of the perception components,


19-23: Improve grammar in localization overview

Replace "over" with "of" for better English usage.

-An overview over the different nodes working together
+An overview of the different nodes working together


33-33: Fix typo in unused files section

Correct the spelling of "since".

-- Not used scince paf22 and never successfully tested
++ Not used since paf22 and never successfully tested
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 18318b4 and b506f0e.

⛔ Files ignored due to path filters (1)
  • doc/assets/perception/perception_overview.png is excluded by !**/*.png
📒 Files selected for processing (1)
  • doc/perception/README.md (1 hunks)

🔇 Additional comments (2)
doc/perception/README.md (2)

37-37: LGTM!

The experiments section provides a clear reference to the detailed documentation.


5-7: Verify the diagram file and its rendering

Please ensure that:

  1. The diagram file exists at the specified path
  2. The diagram renders correctly in the documentation
✅ Verification successful

Diagram file exists and is properly placed

The perception overview diagram exists at the correct path (doc/assets/perception/perception_overview.png) and appears to be a valid PNG file with a size of 828,193 bytes. The relative path in the markdown file is correct, and the file is properly placed within the documentation assets structure.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the existence and accessibility of the perception overview diagram

# Check if the diagram file exists
if [ -f "doc/assets/perception/perception_overview.png" ]; then
    echo "Diagram file exists"
    # Check if it's a valid PNG file
    file "doc/assets/perception/perception_overview.png"
else
    echo "Diagram file not found"
fi

Length of output: 255


Script:

#!/bin/bash
# Let's try a different approach to verify the diagram path and check nearby files

# List files in the assets directory to verify the structure
echo "Checking assets directory structure:"
fd . "doc/assets"

# Check if the perception directory exists and list its contents
echo -e "\nChecking perception directory contents:"
ls -la doc/assets/perception/ 2>/dev/null || echo "Perception directory not found"

# Try to find any PNG files related to perception
echo -e "\nSearching for perception-related PNG files:"
fd -e png . | grep -i "perception"

Length of output: 10102


Comment on lines +11 to +13
- [Vision Node](./vision_node.md)
- The Vision Node provides an adaptive interface that is able to perform object detection and image segmentation on multiple cameras at the same time
(even though only one camera perceiving the front of the car is active at the moment).

@coderabbitai coderabbitai bot Dec 9, 2024


🛠️ Refactor suggestion

Clarify Vision Node capabilities

Based on previous discussions, the description should be more precise about the current capabilities:

-- The Vision Node provides an adaptive interface that is able to perform object detection and image segmentation on multiple cameras at the same time
-  (even though only one camera perceiving the front of the car is active at the moment).
++ The Vision Node provides an adaptive interface that can perform object detection and image segmentation
+  (currently configured for a single front-facing camera).

Collaborator

You basically just rephrased the sentence. The central meaning is the same.

Contributor

@asamluka Understood. Thank you for the feedback.


@asamluka asamluka left a comment


The README is now more meaningful and covers all relevant information about the perception.

Development

Successfully merging this pull request may close these issues.

[General]: Add an overview of the perception in the documentation
2 participants