
560 Feature: implement YOLOP lane detection model #583

Open
wants to merge 27 commits into main

Conversation

Lukasnol (Collaborator) commented on Dec 18, 2024

Description

Created a node for lane detection using the YOLOP model. It publishes a lane mask and a driveable-area mask.
Created documentation for the node.
Updated requirements.txt to include the necessary libraries.

Type of change

  • New feature (non-breaking change which adds functionality)

Most important changes

Added lane detection.

Summary by CodeRabbit

  • New Features

    • Introduced lane detection functionality with a new node in the perception system.
    • Added new properties and configurations for improved visualization in the RViz environment.
    • Documented the functionality of the new Lanedetection Node, detailing its inputs, processing steps, and outputs.
  • Bug Fixes

    • Adjusted various display panel settings for better user experience in RViz.
  • Chores

    • Updated dependencies in the requirements file to include new libraries.

Lukasnol and others added 20 commits November 18, 2024 13:56

Feature implement clrernet model for lane detection 2
Lukasnol linked an issue on Dec 18, 2024 that may be closed by this pull request

coderabbitai bot commented Dec 18, 2024

Walkthrough

This pull request introduces a comprehensive enhancement to the perception system's lane detection capabilities. The changes span multiple files, including a new RViz configuration, a perception launch file, a lane detection node implementation, updated requirements, and documentation. The primary focus is on implementing a YOLOP-based lane and driveable area detection system, with supporting configuration and dependency updates to enable this new functionality.

Changes

| File | Change Summary |
| --- | --- |
| code/agent/config/rviz_config.rviz | Significant modifications to RViz display settings, including window dimensions, view angles, and enabling/disabling various visualization components |
| code/perception/launch/perception.launch | Added a new lane detection node and a commented-out debug wrapper node |
| code/perception/src/Lanedetection_node.py | New implementation of a lane detection ROS node using the YOLOP model, with methods for image processing, lane detection, and data publishing |
| code/requirements.txt | Added new dependencies including yacs, matplotlib, Pillow, PyYAML, and prefetch_generator |
| doc/perception/Lanedetection_node.md | New documentation detailing the lane detection node's inputs, processing steps, and outputs |

Sequence Diagram

sequenceDiagram
    participant Camera as Camera Input
    participant LiDAR as LiDAR Distance Array
    participant LaneNode as Lanedetection Node
    participant Mask as Lane/Driveable Area Mask
    participant Marker as Visualization Marker

    Camera->>LaneNode: Send Image Message
    LiDAR->>LaneNode: Send Distance Array
    LaneNode->>LaneNode: Preprocess Image
    LaneNode->>LaneNode: Detect Lanes (YOLOP Model)
    LaneNode->>Mask: Publish Lane Mask
    LaneNode->>Mask: Publish Driveable Area Mask
    LaneNode->>Marker: Create Visualization Marker
    LaneNode->>Marker: Publish Marker Array
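To make the diagram concrete, here is a minimal, hedged sketch of the pipeline it describes. The `torch.hub` entry point and the three-way output unpacking match the snippets reviewed below; everything else (the 640x640 input size, the simplified normalization, the stand-in camera frame) is an illustrative assumption, not code from this PR:

```python
# Hedged sketch of the YOLOP lane / driveable-area pipeline; not verbatim PR code.
import numpy as np
import torch
import torch.nn.functional as F

def preprocess(image_bgr: np.ndarray, size=(640, 640)) -> torch.Tensor:
    # HWC uint8 -> NCHW float in [0, 1]; YOLOP's real transform also applies
    # mean/std normalization, omitted here for brevity.
    img = torch.from_numpy(image_bgr).float().permute(2, 0, 1).unsqueeze(0)
    return F.interpolate(img / 255.0, size=size, mode="bilinear", align_corners=False)

def mask_from_seg(seg_out: torch.Tensor, out_hw) -> np.ndarray:
    # seg_out: (1, 2, h, w) logits; channel argmax yields a binary mask,
    # resized back to the original image resolution.
    seg = F.interpolate(seg_out, size=out_hw, mode="bilinear", align_corners=False)
    return seg.argmax(dim=1).squeeze(0).to(torch.uint8).cpu().numpy()

model = torch.hub.load("hustvl/yolop", "yolop", pretrained=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device).eval()

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a camera frame
with torch.no_grad():
    _, da_seg_out, ll_seg_out = model(preprocess(frame).to(device))
lane_mask = mask_from_seg(ll_seg_out, frame.shape[:2])
driveable_mask = mask_from_seg(da_seg_out, frame.shape[:2])
```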

Poem

🚗 A Rabbit's Lane Detection Delight 🛣️

With YOLOP's magic and sensors so bright,
Lanes emerge from pixels, a computational might!
Markers dancing, masks unfurled,
Our perception system conquers the world!
Hop along, technology's gleaming knight! 🐰✨



coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (8)
code/perception/src/Lanedetection_node.py (2)

5-19: Remove unused imports to clean up the code.

Static analysis indicates the following imports are unused and should be removed or commented out if they are intended for future use:
• rospy
• DBSCAN (from sklearn.cluster)
• StandardScaler (from sklearn.preprocessing)
• Marker, MarkerArray (from visualization_msgs.msg)

- import rospy
...
- from sklearn.cluster import DBSCAN
- from sklearn.preprocessing import StandardScaler
- from visualization_msgs.msg import Marker, MarkerArray

35-36: Consider adding a CPU fallback.

The node currently assumes a CUDA device. It may fail on systems without GPUs. Consider adding a CPU fallback or providing a parameter to allow CPU usage when GPU is unavailable.

code/agent/config/rviz_config.rviz (1)

396-396: Correct the display name spelling.

The key "VisonNode Output" seems to be missing an 'i' (“Vison” → “Vision”).

- Name: VisonNode Output
+ Name: VisionNode Output
code/perception/src/lane_position.py (2)

8-8: Remove unused NumPy import.

The NumPy import isn’t used anywhere in the file. Clean up or comment it out if it will be used later.

- import numpy as np


36-37: Harmonize node naming.

The node is initialized with "Lanedetection_node" but the class is named "lane_position". Consider renaming for clarity and consistency, or confirm that using different names is intentional.

doc/perception/Lanedetection_node.md (2)

3-3: Fix typo in overview section

The word "briev" should be "brief".

-This Document gives a briev overview over the Lanedetection Node.
+This Document gives a brief overview of the Lanedetection Node.

62-64: Fix formatting in outputs section

There's an extra bullet point in the MarkerArray description.

3. **visualization_marker_array_lane**
    - **Type**: MarkerArray
-    - - **Description**: Contains all Lidar points that collide with the lane mask.
+    - **Description**: Contains all Lidar points that collide with the lane mask.
code/perception/launch/perception.launch (1)

87-109: Consider documenting node dependencies and timing requirements

The launch file would benefit from:

  1. Comments describing the expected execution order of the perception nodes
  2. Documentation of topic dependencies between Lanedetection and lane_position nodes
  3. Any timing requirements or synchronization needs

This will help future maintainers understand the perception pipeline's architecture.
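A hedged sketch of what such header comments might look like in the launch file; the wording and the ordering claims are illustrative, derived from the node descriptions elsewhere in this PR:

```xml
<!-- Perception pipeline (illustrative documentation sketch):
     Lanedetection subscribes to the Center camera image and the LiDAR
     distance array, then publishes lane and driveable-area masks.
     lane_position consumes those masks, so it should start alongside
     Lanedetection and tolerate masks that arrive late. -->
<node pkg="perception" type="Lanedetection_node.py" name="Lanedetection" output="screen">
    <param name="role_name" value="$(arg role_name)" />
</node>
```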

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between eabc86e and 72c0641.

⛔ Files ignored due to path filters (1)
  • doc/assets/perception/Overview_lanedetection_node.jpg is excluded by !**/*.jpg
📒 Files selected for processing (6)
  • code/agent/config/rviz_config.rviz (5 hunks)
  • code/perception/launch/perception.launch (1 hunks)
  • code/perception/src/Lanedetection_node.py (1 hunks)
  • code/perception/src/lane_position.py (1 hunks)
  • code/requirements.txt (1 hunks)
  • doc/perception/Lanedetection_node.md (1 hunks)
🧰 Additional context used
🪛 LanguageTool
doc/perception/Lanedetection_node.md

[uncategorized] ~4-~4: This verb may not be in the correct form. Consider using a different form for this context.
Context: ...and driveable Area in a given Image and publish the results. --- ## Lanedetection Nod...

(AI_EN_LECTOR_REPLACEMENT_VERB_FORM)

🪛 Ruff (0.8.2)
code/perception/src/lane_position.py

8-8: numpy imported but unused

Remove unused import: numpy

(F401)

code/perception/src/Lanedetection_node.py

5-5: rospy imported but unused

Remove unused import: rospy

(F401)


17-17: sklearn.cluster.DBSCAN imported but unused

Remove unused import: sklearn.cluster.DBSCAN

(F401)


18-18: sklearn.preprocessing.StandardScaler imported but unused

Remove unused import: sklearn.preprocessing.StandardScaler

(F401)


19-19: visualization_msgs.msg.Marker imported but unused

Remove unused import

(F401)


19-19: visualization_msgs.msg.MarkerArray imported but unused

Remove unused import

(F401)

🔇 Additional comments (4)
code/perception/src/Lanedetection_node.py (1)

138-138: Verify usage of self.dist_arrays.

The code sets self.dist_arrays but does not appear to use it elsewhere. Confirm if this data is needed, or remove it to reduce memory usage.

✅ Verification successful


self.dist_arrays is actively used in distance calculations

The attribute is used in vision_node.py to store depth information from LiDAR and is later accessed to calculate distances for detected objects. Specifically:

  • It's checked for availability (if self.dist_arrays is None)
  • Used to extract distance measurements for detected objects using array slicing
  • The assignment in Lanedetection_node.py appears to be inherited behavior from vision_node.py
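Purely to illustrate the slicing pattern mentioned above, a hypothetical example; the array shape, the zero-fill convention, and the box coordinates are assumptions, not taken from vision_node.py:

```python
import numpy as np

# Hypothetical depth image aligned with the camera frame: one (x, y, z)
# LiDAR point per pixel, zeros where no point was projected.
dist_arrays = np.zeros((720, 1280, 3), dtype=np.float32)

# Slice the region under a detected bounding box and keep valid points.
x1, y1, x2, y2 = 400, 300, 600, 500           # hypothetical detection box
roi = dist_arrays[y1:y2, x1:x2, :]
valid = roi[np.any(roi != 0.0, axis=2)]       # drop empty entries
if valid.size:
    distance = float(np.min(np.linalg.norm(valid, axis=1)))
```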
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for references to self.dist_arrays to ensure it is indeed unused
rg --context 3 $'self\\.dist_arrays'

Length of output: 2368

doc/perception/Lanedetection_node.md (2)

13-22: LGTM: Input section is well-documented

The input section clearly describes both required inputs with their types and purposes.


52-64: LGTM: Output section is comprehensive

The output section clearly describes all three outputs with their types and purposes, aligning well with the processing steps.

code/perception/launch/perception.launch (1)

106-108: LGTM: Lane position node configuration

The lane_position node is properly configured and follows the same pattern as other perception nodes in the launch file.

Comment on lines 20 to 30
tqdm
yacs
Cython
matplotlib>=3.2.2
Pillow
PyYAML>=5.3
tensorboardX
seaborn
prefetch_generator
imageio

💡 Codebase verification

Several dependencies lack version constraints, posing potential stability and security risks

The following dependencies need version pinning or minimum version constraints:

  • tqdm: No version constraint
  • yacs: No version constraint
  • Cython: No version constraint
  • Pillow: No version constraint (critical for security)
  • tensorboardX: No version constraint
  • seaborn: No version constraint
  • prefetch_generator: No version constraint
  • imageio: No version constraint

While some dependencies have proper constraints (matplotlib>=3.2.2, PyYAML>=5.3), the others should be similarly constrained to ensure reproducible builds and avoid potential compatibility issues or security vulnerabilities. Pillow, in particular, frequently has security updates and should be version-pinned.
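Illustratively, constrained entries might look like the following; the version numbers are placeholders showing the pattern, not vetted pins, so check current releases and advisories (especially for Pillow) before adopting them:

```text
tqdm>=4.64
yacs>=0.1.8
Cython>=0.29
Pillow>=9.5        # placeholder; pin to a release with current security fixes
tensorboardX>=2.5
seaborn>=0.12
prefetch_generator>=1.0
imageio>=2.25
```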

🔗 Analysis chain

Confirm pinned versions or security advisories for newly added dependencies.

While adding new dependencies is necessary for the YOLOP-based feature, ensure that pinned versions (or at least version lower bounds) are established for production stability and to avoid potential incompatibilities. Check for any known security advisories.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Example checks for newly added dependencies
rg -A 3 "tqdm|yacs|Cython|matplotlib|Pillow|PyYAML|tensorboardX|seaborn|prefetch_generator|imageio" requirements.txt

Length of output: 175


Script:

#!/bin/bash
# Let's first find the requirements file as it might be in a different location
fd requirements.txt

# Then check if there are any version constraints in the found files
fd requirements.txt --exec cat {}

Length of output: 714

class lane_position(CompatibleNode):

    def __init__(self, name, **kwargs):
        super().__init(name, **kwargs)

⚠️ Potential issue

Fix the call to the superclass constructor.

Typing error: the superclass call is missing the trailing double underscores. The correct method name is __init__(), not __init().

- super().__init(name, **kwargs)
+ super().__init__(name, **kwargs)

Comment on lines 90 to 97
<node pkg="perception" type="debug_wrapper.py" name="Lanedetection" output="screen" args="--debug_node=Lanedetection_node.py --debug_port=53000 --debug_wait">
    <param name="role_name" value="$(arg role_name)" />
</node>
<!--
<node pkg="perception" type="Lanedetection_node.py" name="Lanedetection" output="screen">
    <param name="role_name" value="$(arg role_name)" />
</node>
-->

💡 Codebase verification

Port 53000 conflicts with lidar_distance node's debug configuration

The debug port 53000 is already used by the lidar_distance node as shown in the documentation example. Please change the Lanedetection node's debug port to a different value in the range 53000-53100 to avoid conflicts.

  • Update debug_port=53000 to a different value (e.g., debug_port=53001) in code/perception/launch/perception.launch
🔗 Analysis chain

Verify debug configuration for Lanedetection node

The debug wrapper configuration looks good, but please verify:

  1. That port 53000 doesn't conflict with other debug nodes
  2. The debug wait flag won't impact the node's startup sequence

Let me check for other debug ports in use to ensure there are no potential conflicts.


Let me try a different approach to find debug port configurations.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for port conflicts in other launch files
rg -l "debug_port=53000" | grep "\.launch$"

Length of output: 84


Script:

#!/bin/bash
# Search for all debug port configurations in launch files
rg "debug_port=\d+" --type launch -A 2 -B 2

Length of output: 76


Script:

#!/bin/bash
# Search for all debug port configurations in any file
rg "debug_port=" -A 2 -B 2

Length of output: 1445

@una-auxme una-auxme deleted a comment from coderabbitai bot Dec 18, 2024

coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (5)
code/perception/src/Lanedetection_node.py (5)

18-24: Enhance class documentation with more details

The class documentation should include:

  • Input topics and their message types
  • Output topics and their message types
  • Configuration parameters (e.g., role_name)
  • Performance characteristics and hardware requirements (CUDA)

88-90: Fix incorrect topic documentation

The documentation for driveable area publisher shows incorrect topic path:

  • Documentation shows: /Center/lane_mask
  • Actual implementation uses: /paf/{role_name}/Center/driveable_area

57-62: Document QoS profile choice and consider making it configurable

The QoS profile is hardcoded to 1 across all publishers and subscribers without explanation. Consider:

  1. Documenting why QoS=1 was chosen
  2. Making it configurable through ROS parameters

Also applies to: 70-75, 81-85, 91-95
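For the second point, a minimal sketch of a configurable depth, written against plain rclpy rather than the project's CompatibleNode wrapper; the parameter and topic names are hypothetical:

```python
from rclpy.node import Node
from sensor_msgs.msg import Image

class LanedetectionSketch(Node):
    def __init__(self):
        super().__init__("lanedetection_sketch")
        # Declare the depth once and reuse it for every publisher and
        # subscriber; "qos_depth" is a hypothetical parameter name.
        qos_depth = self.declare_parameter("qos_depth", 1).value
        self.mask_pub = self.create_publisher(Image, "lane_mask", qos_depth)
        self.image_sub = self.create_subscription(
            Image, "camera_image", self.image_callback, qos_depth
        )

    def image_callback(self, msg: Image) -> None:
        pass  # processing omitted in this sketch
```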


220-224: Use ROS logging instead of print statements

Replace print statement with ROS logger:

-    print("Lanedetection_node started")
+    node.get_logger().info("Lanedetection_node started")

1-224: Consider adding performance monitoring

For a computationally intensive node like this, consider:

  1. Adding performance metrics (processing time, GPU memory usage)
  2. Publishing these metrics to ROS topics for monitoring
  3. Implementing adaptive processing based on system load

Example metrics to track:

  • Image processing pipeline latency
  • GPU memory usage
  • Queue sizes for subscribers

Would you like me to provide an implementation example for performance monitoring?
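One possible shape for the first two points is a lightweight probe around the inference call; this is a hedged sketch in which the Float32 latency topic and the helper structure are assumptions, not part of this PR:

```python
import time

import torch
from std_msgs.msg import Float32

def timed_inference(node, model, tensor, latency_pub):
    # Wrap the forward pass with a latency / GPU-memory probe and publish
    # the latency so it can be monitored from ROS tooling.
    start = time.perf_counter()
    with torch.no_grad():
        outputs = model(tensor)
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # wait for GPU work before reading the clock
        mem_mb = torch.cuda.memory_allocated() / 1e6
        node.get_logger().debug(f"GPU memory allocated: {mem_mb:.1f} MB")
    latency_ms = (time.perf_counter() - start) * 1000.0
    latency_pub.publish(Float32(data=latency_ms))
    return outputs
```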

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 72c0641 and 2bba06b.

📒 Files selected for processing (3)
  • code/perception/launch/perception.launch (1 hunks)
  • code/perception/src/Lanedetection_node.py (1 hunks)
  • code/requirements.txt (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • code/requirements.txt
  • code/perception/launch/perception.launch

Comment on lines +103 to +104
if self.device == "cuda":
    torch.cuda.empty_cache()

⚠️ Potential issue

Fix CUDA memory management condition

The CUDA memory management condition is incorrect:

-        if self.device == "cuda":
+        if self.device.type == "cuda":

Comment on lines +30 to +33
self.model = torch.hub.load("hustvl/yolop", "yolop", pretrained=True)

self.device = torch.device("cuda")
self.model.to(self.device)

⚠️ Potential issue

Add error handling for model loading and CUDA availability

The model loading and CUDA setup need proper error handling:

  1. Model loading could fail due to network issues
  2. CUDA might not be available on all systems

Consider this implementation:

-        self.model = torch.hub.load("hustvl/yolop", "yolop", pretrained=True)
-        self.device = torch.device("cuda")
-        self.model.to(self.device)
+        try:
+            self.model = torch.hub.load("hustvl/yolop", "yolop", pretrained=True)
+            self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+            self.model.to(self.device)
+            self.get_logger().info(f"Model loaded successfully on {self.device}")
+        except Exception as e:
+            self.get_logger().error(f"Failed to load model: {str(e)}")
+            raise

Comment on lines +202 to +203
lane_threshold = np.mean(ll_seg_resized) + 0.07
driveable_area_threshold = np.mean(da_seg_resized) + 0.07

🛠️ Refactor suggestion

Extract magic numbers into configurable parameters

The threshold calculations use hardcoded values:

-        lane_threshold = np.mean(ll_seg_resized) + 0.07
-        driveable_area_threshold = np.mean(da_seg_resized) + 0.07
+        self.declare_parameter('lane_threshold_offset', 0.07)
+        self.declare_parameter('driveable_area_threshold_offset', 0.07)
+        lane_threshold = np.mean(ll_seg_resized) + self.get_parameter('lane_threshold_offset').value
+        driveable_area_threshold = np.mean(da_seg_resized) + self.get_parameter('driveable_area_threshold_offset').value

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +106 to +111
image, original_image = self.preprocess_image(ImageMsg)
self.original_h, self.original_w, _ = original_image.shape
with torch.no_grad():
    image = image.to(self.device)
    _, da_seg_out, ll_seg_out = self.detect_lanes(image)


⚠️ Potential issue

Add error handling for image processing pipeline

The image processing pipeline lacks error handling for potential failures:

  1. Image preprocessing could fail
  2. Model inference could raise exceptions

Consider this implementation:

-        image, original_image = self.preprocess_image(ImageMsg)
-        self.original_h, self.original_w, _ = original_image.shape
-        with torch.no_grad():
-            image = image.to(self.device)
-            _, da_seg_out, ll_seg_out = self.detect_lanes(image)
+        try:
+            image, original_image = self.preprocess_image(ImageMsg)
+            self.original_h, self.original_w, _ = original_image.shape
+            with torch.no_grad():
+                image = image.to(self.device)
+                _, da_seg_out, ll_seg_out = self.detect_lanes(image)
+        except Exception as e:
+            self.get_logger().error(f"Failed to process image: {str(e)}")
+            return

Development

Successfully merging this pull request may close these issues.

[Feature]: implement YOLOP lane detection model