
540 feature visualization of radar data #559

Merged
9 commits merged into main on Dec 12, 2024

Conversation


@Ralf524 Ralf524 commented Dec 11, 2024

Description

The clustered radar data can be visualized as colored point clouds, 2D bounding boxes, or 3D bounding boxes.

Fixes #540

Type of change


  • New feature (non-breaking change which adds functionality)

Does this PR introduce a breaking change?

No

Most important changes

Which files/functionalities are most important in this PR? On which parts should the reviewer focus?
radar_node.py

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works (might be obsolete with CI later on)
  • New and existing unit tests pass locally with my changes (might be obsolete with CI later on)

Summary by CodeRabbit

  • New Features

    • Enhanced radar data processing and visualization capabilities.
    • Introduced filtering and clustering functionalities for radar data.
    • Added new methods for generating bounding boxes and markers.
    • Updated radar data output to include "Velocity" field.
    • Added new publishers for visualization and cluster information.
  • Bug Fixes

    • Updated data handling to improve accuracy in radar data representation.
  • Chores

    • Added new dependency pyopenssl to the project requirements.

@Ralf524 Ralf524 linked an issue Dec 11, 2024 that may be closed by this pull request

coderabbitai bot commented Dec 11, 2024

Walkthrough

This pull request introduces significant enhancements to the RadarNode class within radar_node.py, focusing on improved processing and visualization of radar data. Key changes include a restructured callback method that integrates new data processing steps, such as filtering and clustering using DBSCAN. Several new utility functions are added to enhance data visualization, including bounding box generation and marker management. Additionally, the requirements.txt file is updated to include a new dependency, pyopenssl, while confirming the version of debugpy.
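
For orientation, here is a minimal standalone sketch of the filter-then-cluster step described above (the data and thresholds are illustrative assumptions, not the literal radar_node.py code):

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    # Illustrative radar points as rows of [x, y, z, velocity].
    points = np.random.rand(100, 4) * 10

    # Spatial filter, e.g. drop points below the radar mount (min_z is an assumption).
    points = points[points[:, 2] > -0.6]

    # Scale features, then cluster; DBSCAN marks noise points with label -1.
    scaled = StandardScaler().fit_transform(points)
    labels = DBSCAN(eps=0.8, min_samples=3).fit(scaled).labels_
    print(np.unique(labels))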

Changes

File: code/perception/src/radar_node.py
  • Restructured callback method for radar data processing.
  • Added methods: filter_data, cluster_data, generate_bounding_boxes, create_bounding_box_marker, clear_old_markers, generate_cluster_info.
  • Updated pointcloud2_to_array to return "Velocity".
  • Added new publishers for visualization and cluster info.

File: code/requirements.txt
  • Confirmed debugpy version as 1.8.7.
  • Added new dependency: pyopenssl==24.3.0.

Possibly related PRs

Suggested labels

perception

Suggested reviewers

  • michalal7
  • Lukasnol

Poem

In the radar's gentle sweep,
Data dances, secrets keep.
With bounding boxes, clear and bright,
Clusters form in the soft moonlight.
A new dependency joins the fray,
Enhancing our code in a joyful way! 🐇✨




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Outside diff range and nitpick comments (4)
code/perception/src/radar_node.py (4)

192-230: Complete the docstring for create_pointcloud2 function.

The docstring for create_pointcloud2 is incomplete and contains placeholders like _summary_. Please provide a complete description of the function, its parameters, and its return value.

Apply this diff to update the docstring:

 def create_pointcloud2(clustered_points, cluster_labels):
-    """_summary_
+    """
+    Creates a `PointCloud2` message from clustered points with assigned colors.

     Args:
         clustered_points (np.ndarray): Array of clustered points with shape (N, 4).
         cluster_labels (np.ndarray): Array of cluster labels corresponding to each point.

     Returns:
         PointCloud2: A ROS `PointCloud2` message that can be published for visualization.
     """

232-249: Remove unused transform_data_to_2d function or implement it fully.

The function transform_data_to_2d seems unused in the code and has an incomplete docstring. If it's unnecessary, consider removing it to clean up the codebase. If it will be used later, complete its implementation and documentation.


251-287: Clean up commented-out code in calculate_aabb.

The function calculate_aabb contains commented-out code for 2D bounding boxes. To improve readability, consider removing unused code or refactoring it if needed elsewhere.
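
For reference, an axis-aligned bounding box reduces to a per-axis min/max over the cluster's points. A minimal sketch (hypothetical helper, not the PR's calculate_aabb):

    import numpy as np

    def aabb(cluster_points: np.ndarray):
        # Min and max corners over the x, y, z columns of an (N, 4) array.
        return cluster_points[:, :3].min(axis=0), cluster_points[:, :3].max(axis=0)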


489-499: Ensure consistent handling of noise points (label -1) in generate_cluster_info.

In generate_cluster_info, you process clusters including those with label -1, which typically represent noise in DBSCAN. However, elsewhere in the code, label -1 is skipped. For consistency, decide whether to include or exclude noise points in cluster information.
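
One consistent option is to drop the noise label once, up front. A short sketch (assumes a fitted DBSCAN object, as elsewhere in this PR):

    # Build cluster info only for real clusters; DBSCAN marks noise with label -1.
    valid_labels = set(clusters.labels_) - {-1}
    cluster_sizes = {int(lbl): int((clusters.labels_ == lbl).sum()) for lbl in valid_labels}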

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 8ce1f52 and 2799448.

📒 Files selected for processing (2)
  • code/perception/src/radar_node.py (4 hunks)
  • code/requirements.txt (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • code/requirements.txt
🔇 Additional comments (2)
code/perception/src/radar_node.py (2)

175-178: 🛠️ Refactor suggestion

Avoid returning None in cluster_data when data is empty.

Returning None can lead to AttributeError when the caller tries to access attributes like labels_. Consider returning an empty DBSCAN object or handling the empty case separately.

Apply this diff to handle empty data:

     if len(data) == 0:
-        return None
+        clustered_points = DBSCAN()
+        clustered_points.labels_ = np.array([])
+        return clustered_points
     scaler = StandardScaler()
     data_scaled = scaler.fit_transform(data)
     clustered_points = DBSCAN(eps=eps, min_samples=min_samples).fit(data_scaled)

Likely invalid or redundant comment.


120-121: ⚠️ Potential issue

Verify the presence of the 'Velocity' field in pointcloud2_to_array.

The function pointcloud2_to_array assumes that the incoming PointCloud2 messages contain a 'Velocity' field. If this field is missing, it will raise a KeyError.

Run the following script to check if all PointCloud2 messages have the 'Velocity' field:

✅ Verification successful

Based on the documentation in radar_node.md, which explicitly lists 'Velocity' as one of the radar data output fields for each detected point, and the CARLA radar sensor configuration in dev_objects.json, I can now generate the final response:

The 'Velocity' field is a standard output of the CARLA radar sensor.

The radar node documentation explicitly confirms that 'Velocity' is one of the standard data outputs for each detected point from the radar sensor. This is consistent with the CARLA simulator's radar sensor implementation, which provides velocity measurements for detected objects.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that all PointCloud2 messages contain the 'Velocity' field.

# Test: Search for 'PointField' definitions in the message files. Expect: 'Velocity' field exists.
fd -e msg 'PointCloud2' | xargs cat | rg 'PointField.*Velocity'

# Alternatively, check sample messages for the 'Velocity' field.
# Note: Replace '/path/to/your/pointcloud_data' with the actual data directory.
rg 'Velocity' /path/to/your/pointcloud_data/*.pcd

Length of output: 191


Script:

#!/bin/bash
# Let's analyze the codebase more thoroughly

# First, let's find the implementation of pointcloud2_to_array
rg -l "pointcloud2_to_array" 

# Then, let's look for any PointCloud2 message definitions or field specifications
rg -l "PointCloud2"

# Let's also check for any radar-related message definitions that might contain velocity
fd -e msg . | xargs cat

# Let's examine the radar node implementation for context
cat code/perception/src/radar_node.py

Length of output: 18329


Script:

#!/bin/bash
# Let's check the CARLA radar sensor configuration and message definitions
rg -A 5 "RADAR" code/perception/src/radar_node.py

# Let's also check if there's any documentation about the radar data format
fd -e md . | xargs rg -l "radar"

# Let's examine any radar-related configuration files
fd -e yaml -e json . | xargs rg -l "radar"

Length of output: 1545


Script:

#!/bin/bash
# Let's check the radar node documentation and configuration
cat doc/perception/radar_node.md

# Let's also examine the radar implementation in CARLA documentation
cat code/agent/config/dev_objects.json

# Let's look for any velocity-related fields in the codebase
rg -B 2 -A 2 "Velocity" code/perception/src/radar_node.py

Length of output: 7370

Comment on lines 460 to 463
for marker in marker_array.markers:
    if marker.id >= max_id:
        marker.action = Marker.DELETE
return marker_array

⚠️ Potential issue

Correct the marker deletion logic in clear_old_markers.

The current logic in clear_old_markers may not delete old markers correctly. The condition if marker.id >= max_id: should be adjusted to delete markers with IDs greater than max_id.

Apply this diff to fix the condition:

     for marker in marker_array.markers:
-        if marker.id >= max_id:
+        if marker.id > max_id:
             marker.action = Marker.DELETE
     return marker_array

Comment on lines +214 to +218
if label == -1:
    r, g, b = 128, 128, 128
else:
    r, g, b = colors[label]


⚠️ Potential issue

Handle potential IndexError when assigning colors in create_pointcloud2.

In the create_pointcloud2 function, the line r, g, b = colors[label] may raise an IndexError if label exceeds the length of the colors array, especially if labels are not sequential or start from -1.

Apply this diff to ensure labels map correctly to colors:

         unique_labels = np.unique(cluster_labels)
         colors = generate_color_map(len(unique_labels))
+        label_color_map = {label: colors[i] for i, label in enumerate(unique_labels) if label != -1}

         for i, point in enumerate(clustered_points):
             x, y, z, v = point
             label = cluster_labels[i]

             if label == -1:
                 r, g, b = 128, 128, 128
             else:
-                r, g, b = colors[label]
+                r, g, b = label_color_map.get(label, (255, 255, 255))  # Default to white if label not in map

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines 162 to 174
def cluster_data(data, eps=0.8, min_samples=3):
    """_summary_

    Args:
        data (np.ndarray): data array which should be clustered
        eps (float, optional): maximum distance of points. Defaults to 0.8.
        min_samples (int, optional): min samples for 1 cluster. Defaults to 3.

    Returns:
        dict: A dictionary where the keys are cluster labels (int) and the values
              are the number of points in each cluster. Returns an empty dictionary
              if no points are available.
    """

⚠️ Potential issue

Complete the docstring and correct the return type in cluster_data.

The docstring for the cluster_data function is incomplete and contains placeholders like _summary_. Additionally, the stated return type is incorrect; the function returns a DBSCAN object, not a dictionary.

Apply this diff to update the docstring:

 def cluster_data(data, eps=0.8, min_samples=3):
-    """_summary_
+    """
+    Clusters the radar data using the DBSCAN algorithm.

     Args:
         data (np.ndarray): Data array to be clustered.
         eps (float, optional): Maximum distance between points to be considered in the same neighborhood. Defaults to 0.8.
         min_samples (int, optional): Minimum number of samples required to form a dense region (cluster). Defaults to 3.

     Returns:
-        dict: A dictionary where the keys are cluster labels (int) and the values
-              are the number of points in each cluster. Returns an empty dictionary
-              if no points are available.
+        DBSCAN: A DBSCAN clustering object containing labels and core sample indices.
     """

     if len(data) == 0:
-        return {}
+        return None
     scaler = StandardScaler()
     data_scaled = scaler.fit_transform(data)
     clustered_points = DBSCAN(eps=eps, min_samples=min_samples).fit(data_scaled)

     return clustered_points

Comment on lines 124 to 147
def filter_data(data, min_x=-100, max_x=100, min_y=-100, max_y=100, min_z=-1, max_z=100,
                max_distance=100):
    """
    Filters and clusters points from a ROS PointCloud2 message based on DBSCAN
    clustering.
    Filters radar data based on specified spatial and distance constraints.

    Parameters:
    - pointcloud_msg: sensor_msgs/PointCloud2
        The ROS PointCloud2 message containing the 3D points.
    - max_distance: float
        Maximum distance to consider points. Points beyond this distance are
        discarded.
    - eps: float, optional (default: 1.0)
        The maximum distance between two points for them to be considered in
        the same cluster.
    - min_samples: int, optional (default: 2)
        The minimum number of points required to form a cluster.
    This function applies multiple filtering criteria to the input radar data.
    Points outside these bounds are excluded from the output.

    Args:
        data (np.ndarray): A 2D numpy array containing radar data, where each row
            represents a data point with the format [x, y, z, distance]. The array
            shape is (N, 4), where N is the number of points.
        min_x (float, optional): Minimum value for the x-coordinate. Default is -1.
        max_x (float, optional): Maximum value for the x-coordinate. Default is 1.
        min_y (float, optional): Minimum value for the y-coordinate. Default is 1.
        max_y (float, optional): Maximum value for the y-coordinate. Default is 1.
        min_z (float, optional): Minimum value for the z-coordinate. Default is -0.7.
        max_z (float, optional): Maximum value for the z-coordinate. Default is 1.3.
        max_distance (float, optional): Maximum allowable distance of the point from
            the sensor. Default is 100.

    Returns:
    - dict
        A dictionary where the keys are cluster labels (int) and the values
        are the number of points in each cluster. Returns an empty dictionary
        if no points are available.
    np.ndarray: A numpy array containing only the filtered data points that meet
        the specified criteria.

⚠️ Potential issue

Fix inconsistent default parameter values in filter_data docstring.

The default parameter values stated in the docstring of filter_data do not match those in the function signature. For example, min_x defaults to -100 in the function, but the docstring mentions -1.

Apply this diff to correct the docstring:

 def filter_data(data, min_x=-100, max_x=100, min_y=-100, max_y=100, min_z=-1, max_z=100,
                 max_distance=100):
     """
     Filters radar data based on specified spatial and distance constraints.

     This function applies multiple filtering criteria to the input radar data.
     Points outside these bounds are excluded from the output.

     Args:
         data (np.ndarray): A 2D numpy array containing radar data, where each row 
             represents a data point with the format [x, y, z, Velocity]. The array 
             shape is (N, 4), where N is the number of points.
-        min_x (float, optional): Minimum value for the x-coordinate. Default is -1.
+        min_x (float, optional): Minimum value for the x-coordinate. Default is -100.
-        max_x (float, optional): Maximum value for the x-coordinate. Default is 1.
+        max_x (float, optional): Maximum value for the x-coordinate. Default is 100.
-        min_y (float, optional): Minimum value for the y-coordinate. Default is 1.
+        min_y (float, optional): Minimum value for the y-coordinate. Default is -100.
-        max_y (float, optional): Maximum value for the y-coordinate. Default is 1.
+        max_y (float, optional): Maximum value for the y-coordinate. Default is 100.
-        min_z (float, optional): Minimum value for the z-coordinate. Default is -0.7.
+        min_z (float, optional): Minimum value for the z-coordinate. Default is -1.
-        max_z (float, optional): Maximum value for the z-coordinate. Default is 1.3.
+        max_z (float, optional): Maximum value for the z-coordinate. Default is 100.
         max_distance (float, optional): Maximum allowable distance of the point from
             the sensor. Default is 100.

     Returns:
         np.ndarray: A numpy array containing only the filtered data points that meet 
             the specified criteria.
     """

Comment on lines 467 to 501
def generate_cluster_info(clusters, data, marker_array, bounding_boxes):
    """
    Generates information about clusters, including the label, number of points,
    markers, and bounding boxes.

    Args:
        clusters (DBSCAN): The clustered data, containing the labels for each point.
        data (numpy.ndarray):
            The point cloud data, typically with columns [x, y, z, distance].
        marker_array (MarkerArray):
            The array of RViz markers associated with the clusters.
        bounding_boxes (list): The list of bounding boxes for each detected object.

    Returns:
        str: A JSON string containing the information about each cluster, including:
            - "label": The cluster label.
            - "points_count": The number of points in the cluster.
            - "Anzahl marker": The number of markers in the MarkerArray.
            - "Anzahl Boundingboxen": The number of bounding boxes.
    """
    cluster_info = []

    for label in set(clusters.labels_):
        cluster_points = data[clusters.labels_ == label]
        cluster_size = len(cluster_points)
        if label != -1:
            cluster_info.append({
                "label": int(label),
                "points_count": cluster_size,
                "Anzahl marker": len(marker_array.markers),
                "Anzahl Boundingboxen": len(bounding_boxes)
            })

    return json.dumps(cluster_info)


🛠️ Refactor suggestion

Standardize JSON keys to English in generate_cluster_info.

The function generate_cluster_info uses mixed languages in the JSON keys, such as "Anzahl marker" and "Anzahl Boundingboxen". For consistency and clarity, please standardize all keys to English.

Apply this diff to update the JSON keys:

             cluster_info.append({
                 "label": int(label),
                 "points_count": cluster_size,
-                "Anzahl marker": len(marker_array.markers),
-                "Anzahl Boundingboxen": len(bounding_boxes)
+                "num_markers": len(marker_array.markers),
+                "num_bounding_boxes": len(bounding_boxes)
             })


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (1)
code/perception/src/radar_node.py (1)

242-248: Simplify the data transformation logic.

The point transformation can be simplified by using array slicing instead of individual assignments.

Apply this diff to simplify the transformation:

-    transformed_points = clustered_data
-    transformed_points[:, 0] = clustered_data[:, 0]
-    transformed_points[:, 1] = clustered_data[:, 1]
-    transformed_points[:, 2] = 0
-    transformed_points[:, 3] = clustered_data[:, 3]
+    transformed_points = clustered_data.copy()
+    transformed_points[:, 2] = 0
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 2799448 and 57e720d.

📒 Files selected for processing (1)
  • code/perception/src/radar_node.py (4 hunks)
🔇 Additional comments (4)
code/perception/src/radar_node.py (4)

124-159: Fix inconsistent default parameter values in docstring.

The default parameter values in the docstring don't match the function signature.


162-181: Complete the docstring and correct the return type.

The docstring is incomplete and the return type is incorrect. The function returns a DBSCAN object, not a dictionary.


214-218: Handle potential IndexError when assigning colors.

The color assignment could raise an IndexError if the label exceeds the length of the colors array.


467-501: Standardize JSON keys to English.

The function uses mixed languages in the JSON keys.

Comment on lines 75 to 95
self.visualization_radar_publisher = rospy.Publisher(
    rospy.get_param(
        "~image_distance_topic", "/paf/hero/Radar/Visualization"
    ),
    PointCloud2,
    queue_size=10,
)
self.marker_visualization_radar_publisher = rospy.Publisher(
    rospy.get_param(
        "~image_distance_topic", "/paf/hero/Radar/Marker"
    ),
    MarkerArray,
    queue_size=10,
)
self.cluster_info_radar_publisher = rospy.Publisher(
    rospy.get_param(
        "~image_distance_topic", "/paf/hero/Radar/ClusterInfo"
    ),
    String,
    queue_size=10,
)

⚠️ Potential issue

Update parameter names for different publisher topics.

All publishers are using the same parameter name "~image_distance_topic" but publishing to different topics. This could lead to confusion and potential misconfiguration.

Apply this diff to use distinct parameter names:

         self.visualization_radar_publisher = rospy.Publisher(
             rospy.get_param(
-                "~image_distance_topic", "/paf/hero/Radar/Visualization"
+                "~visualization_topic", "/paf/hero/Radar/Visualization"
             ),
             PointCloud2,
             queue_size=10,
         )
         self.marker_visualization_radar_publisher = rospy.Publisher(
             rospy.get_param(
-                "~image_distance_topic", "/paf/hero/Radar/Marker"
+                "~marker_topic", "/paf/hero/Radar/Marker"
             ),
             MarkerArray,
             queue_size=10,
         )
         self.cluster_info_radar_publisher = rospy.Publisher(
             rospy.get_param(
-                "~image_distance_topic", "/paf/hero/Radar/ClusterInfo"
+                "~cluster_info_topic", "/paf/hero/Radar/ClusterInfo"
             ),
             String,
             queue_size=10,
         )

Comment on lines 31 to 63
dataarray = pointcloud2_to_array(data)

# radar position z=0.7
dataarray = filter_data(dataarray, min_z=-0.6)

clustered_data = cluster_data(dataarray)

# transformed_data = transform_data_to_2d(dataarray)

cloud = create_pointcloud2(dataarray, clustered_data.labels_)
self.visualization_radar_publisher.publish(cloud)

points_with_labels = np.hstack((dataarray,
                                clustered_data.labels_.reshape(-1, 1)))
bounding_boxes = generate_bounding_boxes(points_with_labels)

marker_array = MarkerArray()
for label, bbox in bounding_boxes:
    if label != -1:
        marker = create_bounding_box_marker(label, bbox)
        marker_array.markers.append(marker)
        # can be used for extra debugging
        # min_marker, max_marker = create_min_max_markers(label, bbox)
        # marker_array.markers.append(min_marker)
        # marker_array.markers.append(max_marker)

marker_array = clear_old_markers(marker_array, max_id=len(bounding_boxes) - 1)

self.marker_visualization_radar_publisher.publish(marker_array)

cluster_info = generate_cluster_info(clustered_data, dataarray, marker_array,
                                     bounding_boxes)
self.cluster_info_radar_publisher.publish(cluster_info)

🛠️ Refactor suggestion

Add error handling for ROS message processing.

The callback method should include try-except blocks to handle potential errors during message processing, such as malformed messages or processing failures.

Apply this diff to add error handling:

-        dataarray = pointcloud2_to_array(data)
-
-        # radar position z=0.7
-        dataarray = filter_data(dataarray, min_z=-0.6)
-
-        clustered_data = cluster_data(dataarray)
+        try:
+            dataarray = pointcloud2_to_array(data)
+
+            # radar position z=0.7
+            dataarray = filter_data(dataarray, min_z=-0.6)
+
+            clustered_data = cluster_data(dataarray)
+
+            cloud = create_pointcloud2(dataarray, clustered_data.labels_)
+            self.visualization_radar_publisher.publish(cloud)
+
+            points_with_labels = np.hstack((dataarray,
+                                          clustered_data.labels_.reshape(-1, 1)))
+            bounding_boxes = generate_bounding_boxes(points_with_labels)
+
+            marker_array = MarkerArray()
+            for label, bbox in bounding_boxes:
+                if label != -1:
+                    marker = create_bounding_box_marker(label, bbox)
+                    marker_array.markers.append(marker)
+
+            marker_array = clear_old_markers(marker_array, max_id=len(bounding_boxes) - 1)
+            self.marker_visualization_radar_publisher.publish(marker_array)
+
+            cluster_info = generate_cluster_info(clustered_data, dataarray, marker_array,
+                                               bounding_boxes)
+            self.cluster_info_radar_publisher.publish(cluster_info)
+        except Exception as e:
+            rospy.logerr(f"Error processing radar data: {str(e)}")


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (4)
code/perception/src/radar_node.py (4)

88-88: Fix parameter name typo.

The parameter name clusterInfo_topic_topic appears to have a duplicate "topic" suffix.

-            rospy.get_param("~clusterInfo_topic_topic", "/paf/hero/Radar/ClusterInfo"),
+            rospy.get_param("~cluster_info_topic", "/paf/hero/Radar/ClusterInfo"),

166-180: Fix inconsistent docstring in cluster_data function.

The docstring's return type description doesn't match the actual return type (DBSCAN object).

     """
     Clusters the radar data using the DBSCAN algorithm
 
     Args:
         data (np.ndarray): data array which should be clustered
         eps (float, optional): maximum distance of points. Defaults to 0.8.
         min_samples (int, optional): min samples for 1 cluster. Defaults to 3.
 
     Returns:
-        dict: A dictionary where the keys are cluster labels (int) and the values
-              are the number of points in each cluster. Returns an empty dictionary
-              if no points are available.
-        DBSCAN: A DBSCAN clustering object containing labels and core sample indices
+        DBSCAN: A DBSCAN clustering object containing labels and core sample indices.
+               Returns None if no points are available.
     """

198-207: Complete the docstring in create_pointcloud2 function.

The docstring is incomplete with placeholders.

-    """_summary_
+    """
+    Creates a PointCloud2 message for visualization with colored points based on cluster labels.
 
     Args:
-        clustered_points (dict): clustered points after dbscan
-        cluster_labels (_type_): _description_
+        clustered_points (np.ndarray): Array of points with format [x, y, z, velocity]
+        cluster_labels (np.ndarray): Array of cluster labels for each point
 
     Returns:
         PointCloud2: pointcloud which can be published
     """

31-65: Consider memory optimization for large point clouds.

The current implementation keeps multiple copies of the point cloud data in memory (original, filtered, with labels). For large point clouds, this could lead to high memory usage.

Consider:

  1. Processing points in batches for very large point clouds
  2. Using in-place operations where possible
  3. Clearing intermediate arrays when no longer needed (see the sketch below)
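
A rough sketch of these ideas (batch size and helper name are illustrative assumptions):

    import numpy as np

    def process_pointcloud(points: np.ndarray, batch_size: int = 50_000):
        # Boolean masking copies once; rebinding lets the original array be collected.
        points = points[points[:, 2] > -0.6]
        for start in range(0, len(points), batch_size):
            batch = points[start:start + batch_size]  # a view, not a copy
            batch[:, 2] = 0.0                         # in-place transform on the view
        del points                                    # drop the reference when done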
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 57e720d and 40a4595.

📒 Files selected for processing (1)
  • code/perception/src/radar_node.py (4 hunks)
🔇 Additional comments (2)
code/perception/src/radar_node.py (2)

216-224: ⚠️ Potential issue

Fix potential IndexError in color assignment.

The color assignment colors[label] could raise an IndexError if the label is larger than the number of generated colors.

     unique_labels = np.unique(cluster_labels)
     colors = generate_color_map(len(unique_labels))
+    label_to_color = {label: colors[i] for i, label in enumerate(unique_labels) if label != -1}
 
     for i, point in enumerate(clustered_points):
         x, y, z, v = point
         label = cluster_labels[i]
 
         if label == -1:
             r, g, b = 128, 128, 128
         else:
-            r, g, b = colors[label]
+            r, g, b = label_to_color.get(label, (255, 255, 255))  # Default to white if label not found

Likely invalid or redundant comment.


31-65: 🛠️ Refactor suggestion

Add error handling to the callback method.

The callback method processes complex data transformations and publishes to multiple topics, but lacks error handling. This could lead to node crashes if any step fails.

Apply this diff to add error handling:

-        dataarray = pointcloud2_to_array(data)
-        dataarray = filter_data(dataarray, min_z=-0.6)
-        clustered_data = cluster_data(dataarray)
-        cloud = create_pointcloud2(dataarray, clustered_data.labels_)
-        self.visualization_radar_publisher.publish(cloud)
+        try:
+            dataarray = pointcloud2_to_array(data)
+            dataarray = filter_data(dataarray, min_z=-0.6)
+            
+            if len(dataarray) == 0:
+                rospy.logwarn("No valid radar points after filtering")
+                return
+                
+            clustered_data = cluster_data(dataarray)
+            cloud = create_pointcloud2(dataarray, clustered_data.labels_)
+            self.visualization_radar_publisher.publish(cloud)
+            
+            points_with_labels = np.hstack((dataarray, clustered_data.labels_.reshape(-1, 1)))
+            bounding_boxes = generate_bounding_boxes(points_with_labels)
+            
+            marker_array = MarkerArray()
+            for label, bbox in bounding_boxes:
+                if label != -1:
+                    marker = create_bounding_box_marker(label, bbox)
+                    marker_array.markers.append(marker)
+            
+            marker_array = clear_old_markers(marker_array, max_id=len(bounding_boxes) - 1)
+            self.marker_visualization_radar_publisher.publish(marker_array)
+            
+            cluster_info = generate_cluster_info(clustered_data, dataarray, marker_array, bounding_boxes)
+            self.cluster_info_radar_publisher.publish(cluster_info)
+        except Exception as e:
+            rospy.logerr(f"Error processing radar data: {str(e)}")

Likely invalid or redundant comment.

@Ralf524 Ralf524 merged commit 89d0551 into main Dec 12, 2024
4 checks passed
Linked issue that may be closed by this pull request: [Feature]: Visualization of radar data