540 feature visualization of radar data #559
Conversation
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (4)
code/perception/src/radar_node.py (4)
`192-230`: **Complete the docstring for the `create_pointcloud2` function.**

The docstring for `create_pointcloud2` is incomplete and contains placeholders like `_summary_`. Please provide a complete description of the function, its parameters, and its return value.

Apply this diff to update the docstring:

```diff
 def create_pointcloud2(clustered_points, cluster_labels):
-    """_summary_
+    """
+    Creates a `PointCloud2` message from clustered points with assigned colors.

     Args:
         clustered_points (np.ndarray): Array of clustered points with shape (N, 4).
         cluster_labels (np.ndarray): Array of cluster labels corresponding to each point.

     Returns:
         PointCloud2: A ROS `PointCloud2` message that can be published for visualization.
     """
```
`232-249`: **Remove the unused `transform_data_to_2d` function or implement it fully.**

The function `transform_data_to_2d` appears unused in the code and has an incomplete docstring. If it is unnecessary, consider removing it to clean up the codebase. If it will be used later, complete its implementation and documentation.
`251-287`: **Clean up commented-out code in `calculate_aabb`.**

The function `calculate_aabb` contains commented-out code for 2D bounding boxes. To improve readability, consider removing unused code or refactoring it if it is needed elsewhere.
`489-499`: **Ensure consistent handling of noise points (label `-1`) in `generate_cluster_info`.**

In `generate_cluster_info`, you process clusters including those with label `-1`, which typically represents noise in DBSCAN. Elsewhere in the code, label `-1` is skipped. For consistency, decide whether to include or exclude noise points in the cluster information.
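A minimal sketch (plain Python, with a hypothetical label list) of the consistent-exclusion option, where noise points are skipped everywhere:

```python
# DBSCAN marks noise points with the label -1. If the rest of the node
# skips noise, the cluster summary should skip it too.
def summarize_clusters(labels):
    """Count points per cluster, excluding noise (label -1)."""
    counts = {}
    for label in labels:
        if label == -1:  # noise point, skipped for consistency
            continue
        counts[label] = counts.get(label, 0) + 1
    return counts

print(summarize_clusters([0, 0, 1, -1, 1, 1, -1]))  # → {0: 2, 1: 3}
```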
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📒 Files selected for processing (2)

- `code/perception/src/radar_node.py` (4 hunks)
- `code/requirements.txt` (1 hunks)

✅ Files skipped from review due to trivial changes (1)

- `code/requirements.txt`
🔇 Additional comments (2)
code/perception/src/radar_node.py (2)
`175-178`: 🛠️ Refactor suggestion

**Avoid returning `None` in `cluster_data` when data is empty.**

Returning `None` can lead to an `AttributeError` when the caller tries to access attributes like `labels_`. Consider returning an empty DBSCAN object or handling the empty case separately.

Apply this diff to handle empty data:

```diff
 if len(data) == 0:
-    return None
+    clustered_points = DBSCAN()
+    clustered_points.labels_ = np.array([])
+    return clustered_points
 scaler = StandardScaler()
 data_scaled = scaler.fit_transform(data)
 clustered_points = DBSCAN(eps=eps, min_samples=min_samples).fit(data_scaled)
```

Likely invalid or redundant comment.
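Whichever sentinel `cluster_data` ends up returning, a caller-side guard avoids the `AttributeError`. A sketch, with a stand-in class playing the role of a fitted DBSCAN object:

```python
def safe_labels(clustered):
    """Return the label list, or an empty list when clustering was skipped."""
    if clustered is None or not hasattr(clustered, "labels_"):
        return []
    return list(clustered.labels_)

class FakeResult:  # hypothetical stand-in for a fitted DBSCAN object
    labels_ = [0, 0, 1]

print(safe_labels(None))          # clustering was skipped → []
print(safe_labels(FakeResult()))  # normal case → [0, 0, 1]
```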
`120-121`: **Verify the presence of the 'Velocity' field in `pointcloud2_to_array`.**

The function `pointcloud2_to_array` assumes that incoming `PointCloud2` messages contain a 'Velocity' field. If this field is missing, it will raise a `KeyError`.

Run the following script to check whether all `PointCloud2` messages have the 'Velocity' field:

✅ Verification successful

Based on the documentation in `radar_node.md`, which explicitly lists 'Velocity' as one of the radar data output fields for each detected point, and the CARLA radar sensor configuration in `dev_objects.json`, I can now generate the final response:

**The 'Velocity' field is a standard output of the CARLA radar sensor.**

The radar node documentation explicitly confirms that 'Velocity' is one of the standard data outputs for each detected point from the radar sensor. This is consistent with the CARLA simulator's radar sensor implementation, which provides velocity measurements for detected objects.
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Description: Verify that all PointCloud2 messages contain the 'Velocity' field.
# Test: Search for 'PointField' definitions in the message files. Expect: 'Velocity' field exists.
fd -e msg 'PointCloud2' | xargs cat | rg 'PointField.*Velocity'

# Alternatively, check sample messages for the 'Velocity' field.
# Note: Replace '/path/to/your/pointcloud_data' with the actual data directory.
rg 'Velocity' /path/to/your/pointcloud_data/*.pcd
```

Length of output: 191

Script:

```shell
#!/bin/bash
# Analyze the codebase more thoroughly.
# First, find the implementation of pointcloud2_to_array.
rg -l "pointcloud2_to_array"
# Then, look for any PointCloud2 message definitions or field specifications.
rg -l "PointCloud2"
# Also check for any radar-related message definitions that might contain velocity.
fd -e msg . | xargs cat
# Examine the radar node implementation for context.
cat code/perception/src/radar_node.py
```

Length of output: 18329

Script:

```shell
#!/bin/bash
# Check the CARLA radar sensor configuration and message definitions.
rg -A 5 "RADAR" code/perception/src/radar_node.py
# Check if there is any documentation about the radar data format.
fd -e md . | xargs rg -l "radar"
# Examine any radar-related configuration files.
fd -e yaml -e json . | xargs rg -l "radar"
```

Length of output: 1545

Script:

```shell
#!/bin/bash
# Check the radar node documentation and configuration.
cat doc/perception/radar_node.md
# Examine the radar configuration used with CARLA.
cat code/agent/config/dev_objects.json
# Look for any velocity-related fields in the codebase.
rg -B 2 -A 2 "Velocity" code/perception/src/radar_node.py
```

Length of output: 7370
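Even though 'Velocity' is standard for the CARLA radar sensor, a defensive check before indexing costs little. A sketch, using `SimpleNamespace` as a stand-in for a `sensor_msgs/PointCloud2` message (the field names here are illustrative):

```python
from types import SimpleNamespace

def has_field(msg, name):
    """Check whether a PointCloud2-like message declares a named field."""
    return any(f.name == name for f in msg.fields)

# Stand-in for a PointCloud2 message with typical radar fields
msg = SimpleNamespace(fields=[SimpleNamespace(name=n)
                              for n in ("x", "y", "z", "Velocity")])

print(has_field(msg, "Velocity"))   # → True
print(has_field(msg, "Intensity"))  # → False
```

In the node, this check could guard `pointcloud2_to_array` and log a warning instead of raising a `KeyError`.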
code/perception/src/radar_node.py
Outdated
```python
        for marker in marker_array.markers:
            if marker.id >= max_id:
                marker.action = Marker.DELETE
        return marker_array
```
**Correct the marker deletion logic in `clear_old_markers`.**

The current logic in `clear_old_markers` may not delete old markers correctly. With bounding box IDs running from `0` to `max_id`, the condition `if marker.id >= max_id:` also deletes the marker with ID equal to `max_id`; it should only delete markers with IDs greater than `max_id`.

Apply this diff to fix the condition:

```diff
     for marker in marker_array.markers:
-        if marker.id >= max_id:
+        if marker.id > max_id:
             marker.action = Marker.DELETE
     return marker_array
```
```python
            if label == -1:
                r, g, b = 128, 128, 128
            else:
                r, g, b = colors[label]
```
**Handle a potential `IndexError` when assigning colors in `create_pointcloud2`.**

In the `create_pointcloud2` function, the line `r, g, b = colors[label]` may raise an `IndexError` if `label` exceeds the length of the `colors` array, especially when labels are not sequential or start from `-1`.

Apply this diff to ensure labels map correctly to colors:

```diff
 unique_labels = np.unique(cluster_labels)
 colors = generate_color_map(len(unique_labels))
+label_color_map = {label: colors[i]
+                   for i, label in enumerate(l for l in unique_labels if l != -1)}
 for i, point in enumerate(clustered_points):
     x, y, z, v = point
     label = cluster_labels[i]
     if label == -1:
         r, g, b = 128, 128, 128
     else:
-        r, g, b = colors[label]
+        r, g, b = label_color_map.get(label, (255, 255, 255))  # default to white if label not in map
```
Committable suggestion skipped: line range outside the PR's diff.
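The dictionary-based mapping stays correct even when labels are non-sequential, which a list indexed as `colors[label]` is not. A sketch with hypothetical labels and colors:

```python
def build_label_color_map(unique_labels, colors):
    """Map each non-noise label to a color, regardless of the label values."""
    non_noise = [label for label in unique_labels if label != -1]
    return {label: colors[i] for i, label in enumerate(non_noise)}

colors = [(255, 0, 0), (0, 255, 0)]
# Labels 3 and 7 would overflow a two-element list indexed as colors[label].
cmap = build_label_color_map([-1, 3, 7], colors)

print(cmap[3])                        # → (255, 0, 0)
print(cmap.get(99, (255, 255, 255)))  # unknown label falls back to white
```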
code/perception/src/radar_node.py
Outdated
```python
def cluster_data(data, eps=0.8, min_samples=3):
    """_summary_

    Args:
        data (np.ndarray): data array which should be clustered
        eps (float, optional): maximum distance of points. Defaults to 0.8.
        min_samples (int, optional): min samples for 1 cluster. Defaults to 3.

    Returns:
        dict: A dictionary where the keys are cluster labels (int) and the values
            are the number of points in each cluster. Returns an empty dictionary
            if no points are available.
    """
```
**Complete the docstring and correct the return type in `cluster_data`.**

The docstring for the `cluster_data` function is incomplete and contains placeholders like `_summary_`. Additionally, the stated return type is incorrect; the function returns a DBSCAN object, not a dictionary.

Apply this diff to update the docstring:

```diff
 def cluster_data(data, eps=0.8, min_samples=3):
-    """_summary_
+    """
+    Clusters the radar data using the DBSCAN algorithm.

     Args:
         data (np.ndarray): Data array to be clustered.
         eps (float, optional): Maximum distance between points to be considered
             in the same neighborhood. Defaults to 0.8.
         min_samples (int, optional): Minimum number of samples required to form
             a dense region (cluster). Defaults to 3.

     Returns:
-        dict: A dictionary where the keys are cluster labels (int) and the values
-            are the number of points in each cluster. Returns an empty dictionary
-            if no points are available.
+        DBSCAN: A DBSCAN clustering object containing labels and core sample indices.
     """
     if len(data) == 0:
-        return {}
+        return None
     scaler = StandardScaler()
     data_scaled = scaler.fit_transform(data)
     clustered_points = DBSCAN(eps=eps, min_samples=min_samples).fit(data_scaled)
     return clustered_points
```
code/perception/src/radar_node.py
Outdated
```python
def filter_data(data, min_x=-100, max_x=100, min_y=-100, max_y=100, min_z=-1, max_z=100,
                max_distance=100):
    """
    Filters and clusters points from a ROS PointCloud2 message based on DBSCAN
    clustering.
    Filters radar data based on specified spatial and distance constraints.

    Parameters:
    - pointcloud_msg: sensor_msgs/PointCloud2
        The ROS PointCloud2 message containing the 3D points.
    - max_distance: float
        Maximum distance to consider points. Points beyond this distance are
        discarded.
    - eps: float, optional (default: 1.0)
        The maximum distance between two points for them to be considered in
        the same cluster.
    - min_samples: int, optional (default: 2)
        The minimum number of points required to form a cluster.

    This function applies multiple filtering criteria to the input radar data.
    Points outside these bounds are excluded from the output.

    Args:
        data (np.ndarray): A 2D numpy array containing radar data, where each row
            represents a data point with the format [x, y, z, distance]. The array
            shape is (N, 4), where N is the number of points.
        min_x (float, optional): Minimum value for the x-coordinate. Default is -1.
        max_x (float, optional): Maximum value for the x-coordinate. Default is 1.
        min_y (float, optional): Minimum value for the y-coordinate. Default is 1.
        max_y (float, optional): Maximum value for the y-coordinate. Default is 1.
        min_z (float, optional): Minimum value for the z-coordinate. Default is -0.7.
        max_z (float, optional): Maximum value for the z-coordinate. Default is 1.3.
        max_distance (float, optional): Maximum allowable distance of the point from
            the sensor. Default is 100.

    Returns:
    - dict
        A dictionary where the keys are cluster labels (int) and the values
        are the number of points in each cluster. Returns an empty dictionary
        if no points are available.
    np.ndarray: A numpy array containing only the filtered data points that meet
        the specified criteria.
    """
```
**Fix inconsistent default parameter values in the `filter_data` docstring.**

The default parameter values stated in the docstring of `filter_data` do not match those in the function signature. For example, `min_x` defaults to `-100` in the function, but the docstring says `-1`.

Apply this diff to correct the docstring:

```diff
 def filter_data(data, min_x=-100, max_x=100, min_y=-100, max_y=100, min_z=-1, max_z=100,
                 max_distance=100):
     """
     Filters radar data based on specified spatial and distance constraints.

     This function applies multiple filtering criteria to the input radar data.
     Points outside these bounds are excluded from the output.

     Args:
         data (np.ndarray): A 2D numpy array containing radar data, where each row
             represents a data point with the format [x, y, z, Velocity]. The array
             shape is (N, 4), where N is the number of points.
-        min_x (float, optional): Minimum value for the x-coordinate. Default is -1.
+        min_x (float, optional): Minimum value for the x-coordinate. Default is -100.
-        max_x (float, optional): Maximum value for the x-coordinate. Default is 1.
+        max_x (float, optional): Maximum value for the x-coordinate. Default is 100.
-        min_y (float, optional): Minimum value for the y-coordinate. Default is 1.
+        min_y (float, optional): Minimum value for the y-coordinate. Default is -100.
-        max_y (float, optional): Maximum value for the y-coordinate. Default is 1.
+        max_y (float, optional): Maximum value for the y-coordinate. Default is 100.
-        min_z (float, optional): Minimum value for the z-coordinate. Default is -0.7.
+        min_z (float, optional): Minimum value for the z-coordinate. Default is -1.
-        max_z (float, optional): Maximum value for the z-coordinate. Default is 1.3.
+        max_z (float, optional): Maximum value for the z-coordinate. Default is 100.
         max_distance (float, optional): Maximum allowable distance of the point from
             the sensor. Default is 100.

     Returns:
         np.ndarray: A numpy array containing only the filtered data points that meet
             the specified criteria.
     """
```
code/perception/src/radar_node.py
Outdated
def generate_cluster_info(clusters, data, marker_array, bounding_boxes): | ||
""" | ||
Generates information about clusters, including the label, number of points, | ||
markers, and bounding boxes. | ||
|
||
Args: | ||
clusters (DBSCAN): The clustered data, containing the labels for each point. | ||
data (numpy.ndarray): | ||
The point cloud data, typically with columns [x, y, z, distance]. | ||
marker_array (MarkerArray): | ||
The array of RViz markers associated with the clusters. | ||
bounding_boxes (list): The list of bounding boxes for each detected object. | ||
|
||
Returns: | ||
str: A JSON string containing the information about each cluster, including: | ||
- "label": The cluster label. | ||
- "points_count": The number of points in the cluster. | ||
- "Anzahl marker": The number of markers in the MarkerArray. | ||
- "Anzahl Boundingboxen": The number of bounding boxes. | ||
""" | ||
cluster_info = [] | ||
|
||
for label in set(clusters.labels_): | ||
cluster_points = data[clusters.labels_ == label] | ||
cluster_size = len(cluster_points) | ||
if label != -1: | ||
cluster_info.append({ | ||
"label": int(label), | ||
"points_count": cluster_size, | ||
"Anzahl marker": len(marker_array.markers), | ||
"Anzahl Boundingboxen": len(bounding_boxes) | ||
}) | ||
|
||
return json.dumps(cluster_info) | ||
|
🛠️ Refactor suggestion

**Standardize JSON keys to English in `generate_cluster_info`.**

The function `generate_cluster_info` uses mixed languages in the JSON keys, such as `"Anzahl marker"` and `"Anzahl Boundingboxen"` (German for "number of markers" and "number of bounding boxes"). For consistency and clarity, please standardize all keys to English.

Apply this diff to update the JSON keys:

```diff
 cluster_info.append({
     "label": int(label),
     "points_count": cluster_size,
-    "Anzahl marker": len(marker_array.markers),
-    "Anzahl Boundingboxen": len(bounding_boxes)
+    "num_markers": len(marker_array.markers),
+    "num_bounding_boxes": len(bounding_boxes)
 })
```
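With the English keys, the published string looks like the sketch below (plain Python, hypothetical labels and counts; the real function takes a DBSCAN object and a MarkerArray instead of plain lists):

```python
import json

def cluster_summary(labels, num_markers, num_bounding_boxes):
    """Build the per-cluster JSON string with English keys."""
    info = []
    for label in sorted(set(labels)):
        if label == -1:  # skip DBSCAN noise
            continue
        info.append({
            "label": label,
            "points_count": labels.count(label),
            "num_markers": num_markers,
            "num_bounding_boxes": num_bounding_boxes,
        })
    return json.dumps(info)

summary = cluster_summary([0, 0, 1, -1], num_markers=2, num_bounding_boxes=2)
print(summary)
```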
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (1)
code/perception/src/radar_node.py (1)
`242-248`: **Simplify the data transformation logic.**

The point transformation can be simplified by using a copy and a single assignment instead of copying each column individually.

Apply this diff to simplify the transformation:

```diff
-    transformed_points = clustered_data
-    transformed_points[:, 0] = clustered_data[:, 0]
-    transformed_points[:, 1] = clustered_data[:, 1]
-    transformed_points[:, 2] = 0
-    transformed_points[:, 3] = clustered_data[:, 3]
+    transformed_points = clustered_data.copy()
+    transformed_points[:, 2] = 0
```
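The `.copy()` matters: plain assignment only creates an alias of the same array, so zeroing the z column would silently modify the input. A small numpy sketch of the difference:

```python
import numpy as np

clustered_data = np.array([[1.0, 2.0, 3.0, 4.0]])

alias = clustered_data       # no copy: same underlying buffer
alias[:, 2] = 0
print(clustered_data[0, 2])  # → 0.0, the original was modified

clustered_data = np.array([[1.0, 2.0, 3.0, 4.0]])
transformed = clustered_data.copy()  # independent buffer
transformed[:, 2] = 0
print(clustered_data[0, 2])  # → 3.0, the original is untouched
```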
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📒 Files selected for processing (1)

- `code/perception/src/radar_node.py` (4 hunks)
🔇 Additional comments (4)

code/perception/src/radar_node.py (4)

`124-159`: **Fix inconsistent default parameter values in the docstring.**

The default parameter values in the docstring don't match the function signature.

`162-181`: **Complete the docstring and correct the return type.**

The docstring is incomplete and the return type is incorrect. The function returns a DBSCAN object, not a dictionary.

`214-218`: **Handle a potential `IndexError` when assigning colors.**

The color assignment could raise an `IndexError` if the label exceeds the length of the colors array.

`467-501`: **Standardize JSON keys to English.**

The function uses mixed languages in the JSON keys.
code/perception/src/radar_node.py
Outdated
```python
        self.visualization_radar_publisher = rospy.Publisher(
            rospy.get_param(
                "~image_distance_topic", "/paf/hero/Radar/Visualization"
            ),
            PointCloud2,
            queue_size=10,
        )
        self.marker_visualization_radar_publisher = rospy.Publisher(
            rospy.get_param(
                "~image_distance_topic", "/paf/hero/Radar/Marker"
            ),
            MarkerArray,
            queue_size=10,
        )
        self.cluster_info_radar_publisher = rospy.Publisher(
            rospy.get_param(
                "~image_distance_topic", "/paf/hero/Radar/ClusterInfo"
            ),
            String,
            queue_size=10,
        )
```
**Use distinct parameter names for the different publisher topics.**

All publishers are using the same parameter name `"~image_distance_topic"` while publishing to different topics. This could lead to confusion and potential misconfiguration.

Apply this diff to use distinct parameter names:

```diff
 self.visualization_radar_publisher = rospy.Publisher(
     rospy.get_param(
-        "~image_distance_topic", "/paf/hero/Radar/Visualization"
+        "~visualization_topic", "/paf/hero/Radar/Visualization"
     ),
     PointCloud2,
     queue_size=10,
 )
 self.marker_visualization_radar_publisher = rospy.Publisher(
     rospy.get_param(
-        "~image_distance_topic", "/paf/hero/Radar/Marker"
+        "~marker_topic", "/paf/hero/Radar/Marker"
     ),
     MarkerArray,
     queue_size=10,
 )
 self.cluster_info_radar_publisher = rospy.Publisher(
     rospy.get_param(
-        "~image_distance_topic", "/paf/hero/Radar/ClusterInfo"
+        "~cluster_info_topic", "/paf/hero/Radar/ClusterInfo"
     ),
     String,
     queue_size=10,
 )
```
code/perception/src/radar_node.py
Outdated
```python
        dataarray = pointcloud2_to_array(data)

        # radar position z=0.7
        dataarray = filter_data(dataarray, min_z=-0.6)

        clustered_data = cluster_data(dataarray)

        # transformed_data = transform_data_to_2d(dataarray)

        cloud = create_pointcloud2(dataarray, clustered_data.labels_)
        self.visualization_radar_publisher.publish(cloud)

        points_with_labels = np.hstack((dataarray,
                                        clustered_data.labels_.reshape(-1, 1)))
        bounding_boxes = generate_bounding_boxes(points_with_labels)

        marker_array = MarkerArray()
        for label, bbox in bounding_boxes:
            if label != -1:
                marker = create_bounding_box_marker(label, bbox)
                marker_array.markers.append(marker)
            # can be used for extra debugging
            # min_marker, max_marker = create_min_max_markers(label, bbox)
            # marker_array.markers.append(min_marker)
            # marker_array.markers.append(max_marker)

        marker_array = clear_old_markers(marker_array, max_id=len(bounding_boxes) - 1)

        self.marker_visualization_radar_publisher.publish(marker_array)

        cluster_info = generate_cluster_info(clustered_data, dataarray, marker_array,
                                             bounding_boxes)
        self.cluster_info_radar_publisher.publish(cluster_info)
```
🛠️ Refactor suggestion
Add error handling for ROS message processing.
The callback method should include try-except blocks to handle potential errors during message processing, such as malformed messages or processing failures.
Apply this diff to add error handling:
-        dataarray = pointcloud2_to_array(data)
-
-        # radar position z=0.7
-        dataarray = filter_data(dataarray, min_z=-0.6)
-
-        clustered_data = cluster_data(dataarray)
+        try:
+            dataarray = pointcloud2_to_array(data)
+
+            # radar position z=0.7
+            dataarray = filter_data(dataarray, min_z=-0.6)
+
+            clustered_data = cluster_data(dataarray)
+
+            cloud = create_pointcloud2(dataarray, clustered_data.labels_)
+            self.visualization_radar_publisher.publish(cloud)
+
+            points_with_labels = np.hstack((dataarray,
+                                            clustered_data.labels_.reshape(-1, 1)))
+            bounding_boxes = generate_bounding_boxes(points_with_labels)
+
+            marker_array = MarkerArray()
+            for label, bbox in bounding_boxes:
+                if label != -1:
+                    marker = create_bounding_box_marker(label, bbox)
+                    marker_array.markers.append(marker)
+
+            marker_array = clear_old_markers(marker_array, max_id=len(bounding_boxes) - 1)
+            self.marker_visualization_radar_publisher.publish(marker_array)
+
+            cluster_info = generate_cluster_info(clustered_data, dataarray, marker_array,
+                                                 bounding_boxes)
+            self.cluster_info_radar_publisher.publish(cluster_info)
+        except Exception as e:
+            rospy.logerr(f"Error processing radar data: {str(e)}")
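The try/except pattern suggested above can also be factored into a small decorator so every sensor callback gets the same guard. The sketch below is ROS-free (Python's `logging` stands in for `rospy.logerr`); the names are illustrative and not part of the PR:

```python
import logging

def guarded_callback(func):
    """Wrap a message callback so a failure on one message is logged
    instead of propagating and killing the node."""
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            logging.exception("Error processing radar data")
            return None
    return wrapper

@guarded_callback
def callback(msg):
    # deliberately index into the message so malformed input triggers the guard
    return msg["points"]

print(callback({"points": [1, 2, 3]}))  # → [1, 2, 3]
print(callback("not a dict"))           # → None (error is logged, not raised)
```

The decorator keeps the processing pipeline itself free of error-handling noise while still ensuring one bad `PointCloud2` message cannot stop the node.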
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (4)
code/perception/src/radar_node.py (4)
88-88: Fix parameter name typo.

The parameter name clusterInfo_topic_topic appears to have a duplicate "topic" suffix.

- rospy.get_param("~clusterInfo_topic_topic", "/paf/hero/Radar/ClusterInfo"),
+ rospy.get_param("~cluster_info_topic", "/paf/hero/Radar/ClusterInfo"),
166-180: Fix inconsistent docstring in cluster_data function.

The docstring's return type description doesn't match the actual return type (DBSCAN object).

      """
      Clusters the radar data using the DBSCAN algorithm

      Args:
          data (np.ndarray): data array which should be clustered
          eps (float, optional): maximum distance of points. Defaults to 0.8.
          min_samples (int, optional): min samples for 1 cluster. Defaults to 3.

      Returns:
-         dict: A dictionary where the keys are cluster labels (int) and the values
-         are the number of points in each cluster. Returns an empty dictionary
-         if no points are available.
-         DBSCAN: A DBSCAN clustering object containing labels and core sample indices
+         DBSCAN: A DBSCAN clustering object containing labels and core sample indices.
+         Returns None if no points are available.
      """
198-207: Complete the docstring in create_pointcloud2 function.

The docstring is incomplete with placeholders.

-     """_summary_
+     """
+     Creates a PointCloud2 message for visualization with colored points based on cluster labels.

      Args:
-         clustered_points (dict): clustered points after dbscan
-         cluster_labels (_type_): _description_
+         clustered_points (np.ndarray): Array of points with format [x, y, z, velocity]
+         cluster_labels (np.ndarray): Array of cluster labels for each point

      Returns:
          PointCloud2: pointcloud which can be published
      """
31-65: Consider memory optimization for large point clouds.

The current implementation keeps multiple copies of the point cloud data in memory (original, filtered, with labels). For large point clouds, this could lead to high memory usage.
Consider:
- Processing points in batches for very large point clouds
- Using in-place operations where possible
- Clearing intermediate arrays when no longer needed
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
code/perception/src/radar_node.py
(4 hunks)
🔇 Additional comments (2)
code/perception/src/radar_node.py (2)
216-224: Fix potential IndexError in color assignment.

The color assignment colors[label] could raise an IndexError if the label is larger than the number of generated colors.

      unique_labels = np.unique(cluster_labels)
      colors = generate_color_map(len(unique_labels))
+     label_to_color = {label: colors[i] for i, label in enumerate(unique_labels) if label != -1}
      for i, point in enumerate(clustered_points):
          x, y, z, v = point
          label = cluster_labels[i]
          if label == -1:
              r, g, b = 128, 128, 128
          else:
-             r, g, b = colors[label]
+             r, g, b = label_to_color.get(label, (255, 255, 255))  # Default to white if label not found
Likely invalid or redundant comment.
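The dictionary-based fix works because DBSCAN labels are not guaranteed to form a contiguous 0..N-1 range once clusters are filtered out, so indexing a color array by label can run past its end. A standalone sketch of such a mapping — the grey-for-noise convention comes from the review, everything else (function name, random palette) is assumed:

```python
import numpy as np

def build_label_colors(cluster_labels, seed=42):
    """Map every distinct cluster label to an RGB triple.
    Noise points (label -1) get a fixed grey."""
    rng = np.random.default_rng(seed)
    mapping = {}
    for label in np.unique(cluster_labels):
        if label == -1:
            mapping[int(label)] = (128, 128, 128)
        else:
            mapping[int(label)] = tuple(int(c) for c in rng.integers(0, 256, size=3))
    return mapping

labels = np.array([-1, 0, 0, 3, 7])   # note the gaps: labels 1, 2, 4-6 are absent
colors = build_label_colors(labels)
print(colors[-1])        # → (128, 128, 128)
print(sorted(colors))    # → [-1, 0, 3, 7]
```

Keying by label instead of by position makes the lookup safe regardless of which cluster IDs survive filtering.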
31-65: 🛠️ Refactor suggestion

Add error handling to the callback method.

The callback method processes complex data transformations and publishes to multiple topics, but lacks error handling. This could lead to node crashes if any step fails.

Apply this diff to add error handling:

-        dataarray = pointcloud2_to_array(data)
-        dataarray = filter_data(dataarray, min_z=-0.6)
-        clustered_data = cluster_data(dataarray)
-        cloud = create_pointcloud2(dataarray, clustered_data.labels_)
-        self.visualization_radar_publisher.publish(cloud)
+        try:
+            dataarray = pointcloud2_to_array(data)
+            dataarray = filter_data(dataarray, min_z=-0.6)
+
+            if len(dataarray) == 0:
+                rospy.logwarn("No valid radar points after filtering")
+                return
+
+            clustered_data = cluster_data(dataarray)
+            cloud = create_pointcloud2(dataarray, clustered_data.labels_)
+            self.visualization_radar_publisher.publish(cloud)
+
+            points_with_labels = np.hstack((dataarray, clustered_data.labels_.reshape(-1, 1)))
+            bounding_boxes = generate_bounding_boxes(points_with_labels)
+
+            marker_array = MarkerArray()
+            for label, bbox in bounding_boxes:
+                if label != -1:
+                    marker = create_bounding_box_marker(label, bbox)
+                    marker_array.markers.append(marker)
+
+            marker_array = clear_old_markers(marker_array, max_id=len(bounding_boxes) - 1)
+            self.marker_visualization_radar_publisher.publish(marker_array)
+
+            cluster_info = generate_cluster_info(clustered_data, dataarray, marker_array, bounding_boxes)
+            self.cluster_info_radar_publisher.publish(cluster_info)
+        except Exception as e:
+            rospy.logerr(f"Error processing radar data: {str(e)}")
Likely invalid or redundant comment.
Description
The clustered radar data can be visualized as differently colored point clouds, 2D bounding boxes, or 3D bounding boxes.
Fixes # (540)
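The 3D boxes come from per-cluster min/max extents. A simplified, ROS-free sketch of what a function like generate_bounding_boxes plausibly computes — the exact PR implementation may differ, and the function name here is my own:

```python
import numpy as np

def axis_aligned_boxes(points_with_labels):
    """Return (label, (min_xyz, max_xyz)) for every non-noise cluster.
    Expects rows of [x, y, z, velocity, label]."""
    boxes = []
    labels = points_with_labels[:, -1]
    for label in np.unique(labels):
        if label == -1:
            continue  # skip DBSCAN noise points
        pts = points_with_labels[labels == label, :3]
        boxes.append((int(label), (pts.min(axis=0), pts.max(axis=0))))
    return boxes

points = np.array([
    [0.0, 0.0, 0.0, 1.0,  0],
    [1.0, 2.0, 3.0, 1.0,  0],
    [9.0, 9.0, 9.0, 0.0, -1],   # noise: gets no box
])
label, (lo, hi) = axis_aligned_boxes(points)[0]
print(label, lo, hi)  # → 0 [0. 0. 0.] [1. 2. 3.]
```

The min/max corners map directly onto an RViz cube marker: center = (lo + hi) / 2 and scale = hi - lo.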
Type of change
Please delete options that are not relevant.
Does this PR introduce a breaking change?
No
Most important changes
Which files/functionalities are most important in this PR? On which part should the reviewer focus?
radar_node.py
Checklist:
Summary by CodeRabbit
New Features
Bug Fixes
Chores
Added pyopenssl to the project requirements.