## Table of contents
- * [ROS1 and ROS2 legacy](#legacy)
+ * [ROS1 and ROS2 legacy](#ros1-and-ros2-legacy)
* [Installation](#installation)
* [Usage](#usage)
- * [Starting the camera node](#start-camera-node)
- * [Camera name and namespace](#camera-name-and-namespace)
+ * [Starting the camera node](#start-the-camera-node)
+ * [Camera name and namespace](#camera-name-and-camera-namespace)
* [Parameters](#parameters)
- * [ROS2-vs-Optical Coordination Systems](#coordination)
- * [TF from coordinate A to coordinate B](#tfs)
- * [Extrinsics from sensor A to sensor B](#extrinsics)
- * [Topics](#topics)
- * [RGBD Topic](#rgbd)
- * [Metadata Topic](#metadata)
- * [Post-Processing Filters](#filters)
- * [Available Services](#services)
- * [Efficient intra-process communication](#intra-process)
+ * [ROS2-vs-Optical Coordination Systems](#ros2robot-vs-opticalcamera-coordination-systems)
+ * [TF from coordinate A to coordinate B](#tf-from-coordinate-a-to-coordinate-b)
+ * [Extrinsics from sensor A to sensor B](#extrinsics-from-sensor-a-to-sensor-b)
+ * [Topics](#published-topics)
+ * [RGBD Topic](#rgbd-topic)
+ * [Metadata Topic](#metadata-topic)
+ * [Post-Processing Filters](#post-processing-filters)
+ * [Available Services](#available-services)
+ * [Efficient intra-process communication](#efficient-intra-process-communication)
* [Contributing](CONTRIBUTING.md)
* [License](LICENSE)
-
- Legacy
-
+# ROS1 and ROS2 Legacy
@@ -80,9 +78,7 @@
-
- Installation
-
+# Installation
@@ -143,8 +139,8 @@
- Install dependencies
```bash
sudo apt-get install python3-rosdep -y
- sudo rosdep init # "sudo rosdep init --include-eol-distros" for Eloquent and earlier
- rosdep update # "sudo rosdep update --include-eol-distros" for Eloquent and earlier
+ sudo rosdep init # "sudo rosdep init --include-eol-distros" for Foxy and earlier
+ rosdep update # "rosdep update --include-eol-distros" for Foxy and earlier
rosdep install -i --from-path src --rosdistro $ROS_DISTRO --skip-keys=librealsense2 -y
```
@@ -165,13 +161,9 @@
-
- Usage
-
+# Usage
-
- Start the camera node
-
+## Start the camera node
#### with ros2 run:
ros2 run realsense2_camera realsense2_camera_node
@@ -184,11 +176,8 @@
-
- Camera Name And Camera Namespace
-
+## Camera Name And Camera Namespace
-### Usage
The user can set the camera name and camera namespace to distinguish between cameras and platforms, which helps identify the right nodes and topics to work with.
### Example
@@ -203,7 +192,7 @@ User can set the camera name and camera namespace, to distinguish between camera
```ros2 launch realsense2_camera rs_launch.py camera_namespace:=robot1 camera_name:=D455_1```
- - With ros2 run (using remapping mechanisim [Reference](https://docs.ros.org/en/foxy/How-To-Guides/Node-arguments.html)):
+ - With ros2 run (using remapping mechanism [Reference](https://docs.ros.org/en/humble/How-To-Guides/Node-arguments.html)):
```ros2 run realsense2_camera realsense2_camera_node --ros-args -r __node:=D455_1 -r __ns:=robot1```
@@ -248,13 +237,9 @@ User can set the camera name and camera namespace, to distinguish between camera
/camera/camera/device_info
```
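A quick way to verify the resulting names (the `robot1`/`D455_1` values are taken from the example above; adjust them to your own setup) is to list the nodes and topics:
```
ros2 node list
ros2 topic list | grep /robot1/D455_1
```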
-
-
-
- Parameters
-
+## Parameters
### Available Parameters:
- For the entire list of parameters type `ros2 param list`.
@@ -269,13 +254,13 @@ User can set the camera name and camera namespace, to distinguish between camera
- They have, at least, the **profile** parameter.
  - The profile parameter is a string of the following format: `<width>X<height>X<fps>` (the dividing character can be X, x or ","; spaces are ignored).
- For example: ```depth_module.profile:=640x480x30 rgb_camera.profile:=1280x720x30```
- - Since infra, infra1, infra2, fisheye, fisheye1, fisheye2 and depth are all streams of the depth_module, their width, height and fps are defined by the same param **depth_module.profile**
+ - Since infra, infra1, infra2 and depth are all streams of the depth_module, their width, height and fps are defined by the same param **depth_module.profile**
- If the specified combination of parameters is not available by the device, the default or previously set configuration will be used.
  - Run ```ros2 param describe <node_name> <parameter_name>``` to get the list of supported profiles.
  - Note: The stream should be re-enabled for the change to take effect (see the runtime example at the end of this section).
- **<stream_name>_format**
- This parameter is a string used to select the stream format.
- - can be any of *infra, infra1, infra2, color, depth, fisheye, fisheye1, fisheye2*.
+ - can be any of *infra, infra1, infra2, color, depth*.
- For example: ```depth_module.depth_format:=Z16 depth_module.infra1_format:=y8 rgb_camera.color_format:=RGB8```
- This parameter supports both lower case and upper case letters.
- If the specified parameter is not available by the stream, the default or previously set configuration will be used.
@@ -286,14 +271,14 @@ User can set the camera name and camera namespace, to distinguish between camera
  - Run the ```rs-enumerate-devices``` command to see the list of profiles supported by the connected sensors.
- **enable_<stream_name>**:
- Choose whether to enable a specified stream or not. Default is true for images and false for orientation streams.
- - can be any of *infra, infra1, infra2, color, depth, fisheye, fisheye1, fisheye2, gyro, accel, pose*.
+ - can be any of *infra, infra1, infra2, color, depth, gyro, accel*.
- For example: ```enable_infra1:=true enable_color:=false```
- **enable_sync**:
  - Gathers the closest frames of different sensors (infrared, color and depth) to be sent with the same time tag.
  - This happens automatically when filters such as pointcloud are enabled.
- **<stream_name>_qos**:
- Sets the QoS by which the topic is published.
- - can be any of *infra, infra1, infra2, color, depth, fisheye, fisheye1, fisheye2, gyro, accel, pose*.
+ - can be any of *infra, infra1, infra2, color, depth, gyro, accel*.
- Available values are the following strings: `SYSTEM_DEFAULT`, `DEFAULT`, `PARAMETER_EVENTS`, `SERVICES_DEFAULT`, `PARAMETERS`, `SENSOR_DATA`.
- For example: ```depth_qos:=SENSOR_DATA```
- Reference: [ROS2 QoS profiles formal documentation](https://docs.ros.org/en/rolling/Concepts/About-Quality-of-Service-Settings.html#qos-profiles)
@@ -354,7 +339,6 @@ User can set the camera name and camera namespace, to distinguish between camera
- If set to true, the device will reset prior to usage.
- For example: `initial_reset:=true`
- **base_frame_id**: defines the frame_id to which all static transformations refer.
-- **odom_frame_id**: defines the origin coordinate system in ROS convention (X-Forward, Y-Left, Z-Up). pose topic defines the pose relative to that system.
- **clip_distance**:
  - Removes from the depth image all values above a given distance (meters). Disable by giving a negative value (default).
- For example: `clip_distance:=1.5`
@@ -371,13 +355,9 @@ User can set the camera name and camera namespace, to distinguish between camera
- 0 or negative values mean no diagnostics topic is published. Defaults to 0.
The `/diagnostics` topic includes information regarding the device temperatures and actual frequency of the enabled streams.
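For example, a stream profile can also be changed at runtime and the stream re-enabled for it to take effect. The sketch below assumes the default camera name and namespace (so the node path is `/camera/camera`) and that `848x480x30` is a profile supported by the connected device:
```
ros2 param set /camera/camera depth_module.profile 848x480x30
ros2 param set /camera/camera enable_depth false
ros2 param set /camera/camera enable_depth true
```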
-- **publish_odom_tf**: If True (default) publish TF from odom_frame to pose_frame.
-
-
- ROS2(Robot) vs Optical(Camera) Coordination Systems:
-
+## ROS2(Robot) vs Optical(Camera) Coordination Systems:
- Point Of View:
  - Imagine we are standing behind the camera and looking forward.
@@ -393,9 +373,7 @@ The `/diagnostics` topic includes information regarding the device temperatures
-
- TF from coordinate A to coordinate B:
-
+## TF from coordinate A to coordinate B:
- TF msg expresses a transform from coordinate frame "header.frame_id" (source) to the coordinate frame child_frame_id (destination) [Reference](http://docs.ros.org/en/noetic/api/geometry_msgs/html/msg/Transform.html)
- In RealSense cameras, the origin point (0,0,0) is taken from the left IR (infra1) position and is named the "camera_link" frame.
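These transforms can be inspected at runtime with the standard `tf2_ros` echo tool. A minimal sketch, assuming the default `camera` name (so frames such as `camera_link` and `camera_color_optical_frame` exist):
```
ros2 run tf2_ros tf2_echo camera_link camera_color_optical_frame
```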
@@ -407,10 +385,7 @@ The `/diagnostics` topic includes information regarding the device temperatures
-
- Extrinsics from sensor A to sensor B:
-
-
+## Extrinsics from sensor A to sensor B:
- Extrinsic from sensor A to sensor B means the position and orientation of sensor A relative to sensor B.
- Imagine that B is the origin (0,0,0); then the Extrinsics(A->B) describe where sensor A is relative to sensor B.
@@ -445,9 +420,7 @@ translation:
-
- Published Topics
-
+## Published Topics
The published topics differ according to the device and parameters.
After running the above command with a D435i attached, the following list of topics will be available (this is a partial list; for the full one, type `ros2 topic list`):
@@ -489,9 +462,7 @@ Enabling stream adds matching topics. For instance, enabling the gyro and accel
-
- RGBD Topic
-
+## RGBD Topic
RGBD is a new topic that publishes [RGB + Depth] in the same message (see RGBD.msg for reference). For now, it works only with depth aligned to color images, as the color and depth images are synchronized by frame time tag.
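To inspect the message layout and the stream at runtime, the following sketch can be used (the topic name `/camera/camera/rgbd` is assumed from the default camera name and namespace):
```
ros2 interface show realsense2_camera_msgs/msg/RGBD
ros2 topic echo /camera/camera/rgbd --no-arr
```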
@@ -511,9 +482,7 @@ ros2 launch realsense2_camera rs_launch.py enable_rgbd:=true enable_sync:=true a
-
- Metadata topic
-
+## Metadata topic
The metadata messages store the camera's available metadata in *json* format. A dedicated script for echoing a metadata topic at runtime is attached. For instance, use the following command to echo the camera/depth/metadata topic:
```
@@ -522,10 +491,8 @@ python3 src/realsense-ros/realsense2_camera/scripts/echo_metadada.py /camera/cam
-
- Post-Processing Filters
-
-
+## Post-Processing Filters
+
The following post processing filters are available:
- ```align_depth```: If enabled, will publish the depth image aligned to the color image on the topic `/camera/camera/aligned_depth_to_color/image_raw`.
- The pointcloud, if created, will be based on the aligned depth image.
@@ -555,17 +522,13 @@ Each of the above filters have it's own parameters, following the naming convent
-
- Available services
-
+## Available services
- device_info : retrieve information about the device - serial_number, firmware_version etc. Type `ros2 interface show realsense2_camera_msgs/srv/DeviceInfo` for the full list. Call example: `ros2 service call /camera/camera/device_info realsense2_camera_msgs/srv/DeviceInfo`
-
- Efficient intra-process communication:
-
+## Efficient intra-process communication:
Our ROS2 Wrapper node supports zero-copy communications if loaded in the same process as a subscriber node. This can reduce copy times on image/pointcloud topics, especially with big frame resolutions and high FPS.
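As a concrete sketch, the intra-process demo shipped with this wrapper (see `rs_intra_process_demo_launch.py` below) composes the camera node with a frame-latency measuring node in one container; assuming the workspace was built with `colcon build --cmake-args '-DBUILD_TOOLS=ON'` as that launch file requires, it can be started with:
```
ros2 launch realsense2_camera rs_intra_process_demo_launch.py intra_process_comms:=true
```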
diff --git a/realsense2_camera/CMakeLists.txt b/realsense2_camera/CMakeLists.txt
index 0c550a8994..eead7eaf39 100644
--- a/realsense2_camera/CMakeLists.txt
+++ b/realsense2_camera/CMakeLists.txt
@@ -144,34 +144,18 @@ set(SOURCES
if(NOT DEFINED ENV{ROS_DISTRO})
message(FATAL_ERROR "ROS_DISTRO is not defined." )
endif()
-if("$ENV{ROS_DISTRO}" STREQUAL "dashing")
- message(STATUS "Build for ROS2 Dashing")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DDASHING")
- set(SOURCES "${SOURCES}" src/ros_param_backend_dashing.cpp)
-elseif("$ENV{ROS_DISTRO}" STREQUAL "eloquent")
- message(STATUS "Build for ROS2 eloquent")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DELOQUENT")
- set(SOURCES "${SOURCES}" src/ros_param_backend_foxy.cpp)
-elseif("$ENV{ROS_DISTRO}" STREQUAL "foxy")
- message(STATUS "Build for ROS2 Foxy")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DFOXY")
- set(SOURCES "${SOURCES}" src/ros_param_backend_foxy.cpp)
-elseif("$ENV{ROS_DISTRO}" STREQUAL "galactic")
- message(STATUS "Build for ROS2 Galactic")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DGALACTIC")
- set(SOURCES "${SOURCES}" src/ros_param_backend_foxy.cpp)
-elseif("$ENV{ROS_DISTRO}" STREQUAL "humble")
+if("$ENV{ROS_DISTRO}" STREQUAL "humble")
message(STATUS "Build for ROS2 Humble")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DHUMBLE")
- set(SOURCES "${SOURCES}" src/ros_param_backend_foxy.cpp)
+ set(SOURCES "${SOURCES}" src/ros_param_backend.cpp)
elseif("$ENV{ROS_DISTRO}" STREQUAL "iron")
message(STATUS "Build for ROS2 Iron")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DIRON")
- set(SOURCES "${SOURCES}" src/ros_param_backend_foxy.cpp)
+ set(SOURCES "${SOURCES}" src/ros_param_backend.cpp)
elseif("$ENV{ROS_DISTRO}" STREQUAL "rolling")
message(STATUS "Build for ROS2 Rolling")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DROLLING")
- set(SOURCES "${SOURCES}" src/ros_param_backend_rolling.cpp)
+ set(SOURCES "${SOURCES}" src/ros_param_backend.cpp)
else()
message(FATAL_ERROR "Unsupported ROS Distribution: " "$ENV{ROS_DISTRO}")
endif()
@@ -320,6 +304,36 @@ if(BUILD_TESTING)
)
endforeach()
endforeach()
+
+ unset(_pytest_folders)
+
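+  # Query connected devices with rs-enumerate-devices; the live-camera pytest folders below are registered only when a supported device (D415/D435/D455) is detected.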
+ set(rs_query_cmd "rs-enumerate-devices -s")
+ execute_process(COMMAND bash -c ${rs_query_cmd}
+ WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
+ RESULT_VARIABLE rs_result
+ OUTPUT_VARIABLE RS_DEVICE_INFO)
+ message(STATUS "rs_device_info:")
+ message(STATUS "${RS_DEVICE_INFO}")
+ if((RS_DEVICE_INFO MATCHES "D455") OR (RS_DEVICE_INFO MATCHES "D415") OR (RS_DEVICE_INFO MATCHES "D435"))
+    message(STATUS "D415/D435/D455 device found")
+ set(_pytest_live_folders
+ test/live_camera
+ )
+ endif()
+
+ foreach(test_folder ${_pytest_live_folders})
+ file(GLOB files "${test_folder}/test_*.py")
+ foreach(file ${files})
+
+ get_filename_component(_test_name ${file} NAME_WE)
+ ament_add_pytest_test(${_test_name} ${file}
+ APPEND_ENV PYTHONPATH=${CMAKE_CURRENT_BINARY_DIR}:${CMAKE_SOURCE_DIR}/test/utils:${CMAKE_SOURCE_DIR}/launch:${CMAKE_SOURCE_DIR}/scripts
+ TIMEOUT 500
+ WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
+ )
+ endforeach()
+ endforeach()
+
endif()
# Ament exports
diff --git a/realsense2_camera/examples/align_depth/README.md b/realsense2_camera/examples/align_depth/README.md
new file mode 100644
index 0000000000..3b8f13b826
--- /dev/null
+++ b/realsense2_camera/examples/align_depth/README.md
@@ -0,0 +1,12 @@
+# Align Depth to Color
+This example shows how to start the camera node and align the depth stream to the color stream.
+```
+ros2 launch realsense2_camera rs_align_depth_launch.py
+```
+
+The aligned image will be published to the topic "/camera/camera/aligned_depth_to_color/image_raw".
+
+Alternatively, align depth to color can be enabled with the following command:
+```
+ros2 launch realsense2_camera rs_launch.py align_depth.enable:=true
+```
diff --git a/realsense2_camera/examples/dual_camera/README.md b/realsense2_camera/examples/dual_camera/README.md
new file mode 100644
index 0000000000..03a7cfa08f
--- /dev/null
+++ b/realsense2_camera/examples/dual_camera/README.md
@@ -0,0 +1,123 @@
+# Launching Dual RS ROS2 nodes
+The following example launches two RS ROS2 nodes.
+```
+ros2 launch realsense2_camera rs_dual_camera_launch.py serial_no1:=<serial number of the first camera> serial_no2:=<serial number of the second camera>
+```
+
+## Example:
+Let's say the serial numbers of two RS cameras are 207322251310 and 234422060144.
+```
+ros2 launch realsense2_camera rs_dual_camera_launch.py serial_no1:="'207322251310'" serial_no2:="'234422060144'"
+```
+or
+```
+ros2 launch realsense2_camera rs_dual_camera_launch.py serial_no1:=_207322251310 serial_no2:=_234422060144
+```
+
+## How to know the serial number?
+Method 1: Using the rs-enumerate-devices tool
+```
+rs-enumerate-devices | grep "Serial Number"
+```
+
+Method 2: Connect a single camera and run
+```
+ros2 launch realsense2_camera rs_launch.py
+```
+and look for the serial number in the log printed to screen under "[INFO][...] Device Serial No:".
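+If the log is long, the relevant line can be filtered, for example (the node keeps running; stop it with Ctrl-C once the line appears):
+```
+ros2 launch realsense2_camera rs_launch.py 2>&1 | grep "Device Serial No"
+```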
+
+# Using multiple RS cameras by launching each in a different terminal
+Make sure you set a different name and namespace for each camera.
+
+Terminal 1:
+```
+ros2 launch realsense2_camera rs_launch.py serial_no:="'207322251310'" camera_name:='camera1' camera_namespace:='camera1'
+```
+Terminal 2:
+```
+ros2 launch realsense2_camera rs_launch.py serial_no:="'234422060144'" camera_name:='camera2' camera_namespace:='camera2'
+```
+
+# Multiple cameras showing a semi-unified pointcloud
+The D430 series of RealSense cameras uses a stereo-based algorithm to calculate depth, which means a couple of cameras can operate on the same scene. For the purpose of this demonstration, let's say 2 cameras are coupled to look at the same scene from 2 different points of view. See image:
+
+![multi_cameras](https://user-images.githubusercontent.com/127019120/268692789-1b3d5d8b-a41f-4a97-995d-81d44b4bcacb.jpg)
+
+The schematic settings could be described as:
+```
+X---------------------------------> cam_2
+|              (70 cm)
+|
+|
+|  (60 cm)
+|
+|
+/
+cam_1
+```
+
+The cameras have no data regarding their relative position; that is up to a third-party program to set. To simplify things, the coordinate system of cam_1 can be considered the reference coordinate system for the whole scene.
+
+The estimated translation of cam_2 from cam_1 is 70 cm on the X-axis and 60 cm on the Y-axis, and the estimated yaw angle of cam_2 relative to cam_1 is 90 degrees clockwise. These are the initial parameters for setting the transformation between the 2 cameras, as follows:
+
+```
+ros2 launch realsense2_camera rs_dual_camera_launch.py serial_no1:=_207322251310 serial_no2:=_234422060144 tf.translation.x:=0.7 tf.translation.y:=0.6 tf.translation.z:=0.0 tf.rotation.yaw:=-90.0 tf.rotation.pitch:=0.0 tf.rotation.roll:=0.0
+```
+
+If the unified pointcloud result is not good, follow the steps below to fine-tune the calibration.
+
+## Visualizing the pointclouds and fine-tuning the camera calibration
+Launch 2 cameras in separate terminals:
+
+**Terminal 1:**
+```
+ros2 launch realsense2_camera rs_launch.py serial_no:="'207322251310'" camera_name:='camera1' camera_namespace:='camera1'
+```
+**Terminal 2:**
+```
+ros2 launch realsense2_camera rs_launch.py serial_no:="'234422060144'" camera_name:='camera2' camera_namespace:='camera2'
+```
+**Terminal 3:**
+```
+rviz2
+```
+Open rviz and:
+- Set 'Fixed Frame' to camera1_link
+- Add PointCloud2 -> By topic -> /camera1/camera1/depth/color/points
+- Add PointCloud2 -> By topic -> /camera2/camera2/depth/color/points
+
+**Terminal 4:**
+Run the 'set_cams_transforms.py' tool. It can be used to fine-tune the calibration.
+```
+python src/realsense-ros/realsense2_camera/scripts/set_cams_transforms.py camera1_link camera2_link 0.7 0.6 0 -90 0 0
+```
+
+**Instructions printed by the tool:**
+```
+Using default file /home/user_name/ros2_ws/src/realsense-ros/realsense2_camera/scripts/_set_cams_info_file.txt
+
+Use given initial values.
+
+Press the following keys to change mode: x, y, z, (a)zimuth, (p)itch, (r)oll
+
+For each mode, press 6 to increase by step and 4 to decrease
+
+Press + to multiply step by 2 or - to divide
+
+Press Q to quit
+```
+
+Note that the tool prints the path of the current configuration file. It automatically saves its last configuration, to be reused on the next run.
+
+After a lot of fiddling around, the unified pointcloud looked better with the following calibration:
+```
+x = 0.75
+y = 0.575
+z = 0
+azimuth = -91.25
+pitch = 0.75
+roll = 0
+```
+
+Now, use the above results in the launch command:
+```
+ros2 launch realsense2_camera rs_dual_camera_launch.py serial_no1:=_207322251310 serial_no2:=_234422060144 tf.translation.x:=0.75 tf.translation.y:=0.575 tf.translation.z:=0.0 tf.rotation.yaw:=-91.25 tf.rotation.pitch:=0.75 tf.rotation.roll:=0.0
+```
+
diff --git a/realsense2_camera/examples/dual_camera/rs_dual_camera_launch.py b/realsense2_camera/examples/dual_camera/rs_dual_camera_launch.py
index 4a4db4a275..81643d2c7e 100644
--- a/realsense2_camera/examples/dual_camera/rs_dual_camera_launch.py
+++ b/realsense2_camera/examples/dual_camera/rs_dual_camera_launch.py
@@ -19,7 +19,7 @@
# For each device, the parameter name was changed to include an index.
# For example: to set camera_name for device1 set parameter camera_name1.
# command line example:
-# ros2 launch realsense2_camera rs_dual_camera_launch.py camera_name1:=D400 device_type2:=l5. device_type1:=d4..
+# ros2 launch realsense2_camera rs_dual_camera_launch.py serial_no1:=<serial number of the first camera> serial_no2:=<serial number of the second camera>
"""Launch realsense2_camera node."""
import copy
@@ -68,7 +68,7 @@ def duplicate_params(general_params, posix):
return local_params
def launch_static_transform_publisher_node(context : LaunchContext):
- # dummy static transformation from camera1 to camera2
+ # Static transformation from camera1 to camera2
node = launch_ros.actions.Node(
name = "my_static_transform_publisher",
package = "tf2_ros",
@@ -104,6 +104,6 @@ def generate_launch_description():
namespace='',
executable='rviz2',
name='rviz2',
- arguments=['-d', [ThisLaunchFileDir(), '/dual_camera_pointcloud.rviz']]
+ arguments=['-d', [ThisLaunchFileDir(), '/rviz/dual_camera_pointcloud.rviz']]
)
])
diff --git a/realsense2_camera/examples/dual_camera/dual_camera_pointcloud.rviz b/realsense2_camera/examples/dual_camera/rviz/dual_camera_pointcloud.rviz
similarity index 98%
rename from realsense2_camera/examples/dual_camera/dual_camera_pointcloud.rviz
rename to realsense2_camera/examples/dual_camera/rviz/dual_camera_pointcloud.rviz
index 3469eda9d8..1a4c0caa11 100644
--- a/realsense2_camera/examples/dual_camera/dual_camera_pointcloud.rviz
+++ b/realsense2_camera/examples/dual_camera/rviz/dual_camera_pointcloud.rviz
@@ -74,7 +74,7 @@ Visualization Manager:
Filter size: 10
History Policy: Keep Last
Reliability Policy: Reliable
- Value: /camera1/depth/color/points
+ Value: /camera1/camera1/depth/color/points
Use Fixed Frame: true
Use rainbow: true
Value: true
@@ -108,7 +108,7 @@ Visualization Manager:
Filter size: 10
History Policy: Keep Last
Reliability Policy: Reliable
- Value: /camera2/depth/color/points
+ Value: /camera2/camera2/depth/color/points
Use Fixed Frame: true
Use rainbow: true
Value: true
diff --git a/realsense2_camera/examples/launch_from_rosbag/README.md b/realsense2_camera/examples/launch_from_rosbag/README.md
new file mode 100644
index 0000000000..5bfbb017f5
--- /dev/null
+++ b/realsense2_camera/examples/launch_from_rosbag/README.md
@@ -0,0 +1,17 @@
+# Launching RS ROS2 node from rosbag File
+The following example streams from a rosbag file, recorded by the Intel RealSense Viewer, instead of streaming live from a camera. It can be used for testing and for repeating the same sequence.
+```
+ros2 launch realsense2_camera rs_launch_from_rosbag.py
+```
+By default, the 'rs_launch_from_rosbag.py' launch file uses the "/rosbag/D435i_Depth_and_IMU_Stands_still.bag" rosbag file.
+
+The user can also provide a different rosbag file through the command line as follows:
+```
+ros2 launch realsense2_camera rs_launch_from_rosbag.py rosbag_filename:="/full/path/to/rosbag/file"
+```
+or
+```
+ros2 launch realsense2_camera rs_launch.py rosbag_filename:="/full/path/to/rosbag/file"
+```
+
+Check out [sample-recordings](https://github.com/IntelRealSense/librealsense/blob/master/doc/sample-data.md) for a few recorded samples.
\ No newline at end of file
diff --git a/realsense2_camera/examples/launch_from_rosbag/rs_launch_from_rosbag.py b/realsense2_camera/examples/launch_from_rosbag/rs_launch_from_rosbag.py
index ad9b76667b..a0d9620cad 100644
--- a/realsense2_camera/examples/launch_from_rosbag/rs_launch_from_rosbag.py
+++ b/realsense2_camera/examples/launch_from_rosbag/rs_launch_from_rosbag.py
@@ -37,7 +37,7 @@
{'name': 'enable_depth', 'default': 'true', 'description': 'enable depth stream'},
{'name': 'enable_gyro', 'default': 'true', 'description': "'enable gyro stream'"},
{'name': 'enable_accel', 'default': 'true', 'description': "'enable accel stream'"},
- {'name': 'rosbag_filename', 'default': [ThisLaunchFileDir(), "/D435i_Depth_and_IMU_Stands_still.bag"], 'description': 'A realsense bagfile to run from as a device'},
+ {'name': 'rosbag_filename', 'default': [ThisLaunchFileDir(), "/rosbag/D435i_Depth_and_IMU_Stands_still.bag"], 'description': 'A realsense bagfile to run from as a device'},
]
def set_configurable_parameters(local_params):
diff --git a/realsense2_camera/examples/launch_params_from_file/README.md b/realsense2_camera/examples/launch_params_from_file/README.md
new file mode 100644
index 0000000000..ce9980aa5b
--- /dev/null
+++ b/realsense2_camera/examples/launch_params_from_file/README.md
@@ -0,0 +1,33 @@
+# Get RS ROS2 node params from YAML file
+The following example gets the RS ROS2 node params from a YAML file.
+```
+ros2 launch realsense2_camera rs_launch_get_params_from_yaml.py
+```
+
+By default, the 'rs_launch_get_params_from_yaml.py' launch file uses the "/config/config.yaml" YAML file.
+
+The user can provide a different YAML file through the command line as follows:
+```
+ros2 launch realsense2_camera rs_launch_get_params_from_yaml.py config_file:="/full/path/to/config/file"
+```
+or
+```
+ros2 launch realsense2_camera rs_launch.py config_file:="/full/path/to/config/file"
+```
+
+## Syntax for defining params in YAML file
+```
+param1: value
+param2: value
+```
+
+Example:
+```
+enable_color: true
+rgb_camera.profile: 1280x720x15
+enable_depth: true
+align_depth.enable: true
+enable_sync: true
+publish_tf: true
+tf_publish_rate: 1.0
+```
\ No newline at end of file
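+For instance, the example config shipped in this folder can be passed directly (the path assumes the usual `src/realsense-ros` checkout and that the command is run from the workspace root):
+```
+ros2 launch realsense2_camera rs_launch.py config_file:=src/realsense-ros/realsense2_camera/examples/launch_params_from_file/config/config.yaml
+```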
diff --git a/realsense2_camera/examples/launch_params_from_file/config.yaml b/realsense2_camera/examples/launch_params_from_file/config/config.yaml
similarity index 74%
rename from realsense2_camera/examples/launch_params_from_file/config.yaml
rename to realsense2_camera/examples/launch_params_from_file/config/config.yaml
index 8b8bcc1709..d15a19a0fe 100644
--- a/realsense2_camera/examples/launch_params_from_file/config.yaml
+++ b/realsense2_camera/examples/launch_params_from_file/config/config.yaml
@@ -3,4 +3,5 @@ rgb_camera.profile: 1280x720x15
enable_depth: true
align_depth.enable: true
enable_sync: true
-
+publish_tf: true
+tf_publish_rate: 1.0
diff --git a/realsense2_camera/examples/launch_params_from_file/rs_launch_get_params_from_yaml.py b/realsense2_camera/examples/launch_params_from_file/rs_launch_get_params_from_yaml.py
index a6b5582039..c0ebd7bfbd 100644
--- a/realsense2_camera/examples/launch_params_from_file/rs_launch_get_params_from_yaml.py
+++ b/realsense2_camera/examples/launch_params_from_file/rs_launch_get_params_from_yaml.py
@@ -34,7 +34,7 @@
local_parameters = [{'name': 'camera_name', 'default': 'camera', 'description': 'camera unique name'},
{'name': 'camera_namespace', 'default': 'camera', 'description': 'camera namespace'},
- {'name': 'config_file', 'default': [ThisLaunchFileDir(), "/config.yaml"], 'description': 'yaml config file'},
+ {'name': 'config_file', 'default': [ThisLaunchFileDir(), "/config/config.yaml"], 'description': 'yaml config file'},
]
def set_configurable_parameters(local_params):
diff --git a/realsense2_camera/examples/pointcloud/README.md b/realsense2_camera/examples/pointcloud/README.md
new file mode 100644
index 0000000000..1285e0a843
--- /dev/null
+++ b/realsense2_camera/examples/pointcloud/README.md
@@ -0,0 +1,17 @@
+# PointCloud Visualization
+The following example starts the camera and simultaneously opens the RViz GUI to visualize the published pointcloud.
+```
+ros2 launch realsense2_camera rs_pointcloud_launch.py
+```
+
+Alternatively, start the camera in terminal 1:
+```
+ros2 launch realsense2_camera rs_launch.py pointcloud.enable:=true
+```
+and in terminal 2 open rviz to visualize the pointcloud.
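+For example (the path assumes the source layout used by the other examples in this repo), rviz can be started with the provided config, which already subscribes to `/camera/camera/depth/color/points`:
+```
+rviz2 -d src/realsense-ros/realsense2_camera/examples/pointcloud/rviz/pointcloud.rviz
+```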
+
+# PointCloud with different coordinate systems
+This example opens rviz and shows the camera model with its different coordinate systems, presenting the pointcloud and the camera together.
+```
+ros2 launch realsense2_camera rs_d455_pointcloud_launch.py
+```
\ No newline at end of file
diff --git a/realsense2_camera/examples/pointcloud/rs_d455_pointcloud_launch.py b/realsense2_camera/examples/pointcloud/rs_d455_pointcloud_launch.py
index 4f013b1c87..85ee0ff48d 100644
--- a/realsense2_camera/examples/pointcloud/rs_d455_pointcloud_launch.py
+++ b/realsense2_camera/examples/pointcloud/rs_d455_pointcloud_launch.py
@@ -77,7 +77,7 @@ def generate_launch_description():
namespace='',
executable='rviz2',
name='rviz2',
- arguments=['-d', [ThisLaunchFileDir(), '/urdf_pointcloud.rviz']],
+ arguments=['-d', [ThisLaunchFileDir(), '/rviz/urdf_pointcloud.rviz']],
output='screen',
parameters=[{'use_sim_time': False}]
),
diff --git a/realsense2_camera/examples/pointcloud/rs_pointcloud_launch.py b/realsense2_camera/examples/pointcloud/rs_pointcloud_launch.py
index f0a5b541e0..f0acfe80b2 100644
--- a/realsense2_camera/examples/pointcloud/rs_pointcloud_launch.py
+++ b/realsense2_camera/examples/pointcloud/rs_pointcloud_launch.py
@@ -57,6 +57,6 @@ def generate_launch_description():
namespace='',
executable='rviz2',
name='rviz2',
- arguments=['-d', [ThisLaunchFileDir(), '/pointcloud.rviz']]
+ arguments=['-d', [ThisLaunchFileDir(), '/rviz/pointcloud.rviz']]
)
])
diff --git a/realsense2_camera/examples/pointcloud/pointcloud.rviz b/realsense2_camera/examples/pointcloud/rviz/pointcloud.rviz
similarity index 69%
rename from realsense2_camera/examples/pointcloud/pointcloud.rviz
rename to realsense2_camera/examples/pointcloud/rviz/pointcloud.rviz
index 055431f228..bb8955146f 100644
--- a/realsense2_camera/examples/pointcloud/pointcloud.rviz
+++ b/realsense2_camera/examples/pointcloud/rviz/pointcloud.rviz
@@ -8,14 +8,12 @@ Panels:
- /Status1
- /Grid1
- /PointCloud21
- - /Image1
Splitter Ratio: 0.5
- Tree Height: 222
+ Tree Height: 382
- Class: rviz_common/Selection
Name: Selection
- Class: rviz_common/Tool Properties
Expanded:
- - /2D Nav Goal1
- /Publish Point1
Name: Tool Properties
Splitter Ratio: 0.5886790156364441
@@ -65,13 +63,17 @@ Visualization Manager:
Min Intensity: 0
Name: PointCloud2
Position Transformer: XYZ
- Queue Size: 10
Selectable: true
Size (Pixels): 3
Size (m): 0.009999999776482582
Style: Flat Squares
- Topic: /camera/depth/color/points
- Unreliable: false
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ Filter size: 10
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/depth/color/points
Use Fixed Frame: true
Use rainbow: true
Value: true
@@ -82,9 +84,12 @@ Visualization Manager:
Min Value: 0
Name: Image
Normalize Range: true
- Queue Size: 10
- Topic: /camera/color/image_raw
- Unreliable: false
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/color/image_raw
Value: true
- Class: rviz_default_plugins/Image
Enabled: true
@@ -93,31 +98,12 @@ Visualization Manager:
Min Value: 0
Name: Image
Normalize Range: true
- Queue Size: 10
- Topic: /camera/depth/image_rect_raw
- Unreliable: false
- Value: true
- - Class: rviz_default_plugins/Image
- Enabled: true
- Max Value: 1
- Median window: 5
- Min Value: 0
- Name: Image
- Normalize Range: true
- Queue Size: 10
- Topic: /camera/infra1/image_rect_raw
- Unreliable: false
- Value: true
- - Class: rviz_default_plugins/Image
- Enabled: true
- Max Value: 1
- Median window: 5
- Min Value: 0
- Name: Image
- Normalize Range: true
- Queue Size: 10
- Topic: /camera/infra2/image_rect_raw
- Unreliable: false
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/depth/image_rect_raw
Value: true
Enabled: true
Global Options:
@@ -132,12 +118,30 @@ Visualization Manager:
- Class: rviz_default_plugins/Measure
Line color: 128; 128; 0
- Class: rviz_default_plugins/SetInitialPose
- Topic: /initialpose
+ Covariance x: 0.25
+ Covariance y: 0.25
+ Covariance yaw: 0.06853891909122467
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /initialpose
- Class: rviz_default_plugins/SetGoal
- Topic: /move_base_simple/goal
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /move_base_simple/goal
- Class: rviz_default_plugins/PublishPoint
Single click: true
- Topic: /clicked_point
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /clicked_point
Transformation:
Current:
Class: rviz_default_plugins/TF
@@ -168,18 +172,18 @@ Visualization Manager:
Window Geometry:
Displays:
collapsed: false
- Height: 1025
+ Height: 1016
Hide Left Dock: false
Hide Right Dock: true
Image:
collapsed: false
- QMainWindow State: 000000ff00000000fd000000040000000000000156000003a7fc0200000010fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d0000011b000000c900fffffffb0000000a0049006d006100670065010000015e0000009b0000002800fffffffb0000000a0049006d00610067006501000001ff000000b20000002800fffffffb0000000a0049006d00610067006501000002b70000009a0000002800fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000a0049006d00610067006501000003570000008d0000002800fffffffb0000000a0049006d00610067006501000001940000005d0000000000000000fb0000000a0049006d00610067006501000001f70000007a0000000000000000fb0000000a0049006d00610067006501000002770000009d0000000000000000fb0000000a0049006d006100670065010000031a000000ca0000000000000000000000010000010f000001effc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073000000003d000001ef000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b20000000000000000000000020000073d000000a9fc0100000002fb0000000a0049006d00610067006503000001c5000000bb000001f8000001b0fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004420000003efc0100000002fb0000000800540069006d00650100000000000004420000000000000000fb0000000800540069006d00650100000000000004500000000000000000000005e1000003a700000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
+ QMainWindow State: 000000ff00000000fd0000000400000000000001560000039efc0200000010fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d000001bb000000c900fffffffb0000000a0049006d006100670065010000010c0000001c0000000000000000fb0000000a0049006d006100670065010000012e0000001f0000000000000000fb0000000a0049006d00610067006501000001530000001c0000000000000000fb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000a0049006d0061006700650100000175000000190000000000000000fb0000000a0049006d00610067006501000001fe000000cd0000002800fffffffb0000000a0049006d00610067006501000002d10000010a0000002800fffffffb0000000a0049006d0061006700650000000233000000b70000000000000000fb0000000a0049006d00610067006501000002b2000001290000000000000000000000010000010f000001effc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073000000003d000001ef000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b20000000000000000000000020000073d000000a9fc0100000002fb0000000a0049006d00610067006503000001c5000000bb000001f8000001b0fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004420000003efc0100000002fb0000000800540069006d00650100000000000004420000000000000000fb0000000800540069006d00650100000000000004500000000000000000000005da0000039e00000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
Selection:
collapsed: false
Tool Properties:
collapsed: false
Views:
collapsed: true
- Width: 1853
- X: 67
+ Width: 1846
+ X: 74
Y: 27
diff --git a/realsense2_camera/examples/pointcloud/urdf_pointcloud.rviz b/realsense2_camera/examples/pointcloud/rviz/urdf_pointcloud.rviz
similarity index 92%
rename from realsense2_camera/examples/pointcloud/urdf_pointcloud.rviz
rename to realsense2_camera/examples/pointcloud/rviz/urdf_pointcloud.rviz
index 500e60114e..294917ad48 100644
--- a/realsense2_camera/examples/pointcloud/urdf_pointcloud.rviz
+++ b/realsense2_camera/examples/pointcloud/rviz/urdf_pointcloud.rviz
@@ -8,12 +8,12 @@ Panels:
- /Status1
- /RobotModel1
- /RobotModel1/Description Topic1
- - /TF1
- /TF1/Frames1
+ - /PointCloud21
- /Image1
- /Image2
- Splitter Ratio: 0.6360294222831726
- Tree Height: 362
+ Splitter Ratio: 0.36195287108421326
+ Tree Height: 308
- Class: rviz_common/Selection
Name: Selection
- Class: rviz_common/Tool Properties
@@ -245,7 +245,7 @@ Visualization Manager:
Filter size: 10
History Policy: Keep Last
Reliability Policy: Reliable
- Value: /camera/depth/color/points
+ Value: /camera/camera/depth/color/points
Use Fixed Frame: true
Use rainbow: true
Value: true
@@ -261,7 +261,7 @@ Visualization Manager:
Durability Policy: Volatile
History Policy: Keep Last
Reliability Policy: Reliable
- Value: /camera/color/image_raw
+ Value: /camera/camera/color/image_raw
Value: true
- Class: rviz_default_plugins/Image
Enabled: true
@@ -275,7 +275,7 @@ Visualization Manager:
Durability Policy: Volatile
History Policy: Keep Last
Reliability Policy: Reliable
- Value: /camera/depth/image_rect_raw
+ Value: /camera/camera/depth/image_rect_raw
Value: true
Enabled: true
Global Options:
@@ -323,7 +323,7 @@ Visualization Manager:
Views:
Current:
Class: rviz_default_plugins/Orbit
- Distance: 3.9530911445617676
+ Distance: 4.639365196228027
Enable Stereo Rendering:
Stereo Eye Separation: 0.05999999865889549
Stereo Focal Distance: 1
@@ -351,7 +351,7 @@ Window Geometry:
Hide Right Dock: false
Image:
collapsed: false
- QMainWindow State: 000000ff00000000fd0000000400000000000001560000039efc020000000afb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d000001f5000000c900fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000a0049006d0061006700650100000238000000b00000002800fffffffb0000000a0049006d00610067006501000002ee000000ed0000002800ffffff000000010000010f0000039efc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073010000003d0000039e000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004420000003efc0100000002fb0000000800540069006d00650100000000000004420000000000000000fb0000000800540069006d00650100000000000004500000000000000000000004c50000039e00000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
+ QMainWindow State: 000000ff00000000fd0000000400000000000002540000039efc020000000cfb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d000001bf000000c900fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000a0049006d0061006700650100000132000000b00000000000000000fb0000000a0049006d00610067006501000001e8000000ed0000000000000000fb0000000a0049006d0061006700650100000202000000e90000002800fffffffb0000000a0049006d00610067006501000002f1000000ea0000002800ffffff000000010000010f0000039efc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073010000003d0000039e000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004420000003efc0100000002fb0000000800540069006d00650100000000000004420000000000000000fb0000000800540069006d00650100000000000004500000000000000000000003c70000039e00000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
Selection:
collapsed: false
Tool Properties:
diff --git a/realsense2_camera/include/base_realsense_node.h b/realsense2_camera/include/base_realsense_node.h
index a1cfe45037..34c5e8ebae 100755
--- a/realsense2_camera/include/base_realsense_node.h
+++ b/realsense2_camera/include/base_realsense_node.h
@@ -39,8 +39,6 @@
#include
#include
#include
-#include
-#include
#include
#include
@@ -249,7 +247,6 @@ namespace realsense2_camera
void FillImuData_LinearInterpolation(const CimuData imu_data, std::deque& imu_msgs);
void imu_callback(rs2::frame frame);
void imu_callback_sync(rs2::frame frame, imu_sync_method sync_method=imu_sync_method::COPY);
- void pose_callback(rs2::frame frame);
void multiple_message_callback(rs2::frame frame, imu_sync_method sync_method);
void frame_callback(rs2::frame frame);
@@ -294,7 +291,6 @@ namespace realsense2_camera
std::map> _image_publishers;
std::map::SharedPtr> _imu_publishers;
- std::shared_ptr> _odom_publisher;
std::shared_ptr _synced_imu_publisher;
std::map::SharedPtr> _info_publishers;
std::map::SharedPtr> _metadata_publishers;
@@ -317,7 +313,6 @@ namespace realsense2_camera
bool _is_accel_enabled;
bool _is_gyro_enabled;
bool _pointcloud;
- bool _publish_odom_tf;
imu_sync_method _imu_sync_method;
stream_index_pair _pointcloud_texture;
PipelineSyncer _syncer;
diff --git a/realsense2_camera/include/constants.h b/realsense2_camera/include/constants.h
index cd53468e0f..0f99fee585 100644
--- a/realsense2_camera/include/constants.h
+++ b/realsense2_camera/include/constants.h
@@ -31,19 +31,7 @@
#define ROS_WARN(...) RCLCPP_WARN(_logger, __VA_ARGS__)
#define ROS_ERROR(...) RCLCPP_ERROR(_logger, __VA_ARGS__)
-#ifdef DASHING
-// Based on: https://docs.ros2.org/dashing/api/rclcpp/logging_8hpp.html
-#define MSG(msg) (static_cast(std::ostringstream() << msg)).str()
-#define ROS_DEBUG_STREAM(msg) RCLCPP_DEBUG(_logger, MSG(msg))
-#define ROS_INFO_STREAM(msg) RCLCPP_INFO(_logger, MSG(msg))
-#define ROS_WARN_STREAM(msg) RCLCPP_WARN(_logger, MSG(msg))
-#define ROS_ERROR_STREAM(msg) RCLCPP_ERROR(_logger, MSG(msg))
-#define ROS_FATAL_STREAM(msg) RCLCPP_FATAL(_logger, MSG(msg))
-#define ROS_DEBUG_STREAM_ONCE(msg) RCLCPP_DEBUG_ONCE(_logger, MSG(msg))
-#define ROS_INFO_STREAM_ONCE(msg) RCLCPP_INFO_ONCE(_logger, MSG(msg))
-#define ROS_WARN_STREAM_COND(cond, msg) RCLCPP_WARN_EXPRESSION(_logger, cond, MSG(msg))
-#else
-// Based on: https://docs.ros2.org/foxy/api/rclcpp/logging_8hpp.html
+// Based on: https://docs.ros2.org/latest/api/rclcpp/logging_8hpp.html
#define ROS_DEBUG_STREAM(msg) RCLCPP_DEBUG_STREAM(_logger, msg)
#define ROS_INFO_STREAM(msg) RCLCPP_INFO_STREAM(_logger, msg)
#define ROS_WARN_STREAM(msg) RCLCPP_WARN_STREAM(_logger, msg)
@@ -52,15 +40,12 @@
#define ROS_DEBUG_STREAM_ONCE(msg) RCLCPP_DEBUG_STREAM_ONCE(_logger, msg)
#define ROS_INFO_STREAM_ONCE(msg) RCLCPP_INFO_STREAM_ONCE(_logger, msg)
#define ROS_WARN_STREAM_COND(cond, msg) RCLCPP_WARN_STREAM_EXPRESSION(_logger, cond, msg)
-#endif
#define ROS_WARN_ONCE(msg) RCLCPP_WARN_ONCE(_logger, msg)
#define ROS_WARN_COND(cond, ...) RCLCPP_WARN_EXPRESSION(_logger, cond, __VA_ARGS__)
namespace realsense2_camera
{
- const uint16_t SR300_PID = 0x0aa5; // SR300
- const uint16_t SR300v2_PID = 0x0B48; // SR305
const uint16_t RS400_PID = 0x0ad1; // PSR
const uint16_t RS410_PID = 0x0ad2; // ASR
const uint16_t RS415_PID = 0x0ad3; // ASRC
@@ -80,11 +65,7 @@ namespace realsense2_camera
const uint16_t RS430i_PID = 0x0b4b; // D430i
const uint16_t RS405_PID = 0x0B5B; // DS5U
const uint16_t RS455_PID = 0x0B5C; // D455
- const uint16_t RS457_PID = 0xABCD; // D457
- const uint16_t RS_L515_PID_PRE_PRQ = 0x0B3D; //
- const uint16_t RS_L515_PID = 0x0B64; //
- const uint16_t RS_L535_PID = 0x0b68;
-
+ const uint16_t RS457_PID = 0xABCD; // D457
const bool ALLOW_NO_TEXTURE_POINTS = false;
const bool ORDERED_PC = false;
@@ -100,15 +81,10 @@ namespace realsense2_camera
const std::string HID_QOS = "SENSOR_DATA";
const bool HOLD_BACK_IMU_FOR_FRAMES = false;
- const bool PUBLISH_ODOM_TF = true;
const std::string DEFAULT_BASE_FRAME_ID = "link";
- const std::string DEFAULT_ODOM_FRAME_ID = "odom_frame";
const std::string DEFAULT_IMU_OPTICAL_FRAME_ID = "camera_imu_optical_frame";
- const std::string DEFAULT_UNITE_IMU_METHOD = "";
- const std::string DEFAULT_FILTERS = "";
-
const float ROS_DEPTH_SCALE = 0.001;
static const rmw_qos_profile_t rmw_qos_profile_latched =
diff --git a/realsense2_camera/include/image_publisher.h b/realsense2_camera/include/image_publisher.h
index 6bc0bab8e6..3d7d004c74 100644
--- a/realsense2_camera/include/image_publisher.h
+++ b/realsense2_camera/include/image_publisher.h
@@ -17,11 +17,7 @@
#include
#include
-#if defined( DASHING ) || defined( ELOQUENT )
-#include
-#else
#include
-#endif
namespace realsense2_camera {
class image_publisher
diff --git a/realsense2_camera/include/profile_manager.h b/realsense2_camera/include/profile_manager.h
index b753ff67a9..3021e95f6c 100644
--- a/realsense2_camera/include/profile_manager.h
+++ b/realsense2_camera/include/profile_manager.h
@@ -103,12 +103,4 @@ namespace realsense2_camera
protected:
std::map > _fps;
};
-
- class PoseProfilesManager : public MotionProfilesManager
- {
- public:
- using MotionProfilesManager::MotionProfilesManager;
- void registerProfileParameters(std::vector all_profiles, std::function update_sensor_func) override;
- };
-
}
diff --git a/realsense2_camera/include/ros_utils.h b/realsense2_camera/include/ros_utils.h
index 4df1def396..feccd4647d 100644
--- a/realsense2_camera/include/ros_utils.h
+++ b/realsense2_camera/include/ros_utils.h
@@ -31,12 +31,8 @@ namespace realsense2_camera
const stream_index_pair INFRA0{RS2_STREAM_INFRARED, 0};
const stream_index_pair INFRA1{RS2_STREAM_INFRARED, 1};
const stream_index_pair INFRA2{RS2_STREAM_INFRARED, 2};
- const stream_index_pair FISHEYE{RS2_STREAM_FISHEYE, 0};
- const stream_index_pair FISHEYE1{RS2_STREAM_FISHEYE, 1};
- const stream_index_pair FISHEYE2{RS2_STREAM_FISHEYE, 2};
const stream_index_pair GYRO{RS2_STREAM_GYRO, 0};
const stream_index_pair ACCEL{RS2_STREAM_ACCEL, 0};
- const stream_index_pair POSE{RS2_STREAM_POSE, 0};
bool isValidCharInName(char c);
diff --git a/realsense2_camera/launch/default.rviz b/realsense2_camera/launch/default.rviz
index 055431f228..c4374772c6 100644
--- a/realsense2_camera/launch/default.rviz
+++ b/realsense2_camera/launch/default.rviz
@@ -9,13 +9,15 @@ Panels:
- /Grid1
- /PointCloud21
- /Image1
+ - /Image2
+ - /Image3
+ - /Image4
Splitter Ratio: 0.5
- Tree Height: 222
+ Tree Height: 190
- Class: rviz_common/Selection
Name: Selection
- Class: rviz_common/Tool Properties
Expanded:
- - /2D Nav Goal1
- /Publish Point1
Name: Tool Properties
Splitter Ratio: 0.5886790156364441
@@ -65,13 +67,17 @@ Visualization Manager:
Min Intensity: 0
Name: PointCloud2
Position Transformer: XYZ
- Queue Size: 10
Selectable: true
Size (Pixels): 3
Size (m): 0.009999999776482582
Style: Flat Squares
- Topic: /camera/depth/color/points
- Unreliable: false
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ Filter size: 10
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/depth/color/points
Use Fixed Frame: true
Use rainbow: true
Value: true
@@ -82,9 +88,12 @@ Visualization Manager:
Min Value: 0
Name: Image
Normalize Range: true
- Queue Size: 10
- Topic: /camera/color/image_raw
- Unreliable: false
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/color/image_raw
Value: true
- Class: rviz_default_plugins/Image
Enabled: true
@@ -93,9 +102,12 @@ Visualization Manager:
Min Value: 0
Name: Image
Normalize Range: true
- Queue Size: 10
- Topic: /camera/depth/image_rect_raw
- Unreliable: false
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/depth/image_rect_raw
Value: true
- Class: rviz_default_plugins/Image
Enabled: true
@@ -104,9 +116,12 @@ Visualization Manager:
Min Value: 0
Name: Image
Normalize Range: true
- Queue Size: 10
- Topic: /camera/infra1/image_rect_raw
- Unreliable: false
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/infra1/image_rect_raw
Value: true
- Class: rviz_default_plugins/Image
Enabled: true
@@ -115,9 +130,12 @@ Visualization Manager:
Min Value: 0
Name: Image
Normalize Range: true
- Queue Size: 10
- Topic: /camera/infra2/image_rect_raw
- Unreliable: false
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/infra2/image_rect_raw
Value: true
Enabled: true
Global Options:
@@ -132,12 +150,30 @@ Visualization Manager:
- Class: rviz_default_plugins/Measure
Line color: 128; 128; 0
- Class: rviz_default_plugins/SetInitialPose
- Topic: /initialpose
+ Covariance x: 0.25
+ Covariance y: 0.25
+ Covariance yaw: 0.06853891909122467
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /initialpose
- Class: rviz_default_plugins/SetGoal
- Topic: /move_base_simple/goal
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /move_base_simple/goal
- Class: rviz_default_plugins/PublishPoint
Single click: true
- Topic: /clicked_point
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /clicked_point
Transformation:
Current:
Class: rviz_default_plugins/TF
@@ -145,7 +181,7 @@ Visualization Manager:
Views:
Current:
Class: rviz_default_plugins/Orbit
- Distance: 1.0510121583938599
+ Distance: 3.677529811859131
Enable Stereo Rendering:
Stereo Eye Separation: 0.05999999865889549
Stereo Focal Distance: 1
@@ -168,18 +204,18 @@ Visualization Manager:
Window Geometry:
Displays:
collapsed: false
- Height: 1025
+ Height: 1016
Hide Left Dock: false
Hide Right Dock: true
Image:
collapsed: false
- QMainWindow State: 000000ff00000000fd000000040000000000000156000003a7fc0200000010fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d0000011b000000c900fffffffb0000000a0049006d006100670065010000015e0000009b0000002800fffffffb0000000a0049006d00610067006501000001ff000000b20000002800fffffffb0000000a0049006d00610067006501000002b70000009a0000002800fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000a0049006d00610067006501000003570000008d0000002800fffffffb0000000a0049006d00610067006501000001940000005d0000000000000000fb0000000a0049006d00610067006501000001f70000007a0000000000000000fb0000000a0049006d00610067006501000002770000009d0000000000000000fb0000000a0049006d006100670065010000031a000000ca0000000000000000000000010000010f000001effc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073000000003d000001ef000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b20000000000000000000000020000073d000000a9fc0100000002fb0000000a0049006d00610067006503000001c5000000bb000001f8000001b0fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004420000003efc0100000002fb0000000800540069006d00650100000000000004420000000000000000fb0000000800540069006d00650100000000000004500000000000000000000005e1000003a700000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
+ QMainWindow State: 000000ff00000000fd0000000400000000000001560000039efc0200000010fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d000000fb000000c900fffffffb0000000a0049006d006100670065010000011e000000780000000000000000fb0000000a0049006d006100670065010000013e000000cd0000002800fffffffb0000000a0049006d0061006700650100000211000000b10000002800fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000a0049006d00610067006501000002c8000000a20000002800fffffffb0000000a0049006d00610067006501000003700000006b0000002800fffffffb0000000a0049006d00610067006501000001f70000007a0000000000000000fb0000000a0049006d00610067006501000002770000009d0000000000000000fb0000000a0049006d006100670065010000031a000000ca0000000000000000000000010000010f000001effc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073000000003d000001ef000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b20000000000000000000000020000073d000000a9fc0100000002fb0000000a0049006d00610067006503000001c5000000bb000001f8000001b0fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004420000003efc0100000002fb0000000800540069006d00650100000000000004420000000000000000fb0000000800540069006d00650100000000000004500000000000000000000005da0000039e00000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
Selection:
collapsed: false
Tool Properties:
collapsed: false
Views:
collapsed: true
- Width: 1853
- X: 67
+ Width: 1846
+ X: 74
Y: 27
diff --git a/realsense2_camera/launch/rs_intra_process_demo_launch.py b/realsense2_camera/launch/rs_intra_process_demo_launch.py
index 103afee220..a4e54391a0 100644
--- a/realsense2_camera/launch/rs_intra_process_demo_launch.py
+++ b/realsense2_camera/launch/rs_intra_process_demo_launch.py
@@ -41,22 +41,32 @@
'\nplease make sure you run "colcon build --cmake-args \'-DBUILD_TOOLS=ON\'" command before running this launch file')
-configurable_parameters = [{'name': 'camera_name', 'default': 'camera', 'description': 'camera unique name'},
- {'name': 'serial_no', 'default': "''", 'description': 'choose device by serial number'},
- {'name': 'usb_port_id', 'default': "''", 'description': 'choose device by usb port id'},
- {'name': 'device_type', 'default': "''", 'description': 'choose device by type'},
- {'name': 'log_level', 'default': 'info', 'description': 'debug log level [DEBUG|INFO|WARN|ERROR|FATAL]'},
- {'name': 'rgb_camera.profile', 'default': '0,0,0', 'description': 'color image width'},
- {'name': 'enable_color', 'default': 'true', 'description': 'enable color stream'},
- {'name': 'enable_depth', 'default': 'false', 'description': 'enable depth stream'},
- {'name': 'enable_infra1', 'default': 'false', 'description': 'enable infra1 stream'},
- {'name': 'enable_infra2', 'default': 'false', 'description': 'enable infra2 stream'},
- {'name': 'enable_gyro', 'default': 'false', 'description': "enable gyro stream"},
- {'name': 'enable_accel', 'default': 'false', 'description': "enable accel stream"},
- {'name': 'intra_process_comms', 'default': 'true', 'description': "enable intra-process communication"},
- {'name': 'publish_tf', 'default': 'true', 'description': '[bool] enable/disable publishing static & dynamic TF'},
- {'name': 'tf_publish_rate', 'default': '0.0', 'description': '[double] rate in HZ for publishing dynamic TF'},
- ]
+realsense_node_params = [{'name': 'serial_no', 'default': "''", 'description': 'choose device by serial number'},
+ {'name': 'usb_port_id', 'default': "''", 'description': 'choose device by usb port id'},
+ {'name': 'device_type', 'default': "''", 'description': 'choose device by type'},
+ {'name': 'log_level', 'default': 'info', 'description': 'debug log level [DEBUG|INFO|WARN|ERROR|FATAL]'},
+ {'name': 'rgb_camera.profile', 'default': '0,0,0', 'description': 'color image width'},
+ {'name': 'enable_color', 'default': 'true', 'description': 'enable color stream'},
+ {'name': 'enable_depth', 'default': 'true', 'description': 'enable depth stream'},
+ {'name': 'enable_infra', 'default': 'false', 'description': 'enable infra stream'},
+ {'name': 'enable_infra1', 'default': 'true', 'description': 'enable infra1 stream'},
+ {'name': 'enable_infra2', 'default': 'true', 'description': 'enable infra2 stream'},
+ {'name': 'enable_gyro', 'default': 'true', 'description': "enable gyro stream"},
+ {'name': 'enable_accel', 'default': 'true', 'description': "enable accel stream"},
+ {'name': 'unite_imu_method', 'default': "1", 'description': '[0-None, 1-copy, 2-linear_interpolation]'},
+ {'name': 'intra_process_comms', 'default': 'true', 'description': "enable intra-process communication"},
+ {'name': 'enable_sync', 'default': 'true', 'description': "'enable sync mode'"},
+ {'name': 'pointcloud.enable', 'default': 'true', 'description': ''},
+ {'name': 'enable_rgbd', 'default': 'true', 'description': "'enable rgbd topic'"},
+ {'name': 'align_depth.enable', 'default': 'true', 'description': "'enable align depth filter'"},
+ {'name': 'publish_tf', 'default': 'true', 'description': '[bool] enable/disable publishing static & dynamic TF'},
+ {'name': 'tf_publish_rate', 'default': '1.0', 'description': '[double] rate in HZ for publishing dynamic TF'},
+ ]
+
+frame_latency_node_params = [{'name': 'topic_name', 'default': '/camera/color/image_raw', 'description': 'topic to which latency calculated'},
+ {'name': 'topic_type', 'default': 'image', 'description': 'topic type [image|points|imu|metadata|camera_info|rgbd|imu_info|tf]'},
+ ]
+
def declare_configurable_parameters(parameters):
return [DeclareLaunchArgument(param['name'], default_value=param['default'], description=param['description']) for param in parameters]
@@ -68,7 +78,8 @@ def set_configurable_parameters(parameters):
def generate_launch_description():
- return LaunchDescription(declare_configurable_parameters(configurable_parameters) + [
+ return LaunchDescription(declare_configurable_parameters(realsense_node_params) +
+ declare_configurable_parameters(frame_latency_node_params) +[
ComposableNodeContainer(
name='my_container',
namespace='',
@@ -80,14 +91,14 @@ def generate_launch_description():
namespace='',
plugin='realsense2_camera::' + rs_node_class,
name="camera",
- parameters=[set_configurable_parameters(configurable_parameters)],
+ parameters=[set_configurable_parameters(realsense_node_params)],
extra_arguments=[{'use_intra_process_comms': LaunchConfiguration("intra_process_comms")}]) ,
ComposableNode(
package='realsense2_camera',
namespace='',
plugin='rs2_ros::tools::frame_latency::' + rs_latency_tool_class,
name='frame_latency',
- parameters=[set_configurable_parameters(configurable_parameters)],
+ parameters=[set_configurable_parameters(frame_latency_node_params)],
extra_arguments=[{'use_intra_process_comms': LaunchConfiguration("intra_process_comms")}]) ,
],
output='screen',
diff --git a/realsense2_camera/launch/rs_launch.py b/realsense2_camera/launch/rs_launch.py
index ef0f167f7e..c6c14db6c4 100644
--- a/realsense2_camera/launch/rs_launch.py
+++ b/realsense2_camera/launch/rs_launch.py
@@ -40,8 +40,6 @@
{'name': 'enable_infra', 'default': 'false', 'description': 'enable infra0 stream'},
{'name': 'enable_infra1', 'default': 'false', 'description': 'enable infra1 stream'},
{'name': 'enable_infra2', 'default': 'false', 'description': 'enable infra2 stream'},
- {'name': 'enable_fisheye1', 'default': 'true', 'description': 'enable fisheye1 stream'},
- {'name': 'enable_fisheye2', 'default': 'true', 'description': 'enable fisheye2 stream'},
{'name': 'depth_module.profile', 'default': '0,0,0', 'description': 'depth module profile'},
{'name': 'depth_module.depth_format', 'default': 'Z16', 'description': 'depth stream format'},
{'name': 'depth_module.infra_format', 'default': 'RGB8', 'description': 'infra0 stream format'},
@@ -55,7 +53,6 @@
{'name': 'depth_module.gain.1', 'default': '16', 'description': 'Depth module first gain value. Used for hdr_merge filter'},
{'name': 'depth_module.exposure.2', 'default': '1', 'description': 'Depth module second exposure value. Used for hdr_merge filter'},
{'name': 'depth_module.gain.2', 'default': '16', 'description': 'Depth module second gain value. Used for hdr_merge filter'},
- {'name': 'enable_confidence', 'default': 'true', 'description': 'enable confidence'},
{'name': 'enable_sync', 'default': 'false', 'description': "'enable sync mode'"},
{'name': 'enable_rgbd', 'default': 'false', 'description': "'enable rgbd topic'"},
{'name': 'enable_gyro', 'default': 'false', 'description': "'enable gyro stream'"},
@@ -63,8 +60,6 @@
{'name': 'gyro_fps', 'default': '0', 'description': "''"},
{'name': 'accel_fps', 'default': '0', 'description': "''"},
{'name': 'unite_imu_method', 'default': "0", 'description': '[0-None, 1-copy, 2-linear_interpolation]'},
- {'name': 'enable_pose', 'default': 'true', 'description': "'enable pose stream'"},
- {'name': 'pose_fps', 'default': '200', 'description': "''"},
{'name': 'clip_distance', 'default': '-2.', 'description': "''"},
{'name': 'angular_velocity_cov', 'default': '0.01', 'description': "''"},
{'name': 'linear_accel_cov', 'default': '0.01', 'description': "''"},
@@ -101,37 +96,18 @@ def yaml_to_dict(path_to_yaml):
def launch_setup(context, params, param_name_suffix=''):
_config_file = LaunchConfiguration('config_file' + param_name_suffix).perform(context)
params_from_file = {} if _config_file == "''" else yaml_to_dict(_config_file)
- # Realsense
- if (os.getenv('ROS_DISTRO') == "dashing") or (os.getenv('ROS_DISTRO') == "eloquent"):
- return [
- launch_ros.actions.Node(
- package='realsense2_camera',
- node_namespace=LaunchConfiguration('camera_namespace' + param_name_suffix),
- node_name=LaunchConfiguration('camera_name' + param_name_suffix),
- node_executable='realsense2_camera_node',
- prefix=['stdbuf -o L'],
- parameters=[params
- , params_from_file
- ],
- output=LaunchConfiguration('output' + param_name_suffix),
- arguments=['--ros-args', '--log-level', LaunchConfiguration('log_level' + param_name_suffix)],
- )
- ]
- else:
- return [
- launch_ros.actions.Node(
- package='realsense2_camera',
- namespace=LaunchConfiguration('camera_namespace' + param_name_suffix),
- name=LaunchConfiguration('camera_name' + param_name_suffix),
- executable='realsense2_camera_node',
- parameters=[params
- , params_from_file
- ],
- output=LaunchConfiguration('output' + param_name_suffix),
- arguments=['--ros-args', '--log-level', LaunchConfiguration('log_level' + param_name_suffix)],
- emulate_tty=True,
- )
- ]
+ return [
+ launch_ros.actions.Node(
+ package='realsense2_camera',
+ namespace=LaunchConfiguration('camera_namespace' + param_name_suffix),
+ name=LaunchConfiguration('camera_name' + param_name_suffix),
+ executable='realsense2_camera_node',
+ parameters=[params, params_from_file],
+ output=LaunchConfiguration('output' + param_name_suffix),
+ arguments=['--ros-args', '--log-level', LaunchConfiguration('log_level' + param_name_suffix)],
+ emulate_tty=True,
+ )
+ ]
def generate_launch_description():
return LaunchDescription(declare_configurable_parameters(configurable_parameters) + [
diff --git a/realsense2_camera/package.xml b/realsense2_camera/package.xml
index faedde2dd2..6db46cff68 100644
--- a/realsense2_camera/package.xml
+++ b/realsense2_camera/package.xml
@@ -3,7 +3,7 @@
  realsense2_camera
  4.54.1
- RealSense camera package allowing access to Intel SR300 and D400 3D cameras
+ RealSense camera package allowing access to Intel D400 3D cameras
  LibRealSense ROS Team
  Apache License 2.0
@@ -38,7 +38,8 @@
  sensor_msgs_py
  python3-requests
  tf2_ros_py
-
+ ros2topic
+
  launch_ros
  ros_environment
diff --git a/realsense2_camera/scripts/rs2_listener.py b/realsense2_camera/scripts/rs2_listener.py
index 1fcbc97834..9cc356f7bd 100644
--- a/realsense2_camera/scripts/rs2_listener.py
+++ b/realsense2_camera/scripts/rs2_listener.py
@@ -19,18 +19,12 @@
from rclpy import qos
from sensor_msgs.msg import Image as msg_Image
import numpy as np
-import inspect
import ctypes
import struct
import quaternion
-import os
-if (os.getenv('ROS_DISTRO') != "dashing"):
- import tf2_ros
-if (os.getenv('ROS_DISTRO') == "humble"):
- from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
- from sensor_msgs_py import point_cloud2 as pc2
-# from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
-# import sensor_msgs.point_cloud2 as pc2
+import tf2_ros
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+from sensor_msgs_py import point_cloud2 as pc2
from sensor_msgs.msg import Imu as msg_Imu
try:
@@ -220,9 +214,8 @@ def wait_for_messages(self, themes):
node.get_logger().info('Subscribing %s on topic: %s' % (theme_name, theme['topic']))
self.func_data[theme_name]['sub'] = node.create_subscription(theme['msg_type'], theme['topic'], theme['callback'](theme_name), qos.qos_profile_sensor_data)
- if (os.getenv('ROS_DISTRO') != "dashing"):
- self.tfBuffer = tf2_ros.Buffer()
- self.tf_listener = tf2_ros.TransformListener(self.tfBuffer, node)
+ self.tfBuffer = tf2_ros.Buffer()
+ self.tf_listener = tf2_ros.TransformListener(self.tfBuffer, node)
self.prev_time = time.time()
break_timeout = False
diff --git a/realsense2_camera/scripts/rs2_test.py b/realsense2_camera/scripts/rs2_test.py
index 697f72ad5b..b1351aba66 100644
--- a/realsense2_camera/scripts/rs2_test.py
+++ b/realsense2_camera/scripts/rs2_test.py
@@ -21,8 +21,7 @@
from rclpy.node import Node
from importRosbag.importRosbag import importRosbag
import numpy as np
-if (os.getenv('ROS_DISTRO') != "dashing"):
- import tf2_ros
+import tf2_ros
import itertools
import subprocess
import time
@@ -277,20 +276,16 @@ def print_results(results):
def get_tfs(coupled_frame_ids):
res = dict()
- if (os.getenv('ROS_DISTRO') == "dashing"):
- for couple in coupled_frame_ids:
+ tfBuffer = tf2_ros.Buffer()
+ node = Node('tf_listener')
+ listener = tf2_ros.TransformListener(tfBuffer, node)
+ rclpy.spin_once(node)
+ for couple in coupled_frame_ids:
+ from_id, to_id = couple
+ if (tfBuffer.can_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=3e6))):
+ res[couple] = tfBuffer.lookup_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=1e6)).transform
+ else:
res[couple] = None
- else:
- tfBuffer = tf2_ros.Buffer()
- node = Node('tf_listener')
- listener = tf2_ros.TransformListener(tfBuffer, node)
- rclpy.spin_once(node)
- for couple in coupled_frame_ids:
- from_id, to_id = couple
- if (tfBuffer.can_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=3e6))):
- res[couple] = tfBuffer.lookup_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=1e6)).transform
- else:
- res[couple] = None
return res
def kill_realsense2_camera_node():
@@ -338,7 +333,7 @@ def run_tests(tests):
listener_res = msg_retriever.wait_for_messages(themes)
if 'static_tf' in [test['type'] for test in rec_tests]:
print ('Gathering static transforms')
- frame_ids = ['camera_link', 'camera_depth_frame', 'camera_infra1_frame', 'camera_infra2_frame', 'camera_color_frame', 'camera_fisheye_frame', 'camera_pose']
+ frame_ids = ['camera_link', 'camera_depth_frame', 'camera_infra1_frame', 'camera_infra2_frame', 'camera_color_frame']
coupled_frame_ids = [xx for xx in itertools.combinations(frame_ids, 2)]
listener_res['static_tf'] = get_tfs(coupled_frame_ids)
@@ -372,17 +367,12 @@ def main():
#{'name': 'points_cloud_1', 'type': 'pointscloud_avg', 'params': {'rosbag_filename': outdoors_filename, 'pointcloud.enable': 'true'}},
{'name': 'depth_w_cloud_1', 'type': 'depth_avg', 'params': {'rosbag_filename': outdoors_filename, 'pointcloud.enable': 'true'}},
{'name': 'align_depth_color_1', 'type': 'align_depth_color', 'params': {'rosbag_filename': outdoors_filename, 'align_depth.enable':'true'}},
- {'name': 'align_depth_ir1_1', 'type': 'align_depth_ir1', 'params': {'rosbag_filename': outdoors_filename, 'align_depth.enable': 'true',
- 'enable_infra1':'true', 'enable_infra2':'true'}},
+ {'name': 'align_depth_ir1_1', 'type': 'align_depth_ir1', 'params': {'rosbag_filename': outdoors_filename, 'align_depth.enable': 'true', 'enable_infra1':'true', 'enable_infra2':'true'}},
{'name': 'depth_avg_decimation_1', 'type': 'depth_avg_decimation', 'params': {'rosbag_filename': outdoors_filename, 'decimation_filter.enable':'true'}},
{'name': 'align_depth_ir1_decimation_1', 'type': 'align_depth_ir1_decimation', 'params': {'rosbag_filename': outdoors_filename, 'align_depth.enable':'true', 'decimation_filter.enable':'true'}},
- ]
- if (os.getenv('ROS_DISTRO') != "dashing"):
- all_tests.extend([
- {'name': 'static_tf_1', 'type': 'static_tf', 'params': {'rosbag_filename': outdoors_filename,
- 'enable_infra1':'true', 'enable_infra2':'true'}},
- {'name': 'accel_up_1', 'type': 'accel_up', 'params': {'rosbag_filename': './records/D435i_Depth_and_IMU_Stands_still.bag', 'enable_accel': 'true', 'accel_fps': '0.0'}},
- ])
+ {'name': 'static_tf_1', 'type': 'static_tf', 'params': {'rosbag_filename': outdoors_filename, 'enable_infra1':'true', 'enable_infra2':'true'}},
+ {'name': 'accel_up_1', 'type': 'accel_up', 'params': {'rosbag_filename': './records/D435i_Depth_and_IMU_Stands_still.bag', 'enable_accel': 'true', 'accel_fps': '0.0'}},
+ ]
# Normalize parameters:
for test in all_tests:
diff --git a/realsense2_camera/scripts/show_center_depth.py b/realsense2_camera/scripts/show_center_depth.py
index fb957664f4..acf8ac0db2 100644
--- a/realsense2_camera/scripts/show_center_depth.py
+++ b/realsense2_camera/scripts/show_center_depth.py
@@ -31,8 +31,6 @@ def __init__(self, depth_image_topic, depth_info_topic):
self.bridge = CvBridge()
self.sub = self.create_subscription(msg_Image, depth_image_topic, self.imageDepthCallback, 1)
self.sub_info = self.create_subscription(CameraInfo, depth_info_topic, self.imageDepthInfoCallback, 1)
- confidence_topic = depth_image_topic.replace('depth', 'confidence')
- self.sub_conf = self.create_subscription(msg_Image, confidence_topic, self.confidenceCallback, 1)
self.intrinsics = None
self.pix = None
self.pix_grade = None
@@ -62,17 +60,6 @@ def imageDepthCallback(self, data):
except ValueError as e:
return
- def confidenceCallback(self, data):
- try:
- cv_image = self.bridge.imgmsg_to_cv2(data, data.encoding)
- grades = np.bitwise_and(cv_image >> 4, 0x0f)
- if (self.pix):
- self.pix_grade = grades[self.pix[1], self.pix[0]]
- except CvBridgeError as e:
- print(e)
- return
-
-
def imageDepthInfoCallback(self, cameraInfo):
try:
@@ -106,7 +93,6 @@ def main():
print ('Application subscribes to %s and %s topics.' % (depth_image_topic, depth_info_topic))
print ('Application then calculates and print the range to the closest object.')
print ('If intrinsics data is available, it also prints the 3D location of the object')
- print ('If a confedence map is also available in the topic %s, it also prints the confidence grade.' % depth_image_topic.replace('depth', 'confidence'))
print ()
listener = ImageListener(depth_image_topic, depth_info_topic)
diff --git a/realsense2_camera/src/base_realsense_node.cpp b/realsense2_camera/src/base_realsense_node.cpp
index 622a7a6d8e..8d899f5427 100755
--- a/realsense2_camera/src/base_realsense_node.cpp
+++ b/realsense2_camera/src/base_realsense_node.cpp
@@ -22,10 +22,7 @@
// Header files for disabling intra-process comms for static broadcaster.
#include
-// This header file is not available in ROS 2 Dashing.
-#ifndef DASHING
#include
-#endif
using namespace realsense2_camera;
@@ -115,7 +112,6 @@ BaseRealSenseNode::BaseRealSenseNode(rclcpp::Node& node,
_is_accel_enabled(false),
_is_gyro_enabled(false),
_pointcloud(false),
- _publish_odom_tf(false),
_imu_sync_method(imu_sync_method::NONE),
_is_profile_changed(false),
_is_align_depth_changed(false)
@@ -630,9 +626,6 @@ void BaseRealSenseNode::multiple_message_callback(rs2::frame frame, imu_sync_met
if (sync_method > imu_sync_method::NONE) imu_callback_sync(frame, sync_method);
else imu_callback(frame);
break;
- case RS2_STREAM_POSE:
- pose_callback(frame);
- break;
default:
frame_callback(frame);
}
@@ -672,17 +665,7 @@ rclcpp::Time BaseRealSenseNode::frameSystemTimeSec(rs2::frame frame)
{
double elapsed_camera_ns = millisecondsToNanoseconds(timestamp_ms - _camera_time_base);
- /*
- Fixing deprecated-declarations compilation warning.
- Duration(rcl_duration_value_t) is deprecated in favor of
- static Duration::from_nanoseconds(rcl_duration_value_t)
- starting from GALACTIC.
- */
-#if defined(FOXY) || defined(ELOQUENT) || defined(DASHING)
- auto duration = rclcpp::Duration(elapsed_camera_ns);
-#else
auto duration = rclcpp::Duration::from_nanoseconds(elapsed_camera_ns);
-#endif
return rclcpp::Time(_ros_time_base + duration);
}
@@ -788,7 +771,7 @@ void BaseRealSenseNode::updateExtrinsicsCalibData(const rs2::video_stream_profil
void BaseRealSenseNode::SetBaseStream()
{
- const std::vector base_stream_priority = {DEPTH, POSE};
+ const std::vector base_stream_priority = {DEPTH};
std::set checked_sips;
std::map available_profiles;
for(auto&& sensor : _available_ros_sensors)
@@ -896,7 +879,6 @@ bool BaseRealSenseNode::fillROSImageMsgAndReturnStatus(
img_msg_ptr->width = width;
img_msg_ptr->is_bigendian = false;
img_msg_ptr->step = width * cv_matrix_image.elemSize();
-
return true;
}
diff --git a/realsense2_camera/src/dynamic_params.cpp b/realsense2_camera/src/dynamic_params.cpp
index 618da0766a..acce9038ca 100644
--- a/realsense2_camera/src/dynamic_params.cpp
+++ b/realsense2_camera/src/dynamic_params.cpp
@@ -113,11 +113,7 @@ namespace realsense2_camera
try
{
ROS_DEBUG_STREAM("setParam::Setting parameter: " << param_name);
-#if defined(DASHING) || defined(ELOQUENT) || defined(FOXY)
- //do nothing for old versions
-#else
- descriptor.dynamic_typing=true; // Without this, undeclare_parameter() throws in Galactic onward.
-#endif
+ descriptor.dynamic_typing=true; // Without this, undeclare_parameter() throws error.
if (!_node.get_parameter(param_name, result_value))
{
result_value = _node.declare_parameter(param_name, initial_value, descriptor);
diff --git a/realsense2_camera/src/parameters.cpp b/realsense2_camera/src/parameters.cpp
index c5d1abef3a..72523cb801 100644
--- a/realsense2_camera/src/parameters.cpp
+++ b/realsense2_camera/src/parameters.cpp
@@ -23,6 +23,7 @@ void BaseRealSenseNode::getParameters()
ROS_INFO("getParameters...");
std::string param_name;
+
param_name = std::string("camera_name");
_camera_name = _parameters->setParam(param_name, "camera");
_parameters_names.push_back(param_name);
@@ -77,10 +78,6 @@ void BaseRealSenseNode::getParameters()
_hold_back_imu_for_frames = _parameters->setParam(param_name, HOLD_BACK_IMU_FOR_FRAMES);
_parameters_names.push_back(param_name);
- param_name = std::string("publish_odom_tf");
- _publish_odom_tf = _parameters->setParam(param_name, PUBLISH_ODOM_TF);
- _parameters_names.push_back(param_name);
-
param_name = std::string("base_frame_id");
_base_frame_id = _parameters->setParam(param_name, DEFAULT_BASE_FRAME_ID);
_base_frame_id = (static_cast(std::ostringstream() << _camera_name << "_" << _base_frame_id)).str();
diff --git a/realsense2_camera/src/profile_manager.cpp b/realsense2_camera/src/profile_manager.cpp
index beffd2d574..95dadbbad6 100644
--- a/realsense2_camera/src/profile_manager.cpp
+++ b/realsense2_camera/src/profile_manager.cpp
@@ -591,20 +591,3 @@ void MotionProfilesManager::registerFPSParams()
}
}
-///////////////////////////////////////////////////////////////////////////////////////
-
-void PoseProfilesManager::registerProfileParameters(std::vector all_profiles, std::function update_sensor_func)
-{
- std::set checked_sips;
- for (auto& profile : all_profiles)
- {
- if (!profile.is()) continue;
- _all_profiles.push_back(profile);
- stream_index_pair sip(profile.stream_type(), profile.stream_index());
- checked_sips.insert(sip);
- }
- registerSensorUpdateParam("enable_%s", checked_sips, _enabled_profiles, true, update_sensor_func);
- registerSensorUpdateParam("%s_fps", checked_sips, _fps, 0, update_sensor_func);
- registerSensorQOSParam("%s_qos", checked_sips, _profiles_image_qos_str, HID_QOS);
- registerSensorQOSParam("%s_info_qos", checked_sips, _profiles_info_qos_str, DEFAULT_QOS);
-}
diff --git a/realsense2_camera/src/realsense_node_factory.cpp b/realsense2_camera/src/realsense_node_factory.cpp
index 46b1d13ce1..e53b459551 100644
--- a/realsense2_camera/src/realsense_node_factory.cpp
+++ b/realsense2_camera/src/realsense_node_factory.cpp
@@ -358,8 +358,6 @@ void RealSenseNodeFactory::startDevice()
{
switch(pid)
{
- case SR300_PID:
- case SR300v2_PID:
case RS400_PID:
case RS405_PID:
case RS410_PID:
@@ -377,9 +375,6 @@ void RealSenseNodeFactory::startDevice()
case RS457_PID:
case RS465_PID:
case RS_USB2_PID:
- case RS_L515_PID_PRE_PRQ:
- case RS_L515_PID:
- case RS_L535_PID:
_realSenseNode = std::unique_ptr(new BaseRealSenseNode(*this, _device, _parameters, this->get_node_options().use_intra_process_comms()));
break;
default:
diff --git a/realsense2_camera/src/ros_param_backend_foxy.cpp b/realsense2_camera/src/ros_param_backend.cpp
similarity index 100%
rename from realsense2_camera/src/ros_param_backend_foxy.cpp
rename to realsense2_camera/src/ros_param_backend.cpp
diff --git a/realsense2_camera/src/ros_param_backend_dashing.cpp b/realsense2_camera/src/ros_param_backend_dashing.cpp
deleted file mode 100644
index 8ecf8e7e8f..0000000000
--- a/realsense2_camera/src/ros_param_backend_dashing.cpp
+++ /dev/null
@@ -1,34 +0,0 @@
-// Copyright 2023 Intel Corporation. All Rights Reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "ros_param_backend.h"
-
-namespace realsense2_camera
-{
- void ParametersBackend::add_on_set_parameters_callback(ros2_param_callback_type callback)
- {
- rclcpp::Node::OnParametersSetCallbackType prev_callback = _node.set_on_parameters_set_callback(callback);
- if (prev_callback)
- {
- rclcpp::Node::OnParametersSetCallbackType prev_callback = _node.set_on_parameters_set_callback(prev_callback);
- std::stringstream msg;
- msg << "Cannot set another callback to current node: " << _node.get_name();
- throw std::runtime_error(msg.str());
- }
- }
-
- ParametersBackend::~ParametersBackend()
- {
- }
-}
diff --git a/realsense2_camera/src/ros_param_backend_rolling.cpp b/realsense2_camera/src/ros_param_backend_rolling.cpp
deleted file mode 100644
index e3998e9808..0000000000
--- a/realsense2_camera/src/ros_param_backend_rolling.cpp
+++ /dev/null
@@ -1,32 +0,0 @@
-// Copyright 2023 Intel Corporation. All Rights Reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "ros_param_backend.h"
-
-namespace realsense2_camera
-{
- void ParametersBackend::add_on_set_parameters_callback(ros2_param_callback_type callback)
- {
- _ros_callback = _node.add_on_set_parameters_callback(callback);
- }
-
- ParametersBackend::~ParametersBackend()
- {
- if (_ros_callback)
- {
- _node.remove_on_set_parameters_callback((rclcpp::node_interfaces::OnSetParametersCallbackHandle*)(_ros_callback.get()));
- _ros_callback.reset();
- }
- }
-}
diff --git a/realsense2_camera/src/ros_sensor.cpp b/realsense2_camera/src/ros_sensor.cpp
index d860a79dec..635424d8c5 100644
--- a/realsense2_camera/src/ros_sensor.cpp
+++ b/realsense2_camera/src/ros_sensor.cpp
@@ -176,12 +176,6 @@ void RosSensor::registerSensorParameters()
{
_profile_managers.push_back(profile_manager);
}
- profile_manager = std::make_shared(_params.getParameters(), _logger);
- profile_manager->registerProfileParameters(all_profiles, _update_sensor_func);
- if (profile_manager->isTypeExist())
- {
- _profile_managers.push_back(profile_manager);
- }
}
void RosSensor::runFirstFrameInitialization()
diff --git a/realsense2_camera/src/ros_utils.cpp b/realsense2_camera/src/ros_utils.cpp
index b4661126cf..0ee2172904 100644
--- a/realsense2_camera/src/ros_utils.cpp
+++ b/realsense2_camera/src/ros_utils.cpp
@@ -111,10 +111,8 @@ static const rmw_qos_profile_t rmw_qos_profile_latched =
const rmw_qos_profile_t qos_string_to_qos(std::string str)
{
-#if !defined(DASHING) && !defined(ELOQUENT)
if (str == "UNKNOWN")
return rmw_qos_profile_unknown;
-#endif
if (str == "SYSTEM_DEFAULT")
return rmw_qos_profile_system_default;
if (str == "DEFAULT")
@@ -133,10 +131,8 @@ const rmw_qos_profile_t qos_string_to_qos(std::string str)
const std::string list_available_qos_strings()
{
std::stringstream res;
-#ifndef DASHING
- res << "UNKNOWN" << "\n";
-#endif
- res << "SYSTEM_DEFAULT" << "\n"
+ res << "UNKNOWN" << "\n"
+ << "SYSTEM_DEFAULT" << "\n"
<< "DEFAULT" << "\n"
<< "PARAMETER_EVENTS" << "\n"
<< "SERVICES_DEFAULT" << "\n"
diff --git a/realsense2_camera/src/rs_node_setup.cpp b/realsense2_camera/src/rs_node_setup.cpp
index 6e05d9ad85..d98157f6d9 100755
--- a/realsense2_camera/src/rs_node_setup.cpp
+++ b/realsense2_camera/src/rs_node_setup.cpp
@@ -136,8 +136,7 @@ void BaseRealSenseNode::setAvailableSensors()
const std::string module_name(rs2_to_ros(sensor.get_info(RS2_CAMERA_INFO_NAME)));
std::unique_ptr rosSensor;
if (sensor.is() ||
- sensor.is() ||
- sensor.is())
+ sensor.is())
{
ROS_DEBUG_STREAM("Set " << module_name << " as VideoSensor.");
rosSensor = std::make_unique(sensor, _parameters, frame_callback_function, update_sensor_func, hardware_reset_func, _diagnostics_updater, _logger, _use_intra_process, _dev.is());
@@ -147,11 +146,6 @@ void BaseRealSenseNode::setAvailableSensors()
ROS_DEBUG_STREAM("Set " << module_name << " as ImuSensor.");
rosSensor = std::make_unique(sensor, _parameters, imu_callback_function, update_sensor_func, hardware_reset_func, _diagnostics_updater, _logger, false, _dev.is());
}
- else if (sensor.is())
- {
- ROS_DEBUG_STREAM("Set " << module_name << " as PoseSensor.");
- rosSensor = std::make_unique(sensor, _parameters, multiple_message_callback_function, update_sensor_func, hardware_reset_func, _diagnostics_updater, _logger, false, _dev.is());
- }
else
{
ROS_WARN_STREAM("Module Name \"" << module_name << "\" does not define a callback.");
@@ -285,15 +279,8 @@ void BaseRealSenseNode::startPublishers(const std::vector& profi
IMUInfo info_msg = getImuInfo(profile);
_imu_info_publishers[sip]->publish(info_msg);
}
- else if (profile.is())
- {
- std::stringstream data_topic_name, info_topic_name;
- data_topic_name << "~/" << stream_name << "/sample";
- _odom_publisher = _node.create_publisher(data_topic_name.str(),
- rclcpp::QoS(rclcpp::QoSInitialization::from_rmw(qos), qos));
- }
std::string topic_metadata("~/" + stream_name + "/metadata");
- _metadata_publishers[sip] = _node.create_publisher(topic_metadata,
+ _metadata_publishers[sip] = _node.create_publisher(topic_metadata,
rclcpp::QoS(rclcpp::QoSInitialization::from_rmw(info_qos), info_qos));
if (!((rs2::stream_profile)profile==(rs2::stream_profile)_base_profile))
@@ -351,7 +338,7 @@ void BaseRealSenseNode::updateSensors()
std::vector wanted_profiles;
bool is_profile_changed(sensor->getUpdatedProfiles(wanted_profiles));
- bool is_video_sensor = (sensor->is() || sensor->is() || sensor->is());
+ bool is_video_sensor = (sensor->is() || sensor->is());
// do all updates if profile has been changed, or if the align depth filter status has been changed
// and we are on a video sensor. TODO: explore better options to monitor and update changes
diff --git a/realsense2_camera/src/tfs.cpp b/realsense2_camera/src/tfs.cpp
index 3e82666c9f..8e6201761b 100644
--- a/realsense2_camera/src/tfs.cpp
+++ b/realsense2_camera/src/tfs.cpp
@@ -31,15 +31,9 @@ void BaseRealSenseNode::restartStaticTransformBroadcaster()
rclcpp::PublisherOptionsWithAllocator> options;
options.use_intra_process_comm = rclcpp::IntraProcessSetting::Disable;
- #ifndef DASHING
_static_tf_broadcaster = std::make_shared(_node,
tf2_ros::StaticBroadcasterQoS(),
std::move(options));
- #else
- _static_tf_broadcaster = std::make_shared(_node,
- rclcpp::QoS(100),
- std::move(options));
- #endif
}
void BaseRealSenseNode::append_static_tf_msg(const rclcpp::Time& t,
@@ -310,90 +304,3 @@ void BaseRealSenseNode::startDynamicTf()
}
}
-void BaseRealSenseNode::pose_callback(rs2::frame frame)
-{
- double frame_time = frame.get_timestamp();
- bool placeholder_false(false);
- if (_is_initialized_time_base.compare_exchange_strong(placeholder_false, true) )
- {
- _is_initialized_time_base = setBaseTime(frame_time, frame.get_frame_timestamp_domain());
- }
-
- ROS_DEBUG("Frame arrived: stream: %s ; index: %d ; Timestamp Domain: %s",
- rs2_stream_to_string(frame.get_profile().stream_type()),
- frame.get_profile().stream_index(),
- rs2_timestamp_domain_to_string(frame.get_frame_timestamp_domain()));
- rs2_pose pose = frame.as().get_pose_data();
- rclcpp::Time t(frameSystemTimeSec(frame));
-
- geometry_msgs::msg::PoseStamped pose_msg;
- pose_msg.pose.position.x = -pose.translation.z;
- pose_msg.pose.position.y = -pose.translation.x;
- pose_msg.pose.position.z = pose.translation.y;
- pose_msg.pose.orientation.x = -pose.rotation.z;
- pose_msg.pose.orientation.y = -pose.rotation.x;
- pose_msg.pose.orientation.z = pose.rotation.y;
- pose_msg.pose.orientation.w = pose.rotation.w;
-
- static tf2_ros::TransformBroadcaster br(_node);
- geometry_msgs::msg::TransformStamped msg;
- msg.header.stamp = t;
- msg.header.frame_id = DEFAULT_ODOM_FRAME_ID;
- msg.child_frame_id = FRAME_ID(POSE);
- msg.transform.translation.x = pose_msg.pose.position.x;
- msg.transform.translation.y = pose_msg.pose.position.y;
- msg.transform.translation.z = pose_msg.pose.position.z;
- msg.transform.rotation.x = pose_msg.pose.orientation.x;
- msg.transform.rotation.y = pose_msg.pose.orientation.y;
- msg.transform.rotation.z = pose_msg.pose.orientation.z;
- msg.transform.rotation.w = pose_msg.pose.orientation.w;
-
- if (_publish_odom_tf) br.sendTransform(msg);
-
- if (0 != _odom_publisher->get_subscription_count())
- {
- double cov_pose(_linear_accel_cov * pow(10, 3-(int)pose.tracker_confidence));
- double cov_twist(_angular_velocity_cov * pow(10, 1-(int)pose.tracker_confidence));
-
- geometry_msgs::msg::Vector3Stamped v_msg;
- tf2::Vector3 tfv(-pose.velocity.z, -pose.velocity.x, pose.velocity.y);
- tf2::Quaternion q(-msg.transform.rotation.x,
- -msg.transform.rotation.y,
- -msg.transform.rotation.z,
- msg.transform.rotation.w);
- tfv=tf2::quatRotate(q,tfv);
- v_msg.vector.x = tfv.x();
- v_msg.vector.y = tfv.y();
- v_msg.vector.z = tfv.z();
-
- tfv = tf2::Vector3(-pose.angular_velocity.z, -pose.angular_velocity.x, pose.angular_velocity.y);
- tfv=tf2::quatRotate(q,tfv);
- geometry_msgs::msg::Vector3Stamped om_msg;
- om_msg.vector.x = tfv.x();
- om_msg.vector.y = tfv.y();
- om_msg.vector.z = tfv.z();
-
- nav_msgs::msg::Odometry odom_msg;
-
- odom_msg.header.frame_id = DEFAULT_ODOM_FRAME_ID;
- odom_msg.child_frame_id = FRAME_ID(POSE);
- odom_msg.header.stamp = t;
- odom_msg.pose.pose = pose_msg.pose;
- odom_msg.pose.covariance = {cov_pose, 0, 0, 0, 0, 0,
- 0, cov_pose, 0, 0, 0, 0,
- 0, 0, cov_pose, 0, 0, 0,
- 0, 0, 0, cov_twist, 0, 0,
- 0, 0, 0, 0, cov_twist, 0,
- 0, 0, 0, 0, 0, cov_twist};
- odom_msg.twist.twist.linear = v_msg.vector;
- odom_msg.twist.twist.angular = om_msg.vector;
- odom_msg.twist.covariance ={cov_pose, 0, 0, 0, 0, 0,
- 0, cov_pose, 0, 0, 0, 0,
- 0, 0, cov_pose, 0, 0, 0,
- 0, 0, 0, cov_twist, 0, 0,
- 0, 0, 0, 0, cov_twist, 0,
- 0, 0, 0, 0, 0, cov_twist};
- _odom_publisher->publish(odom_msg);
- ROS_DEBUG("Publish %s stream", rs2_stream_to_string(frame.get_profile().stream_type()));
- }
-}
diff --git a/realsense2_camera/test/README.md b/realsense2_camera/test/README.md
index 0be7600c24..ea3a043a92 100644
--- a/realsense2_camera/test/README.md
+++ b/realsense2_camera/test/README.md
@@ -146,7 +146,32 @@ The below command finds the test with the name test_static_tf_1 in realsense2_ca
pytest-3 -s -k test_static_tf_1 realsense2_camera/test/
-### Points to be noted while writing pytests
-The tests that are in one file are nromally run in parallel. So if there are multiple tests in one file, the system capacity can influence the test execution. It's recomended to have 3-4 tests in file, more than that can affect the test results due to delays.
-
+### Marking tests as regression tests
+Some of the tests, especially the live tests with multiple runs (e.g., the all-profile tests in test_camera_all_profile_tests.py), take a long time. Such tests are marked with a conditional skip so that "colcon test" skips them.
+To add a test to this conditional skip, add the marker below; a short usage sketch follows it.
+
+@pytest.mark.skipif (os.getenv('RS_ROS_REGRESSION', "not found") == "not found",reason="Regression is not enabled, define RS_ROS_REGRESSION")
+
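+For example, a test can be limited to regression runs by applying the marker as in the sketch below (the test body is only a placeholder; only the marker itself comes from this repository):
+
+    import os
+    import pytest
+
+    @pytest.mark.skipif(os.getenv('RS_ROS_REGRESSION', "not found") == "not found",
+                        reason="Regression is not enabled, define RS_ROS_REGRESSION")
+    def test_long_running_profiles_example():
+        # Long-running checks go here; they run only when RS_ROS_REGRESSION is defined.
+        assert True
+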
+### Running skipped regression tests
+1. Set the environment variable RS_ROS_REGRESSION to 1 and run "colcon test"
+2. pytest example:
+ RS_ROS_REGRESSION=1 PYTHONPATH=$PYTHONPATH:$PWD/realsense2_camera/test/utils:$PWD/realsense2_camera//launch:$PWD/realsense2_camera//scripts pytest-3 -s realsense2_camera/test/live_camera/test_camera_aligned_tests.py -k test_camera_align_depth_color_all -m d415
+
+## Points to be noted while writing pytests
+The tests in one file are normally run in parallel (this may also change with the pytest plugin version). So if there are multiple tests in one file, the system capacity can influence the test execution. It's recommended to have 3-4 tests per file; more than that can affect the test results due to delays.
+### Passing/changing parameters
+The parameters passed while creating the node can be initialized individually for each test; see the test_parameterized_template example for reference. The default values are taken from rs_launch.py, and the passed parameters override those defaults. Parameters that support dynamic modification can be changed using the param interface provided; however, the function create_param_ifs has to be called first to create this interface. See test_d455_basic_tests.py for reference. There are specific functions to change string, integer and bool parameters; the utils can be extended if more types are needed. A shortened sketch of this flow is shown below.
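+The sketch below is based on the helpers used in the live-camera tests in this folder; the parameter values are only illustrative:
+
+    # Inside a test method of a class derived from pytest_rs_utils.RsTestBaseClass,
+    # after the node has been launched with launch_descr_with_parameters:
+    params = launch_descr_with_parameters[1]
+    self.init_test("RsTest" + params['camera_name'])
+    self.create_param_ifs(get_node_heirarchy(params))  # must be called before the set_*_param helpers
+    self.set_string_param('rgb_camera.profile', '1280x720x30')
+    self.set_bool_param('enable_color', True)
+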
+### Difference in setting the bool parameters
+There is a difference between setting the default values of bool parameters and setting them dynamically.
+The bool test params are assigned within quotes, as below.
+ test_params_all_profiles_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'enable_accel':"True",
+ 'enable_gyro':"True",
+ 'unite_imu_method':1,
+ }
+
+However, the function that sets a bool parameter dynamically takes the Python bool datatype. For example:
+ self.set_bool_param('enable_accel', False)
diff --git a/realsense2_camera/test/live_camera/test_camera_aligned_tests.py b/realsense2_camera/test/live_camera/test_camera_aligned_tests.py
new file mode 100644
index 0000000000..0c17e76e47
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_camera_aligned_tests.py
@@ -0,0 +1,279 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+from pytest_rs_utils import launch_descr_with_parameters
+from pytest_rs_utils import delayed_launch_descr_with_parameters
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+import pytest_live_camera_utils
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+from pytest_live_camera_utils import debug_print
+
+
+
+test_params_align_depth_color_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'enable_color':'true',
+ 'enable_depth':'true',
+ 'depth_module.profile':'848x480x30',
+ 'rgb_camera.profile':'640x480x30',
+ 'align_depth.enable':'true'
+ }
+test_params_align_depth_color_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ 'enable_color':'true',
+ 'enable_depth':'true',
+ 'depth_module.profile':'848x480x30',
+ 'rgb_camera.profile':'640x480x30',
+ 'align_depth.enable':'true'
+ }
+'''
+This test was implemented as a template to set the parameters and run the test.
+This directory is not added to the CMakeLists.txt so as to avoid colcon failures on
+machines that don't have the D455 connected.
+1. Only a subset of parameter types is implemented in py_rs_utils; it has to be extended for others.
+2. After setting the param, rclpy.spin_once may be needed. The test passes even without this, though.
+'''
+
+@pytest.mark.parametrize("launch_descr_with_parameters",[
+ pytest.param(test_params_align_depth_color_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_align_depth_color_d415, marks=pytest.mark.d415),
+ #pytest.param(test_params_align_depth_color_d435, marks=pytest.mark.d435),
+ ]
+ ,indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestCamera_AlignDepthColor(pytest_rs_utils.RsTestBaseClass):
+ def test_camera_align_depth_color(self,launch_descr_with_parameters):
+ params = launch_descr_with_parameters[1]
+ if pytest_live_camera_utils.check_if_camera_connected(params['device_type']) == False:
+ print("Device not found? : " + params['device_type'])
+ return
+ themes = [
+ {'topic':get_node_heirarchy(params)+'/color/image_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':640,
+ 'height':480,
+ },
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':848,
+ 'height':480,
+ },
+ {'topic':get_node_heirarchy(params)+'/aligned_depth_to_color/image_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':640,
+ 'height':480,
+ },
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ print("Starting camera test...")
+ self.init_test("RsTest"+params['camera_name'])
+ self.wait_for_node(params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(params))
+ ret = self.run_test(themes)
+ assert ret[0], ret[1]
+ assert self.process_data(themes)
+ self.set_string_param('rgb_camera.profile', '1280x720x30')
+ self.set_bool_param('enable_color', True)
+ themes[0]['width'] = 1280
+ themes[0]['height'] = 720
+ themes[2]['width'] = 1280
+ themes[2]['height'] = 720
+
+ ret = self.run_test(themes)
+ assert ret[0], ret[1]
+ assert self.process_data(themes)
+ finally:
+ #this step is important because the test will fail next time
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
+
+test_params_all_profiles_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'enable_color':'true',
+ 'enable_depth':'true',
+ 'depth_module.profile':'848x480x30',
+ 'rgb_camera.profile':'640x480x30',
+ 'align_depth.enable':'true'
+ }
+test_params_all_profiles_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ 'enable_color':'true',
+ 'enable_depth':'true',
+ 'depth_module.profile':'848x480x30',
+ 'rgb_camera.profile':'640x480x30',
+ 'align_depth.enable':'true'
+ }
+test_params_all_profiles_d435 = {
+ 'camera_name': 'D435',
+ 'device_type': 'D435',
+ 'enable_color':'true',
+ 'enable_depth':'true',
+ 'depth_module.profile':'848x480x30',
+ 'rgb_camera.profile':'640x480x30',
+ 'align_depth.enable':'true'
+ }
+
+
+'''
+This test was implemented as a template to set the parameters and run the test.
+This directory is not added to the CMakeLists.txt so as to avoid colcon failures on
+machines that don't have the D455 connected.
+1. Only a subset of parameter types is implemented in py_rs_utils; it has to be extended for others.
+2. After setting the param, rclpy.spin_once may be needed. The test passes even without this, though.
+'''
+@pytest.mark.skipif (os.getenv('RS_ROS_REGRESSION', "not found") == "not found",reason="Regression is not enabled, define RS_ROS_REGRESSION")
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_all_profiles_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_all_profiles_d415, marks=pytest.mark.d415),
+ pytest.param(test_params_all_profiles_d435, marks=pytest.mark.d435),]
+ ,indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestCamera_AllAlignDepthColor(pytest_rs_utils.RsTestBaseClass):
+ def test_camera_all_align_depth_color(self,launch_descr_with_parameters):
+ skipped_tests = []
+ failed_tests = []
+ num_passed = 0
+ num_failed = 0
+ params = launch_descr_with_parameters[1]
+ if pytest_live_camera_utils.check_if_camera_connected(params['device_type']) == False:
+ print("Device not found? : " + params['device_type'])
+ return
+ themes = [
+ {'topic':get_node_heirarchy(params)+'/color/image_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':640,
+ 'height':480,
+ },
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':848,
+ 'height':480,
+ },
+ {'topic':get_node_heirarchy(params)+'/aligned_depth_to_color/image_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':640,
+ 'height':480,
+ },
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ self.init_test("RsTest"+params['camera_name'])
+ self.spin_for_time(wait_time=1.0)
+ cap = pytest_live_camera_utils.get_camera_capabilities(params['device_type'])
+ if cap == None:
+ debug_print("Device not found? : " + params['device_type'])
+ return
+ self.create_param_ifs(get_node_heirarchy(params))
+ color_profiles = set([i[1] for i in cap["color_profile"] if i[2] == "RGB8"])
+ depth_profiles = set([i[1] for i in cap["depth_profile"] if i[0] == "Depth"])
+ for color_profile in color_profiles:
+ for depth_profile in depth_profiles:
+ if depth_profile == color_profile:
+ continue
+ print("Testing the alignment of Depth:", depth_profile, " and Color:", color_profile)
+ self.set_bool_param('enable_color', False)
+ self.set_bool_param('enable_color', False)
+ self.set_bool_param('align_depth.enable', False)
+
+ themes[0]['width'] = themes[2]['width'] = int(color_profile.split('x')[0])
+ themes[0]['height'] = themes[2]['height'] = int(color_profile.split('x')[1])
+ themes[1]['width'] = int(depth_profile.split('x')[0])
+ themes[1]['height'] = int(depth_profile.split('x')[1])
+ dfps = int(depth_profile.split('x')[2])
+ cfps = int(color_profile.split('x')[2])
+ if dfps > cfps:
+ fps = cfps
+ else:
+ fps = dfps
+ timeout=100.0/fps
+ #for the changes to take effect
+ self.spin_for_time(wait_time=timeout/20)
+ self.set_string_param('rgb_camera.profile', color_profile)
+ self.set_string_param('depth_module.profile', depth_profile)
+ self.set_bool_param('enable_color', True)
+ self.set_bool_param('enable_color', True)
+ self.set_bool_param('align_depth.enable', True)
+ #for the changes to take effect
+ self.spin_for_time(wait_time=timeout/20)
+ try:
+ ret = self.run_test(themes, timeout=timeout)
+ assert ret[0], ret[1]
+ assert self.process_data(themes), "".join(str(depth_profile) + " " + str(color_profile)) + " failed"
+ num_passed += 1
+ except Exception as e:
+ exc_type, exc_obj, exc_tb = sys.exc_info()
+ fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
+ print("Test failed")
+ print("Tested the alignment of Depth:", depth_profile, " and Color:", color_profile, " with timeout ", timeout)
+ print(e)
+ print(exc_type, fname, exc_tb.tb_lineno)
+ num_failed += 1
+ failed_tests.append("".join(str(depth_profile) + " " + str(color_profile)))
+
+ finally:
+ #this step is important because the test will fail next time
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
+ print("Tests passed " + str(num_passed))
+ print("Tests skipped " + str(len(skipped_tests)))
+ if len(skipped_tests):
+ debug_print("\nSkipped tests:" + params['device_type'])
+ debug_print("\n".join(skipped_tests))
+ print("Tests failed " + str(num_failed))
+ if len(failed_tests):
+ print("\nFailed tests:" + params['device_type'])
+ print("\n".join(failed_tests))
+
+ def disable_all_params(self):
+ self.set_bool_param('enable_color', False)
+ self.set_bool_param('enable_depth', False)
+ self.set_bool_param('enable_infra', False)
+ self.set_bool_param('enable_infra1', False)
+ self.set_bool_param('enable_infra2', False)
+
diff --git a/realsense2_camera/test/live_camera/test_camera_all_profile_tests.py b/realsense2_camera/test/live_camera/test_camera_all_profile_tests.py
new file mode 100644
index 0000000000..b337e6f8d5
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_camera_all_profile_tests.py
@@ -0,0 +1,265 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+from pytest_rs_utils import launch_descr_with_parameters
+from pytest_rs_utils import delayed_launch_descr_with_parameters
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+import pytest_live_camera_utils
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+from pytest_live_camera_utils import debug_print
+def check_if_skip_test(profile, format):
+ '''
+ if profile == 'Color':
+ if "BGRA8" == format:
+ return True
+ if "RGBA8" == format:
+ return True
+ if "Y8" == format:
+ return True
+ elif profile == 'Depth':
+ if "Z16" == format:
+ return True
+ el
+ if profile == 'Infrared':
+ if "Y8" == format:
+ return True
+ if "Y16" == format:
+ return True
+ if "BGRA8" == format:
+ return True
+ if "RGBA8" == format:
+ return True
+ if "Y10BPACK" == format:
+ return True
+ if "UYVY" == format:
+ return True
+ if "BGR8" == format:
+ return True
+ if "RGB8" == format:
+ return True
+ if "RAW10" == format:
+ return True
+ elif profile == 'Infrared1':
+ if "Y8" ==format:
+ return True
+ if "Y16" ==format:
+ return True
+ if "Y10BPACK" == format:
+ return True
+ if "UYVY" ==format:
+ return True
+ if "BGR8" ==format:
+ return True
+ if "RGB8" ==format:
+ return True
+ if "RAW10" ==format:
+ return True
+ if profile == 'Infrared2':
+ if "Y8" == format:
+ return True
+ if "Y16" == format:
+ return True
+ if "Y10BPACK" == format:
+ return True
+ if "UYVY" == format:
+ return True
+ if "BGR8" == format:
+ return True
+ if "RGB8" == format:
+ return True
+ if "RAW10" == format:
+ return True
+ '''
+ if profile == 'Infrared':
+ if "Y8" == format:
+ return True
+ if "Y16" == format:
+ return True
+ if profile == 'Infrared1':
+ if "Y8" ==format:
+ return True
+ if "Y16" ==format:
+ return True
+ if profile == 'Infrared2':
+ if "Y8" == format:
+ return True
+ if "Y16" == format:
+ return True
+ return False
+
+
+test_params_all_profiles_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ }
+test_params_all_profiles_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ }
+test_params_all_profiles_d435 = {
+ 'camera_name': 'D435',
+ 'device_type': 'D435',
+ }
+
+
+'''
+This test was implemented as a template to set the parameters and run the test.
+This directory is not added to the CMakeLists.txt so as to avoid colcon failures on
+machines that don't have the D455 connected.
+1. Only a subset of parameter types is implemented in py_rs_utils; it has to be extended for others.
+2. After setting the param, rclpy.spin_once may be needed. The test passes even without this, though.
+'''
+@pytest.mark.skipif (os.getenv('RS_ROS_REGRESSION', "not found") == "not found",reason="Regression is not enabled, define RS_ROS_REGRESSION")
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_all_profiles_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_all_profiles_d415, marks=pytest.mark.d415),
+ pytest.param(test_params_all_profiles_d435, marks=pytest.mark.d435),]
+ ,indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestLiveCamera_Change_Resolution(pytest_rs_utils.RsTestBaseClass):
+ def test_LiveCamera_Change_Resolution(self,launch_descr_with_parameters):
+ skipped_tests = []
+ failed_tests = []
+ num_passed = 0
+ num_failed = 0
+ params = launch_descr_with_parameters[1]
+ themes = [{'topic':get_node_heirarchy(params)+'/color/image_raw', 'msg_type':msg_Image,'expected_data_chunks':1}]
+ config = pytest_live_camera_utils.get_profile_config(get_node_heirarchy(params))
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ self.init_test("RsTest"+params['camera_name'])
+ self.spin_for_time(wait_time=1.0)
+ cap = pytest_live_camera_utils.get_camera_capabilities(params['device_type'])
+ if cap == None:
+ debug_print("Device not found? : " + params['device_type'])
+ return
+ self.create_param_ifs(get_node_heirarchy(params))
+ config["Color"]["default_profile1"],config["Color"]["default_profile2"] = pytest_live_camera_utils.get_default_profiles(cap["color_profile"], "Color")
+ config["Depth"]["default_profile1"],config["Depth"]["default_profile2"] = pytest_live_camera_utils.get_default_profiles(cap["depth_profile"], "Depth")
+ config["Infrared"]["default_profile1"],config["Infrared"]["default_profile2"] = pytest_live_camera_utils.get_default_profiles(cap["depth_profile"], "Infrared")
+ config["Infrared1"]["default_profile1"],config["Infrared1"]["default_profile2"] = pytest_live_camera_utils.get_default_profiles(cap["depth_profile"], "Infrared1")
+ config["Infrared2"]["default_profile1"],config["Infrared2"]["default_profile2"] = pytest_live_camera_utils.get_default_profiles(cap["depth_profile"], "Infrared2")
+ for key in cap["color_profile"]:
+ profile_type = key[0]
+ profile = key[1]
+ format = key[2]
+ if check_if_skip_test(profile_type, format):
+ skipped_tests.append(" ".join(key))
+ continue
+ print("Testing " + " ".join(key))
+ themes[0]['topic'] = config[profile_type]['topic']
+ themes[0]['width'] = int(profile.split('x')[0])
+ themes[0]['height'] = int(profile.split('x')[1])
+ if themes[0]['width'] == int(config[profile_type]["default_profile2"].split('x')[0]):
+ self.set_string_param(config[profile_type]["profile"], config[profile_type]["default_profile1"])
+ else:
+ self.set_string_param(config[profile_type]["profile"], config[profile_type]["default_profile2"])
+ self.set_bool_param(config[profile_type]["param"], True)
+ self.disable_all_params()
+ self.spin_for_time(wait_time=0.2)
+ self.set_string_param(config[profile_type]["profile"], profile)
+ self.set_string_param(config[profile_type]["format"], format)
+ self.set_bool_param(config[profile_type]["param"], True)
+ try:
+ ret = self.run_test(themes)
+ assert ret[0], ret[1]
+ assert self.process_data(themes), " ".join(key) + " failed"
+ num_passed += 1
+ except Exception as e:
+ exc_type, exc_obj, exc_tb = sys.exc_info()
+ fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
+ print("Test failed")
+ print(e)
+ print(exc_type, fname, exc_tb.tb_lineno)
+ num_failed += 1
+ failed_tests.append(" ".join(key))
+ debug_print("Color tests completed")
+ for key in cap["depth_profile"]:
+ profile_type = key[0]
+ profile = key[1]
+ format = key[2]
+ if check_if_skip_test(profile_type, format):
+ skipped_tests.append(" ".join(key))
+ continue
+ print("Testing " + " ".join(key))
+
+ themes[0]['topic'] = config[profile_type]['topic']
+ themes[0]['width'] = int(profile.split('x')[0])
+ themes[0]['height'] = int(profile.split('x')[1])
+ if themes[0]['width'] == int(config[profile_type]["default_profile2"].split('x')[0]):
+ self.set_string_param(config[profile_type]["profile"], config[profile_type]["default_profile1"])
+ else:
+ self.set_string_param(config[profile_type]["profile"], config[profile_type]["default_profile2"])
+ self.set_bool_param(config[profile_type]["param"], True)
+
+
+ self.disable_all_params()
+ self.spin_for_time(wait_time=0.2)
+ self.set_string_param(config[profile_type]["profile"], profile)
+ self.set_string_param(config[profile_type]["format"], format)
+ self.set_bool_param(config[profile_type]["param"], True)
+ try:
+ ret = self.run_test(themes)
+ assert ret[0], ret[1]
+ assert self.process_data(themes), " ".join(key) + " failed"
+ num_passed += 1
+ except Exception as e:
+ print("Test failed")
+ print(e)
+ num_failed += 1
+ failed_tests.append(" ".join(key))
+ debug_print("Depth tests completed")
+ finally:
+ #this step is important because the test will fail next time
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
+ print("Tests passed " + str(num_passed))
+ print("Tests skipped " + str(len(skipped_tests)))
+ if len(skipped_tests):
+ debug_print("\nSkipped tests:" + params['device_type'])
+ debug_print("\n".join(skipped_tests))
+ print("Tests failed " + str(num_failed))
+ if len(failed_tests):
+ print("\nFailed tests:" + params['device_type'])
+ print("\n".join(failed_tests))
+
+ def disable_all_params(self):
+ self.set_bool_param('enable_color', False)
+ self.set_bool_param('enable_depth', False)
+ self.set_bool_param('enable_infra', False)
+ self.set_bool_param('enable_infra1', False)
+ self.set_bool_param('enable_infra2', False)
+
diff --git a/realsense2_camera/test/live_camera/test_camera_fps_tests.py b/realsense2_camera/test/live_camera/test_camera_fps_tests.py
new file mode 100644
index 0000000000..c4b91354d4
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_camera_fps_tests.py
@@ -0,0 +1,128 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+import pytest_live_camera_utils
+from pytest_rs_utils import launch_descr_with_parameters
+
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+
+test_params_test_fps_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+
+ }
+test_params_test_fps_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ }
+test_profiles ={
+ 'D455':{
+ 'depth_test_profiles':['640x480x5','640x480x15', '640x480x30', '640x480x90'],
+ 'color_test_profiles':['640x480x5','640x480x15', '640x480x30', '1280x720x30']
+ },
+ 'D415':{
+ 'depth_test_profiles':['640x480x6','640x480x15', '640x480x30', '640x480x90'],
+ 'color_test_profiles':['640x480x6','640x480x15', '640x480x30', '1280x720x30']
+ }
+}
+'''
+This test checks the FPS of the Depth and Color streams. RosTopicHz had to be
+modified to make it work; see pytest_rs_utils (HzWrapper) for more details.
+To check the FPS, an 'expected_fps_in_hz' value has to be added to the corresponding theme.
+'''
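+# Illustrative only: a theme entry used for an FPS check looks roughly like the ones
+# built in the test below, e.g.
+#   {'topic': '<camera>/depth/image_rect_raw', 'msg_type': msg_Image,
+#    'expected_data_chunks': 100, 'expected_fps_in_hz': 30.0}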
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_test_fps_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_test_fps_d415, marks=pytest.mark.d415),
+ #pytest.param(test_params_test_fps_d435, marks=pytest.mark.d435),
+ ],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestCamera_TestFPS(pytest_rs_utils.RsTestBaseClass):
+ def test_camera_test_fps(self,launch_descr_with_parameters):
+ params = launch_descr_with_parameters[1]
+ if pytest_live_camera_utils.check_if_camera_connected(params['device_type']) == False:
+ print("Device not found? : " + params['device_type'])
+ return
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ print("Starting camera test...")
+ self.init_test("RsTest"+params['camera_name'])
+ self.wait_for_node(params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(params))
+ #assert self.set_bool_param('enable_color', False)
+
+ themes = [
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':100,
+ }
+ ]
+ profiles = test_profiles[params['camera_name']]['depth_test_profiles']
+ assert self.set_bool_param('enable_color', False)
+ for profile in profiles:
+ print("Testing profile: ", profile)
+ themes[0]['expected_fps_in_hz'] = float(profile.split('x')[2])
+ assert self.set_string_param('depth_module.profile', profile)
+ assert self.set_bool_param('enable_depth', True)
+ self.spin_for_time(0.5)
+ ret = self.run_test(themes, timeout=25.0)
+ assert ret[0], ret[1]
+ assert self.process_data(themes)
+
+ themes = [
+ {'topic':get_node_heirarchy(params)+'/color/image_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':100,
+ }
+ ]
+ assert self.set_bool_param('enable_depth', False)
+ profiles = test_profiles[params['camera_name']]['color_test_profiles']
+ for profile in profiles:
+ print("Testing profile: ", profile)
+ themes[0]['expected_fps_in_hz'] = float(profile.split('x')[2])
+ assert self.set_string_param('rgb_camera.profile', profile)
+ assert self.set_bool_param('enable_color', True)
+ ret = self.run_test(themes, timeout=25.0)
+ assert ret[0], ret[1]
+ assert self.process_data(themes)
+
+ finally:
+            #this step is important; otherwise the camera node keeps running and the next test run will fail
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
+
+
diff --git a/realsense2_camera/test/live_camera/test_camera_imu_tests.py b/realsense2_camera/test/live_camera/test_camera_imu_tests.py
new file mode 100644
index 0000000000..384d249f47
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_camera_imu_tests.py
@@ -0,0 +1,120 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+from pytest_rs_utils import launch_descr_with_parameters
+from pytest_rs_utils import delayed_launch_descr_with_parameters
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+
+import pytest_live_camera_utils
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+from pytest_live_camera_utils import debug_print
+
+test_params_all_profiles_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'enable_accel':"True",
+ 'enable_gyro':"True",
+ 'unite_imu_method':1,
+ }
+
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_all_profiles_d455, marks=pytest.mark.d455)
+ ],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestLiveCamera_TestMotionSensor(pytest_rs_utils.RsTestBaseClass):
+ def test_LiveCamera_check_motion_sensor(self,launch_descr_with_parameters):
+ params = launch_descr_with_parameters[1]
+ if pytest_live_camera_utils.check_if_camera_connected(params['device_type']) == False:
+ print("Device not found? : " + params['device_type'])
+ return
+ themes = [{'topic':get_node_heirarchy(params)+'/imu', 'msg_type':msg_Imu,'expected_data_chunks':1},
+ {'topic':get_node_heirarchy(params)+'/gyro/sample', 'msg_type':msg_Imu,'expected_data_chunks':1},
+ {'topic':get_node_heirarchy(params)+'/accel/sample', 'msg_type':msg_Imu,'expected_data_chunks':1}]
+ IMU_TOPIC = 0
+ GYRO_TOPIC = 1
+ ACCEL_TOPIC = 2
+ if params['unite_imu_method'] == '0':
+ themes[IMU_TOPIC]['expected_data_chunks'] = 0
+ try:
+ #initialize
+ self.init_test("RsTest"+params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(params))
+
+
+ #run with default params and check the data
+ msg = "Test with the default params "
+ print(msg)
+ ret = self.run_test(themes, timeout=50)
+ assert ret[0], msg + str(ret[1])
+ assert self.process_data(themes), msg + " failed"
+
+ msg = "Test with the accel false "
+ self.set_integer_param('unite_imu_method', 0)
+ self.set_bool_param('enable_accel', False)
+ self.set_bool_param('enable_gyro', True)
+ themes[ACCEL_TOPIC]['expected_data_chunks'] = 0
+ themes[IMU_TOPIC]['expected_data_chunks'] = 0
+ print(msg)
+ ret = self.run_test(themes)
+ assert ret[0], msg + str(ret[1])
+ assert self.process_data(themes), msg + " failed"
+
+ msg = "Test with the gyro false "
+ self.set_bool_param('enable_accel', True)
+ self.set_bool_param('enable_gyro', False)
+ themes[IMU_TOPIC]['expected_data_chunks'] = 0
+ themes[ACCEL_TOPIC]['expected_data_chunks'] = 1
+ themes[GYRO_TOPIC]['expected_data_chunks'] = 0
+ print(msg)
+ self.spin_for_time(wait_time=1.0)
+ ret = self.run_test(themes)
+ assert ret[0], msg + str(ret[1])
+ assert self.process_data(themes), msg + " failed"
+
+ msg = "Test with both gyro and accel false "
+ self.set_bool_param('enable_accel', False)
+ self.set_bool_param('enable_gyro', False)
+ self.set_integer_param('unite_imu_method', 1)
+ self.spin_for_time(wait_time=1.0)
+ themes[ACCEL_TOPIC]['expected_data_chunks'] = 0
+ themes[GYRO_TOPIC]['expected_data_chunks'] = 0
+ themes[IMU_TOPIC]['expected_data_chunks'] = 0
+ print(msg)
+ ret = self.run_test(themes, initial_wait_time=1.0)
+ assert ret[0], msg + " failed"
+ assert self.process_data(themes), msg +" failed"
+ finally:
+            #this step is important; otherwise the camera node keeps running and the next test run will fail
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
diff --git a/realsense2_camera/test/live_camera/test_camera_point_cloud_tests.py b/realsense2_camera/test/live_camera/test_camera_point_cloud_tests.py
new file mode 100644
index 0000000000..14def86ca9
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_camera_point_cloud_tests.py
@@ -0,0 +1,114 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from rclpy.qos import DurabilityPolicy
+from rclpy.qos import HistoryPolicy
+from rclpy.qos import QoSProfile
+import tf2_ros
+
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+from tf2_msgs.msg import TFMessage as msg_TFMessage
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+from pytest_rs_utils import launch_descr_with_parameters
+
+import pytest_live_camera_utils
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+
+'''
+The tests below check that the point cloud topic (<camera>/depth/color/points) publishes
+data when pointcloud.enable is set for the camera under test.
+'''
+test_params_points_cloud_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'pointcloud.enable': 'true'
+ }
+test_params_points_cloud_d435 = {
+ 'camera_name': 'D435',
+ 'device_type': 'D435',
+ 'pointcloud.enable': 'true'
+ }
+
+test_params_points_cloud_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ 'enable_color':'true',
+ 'enable_depth':'true',
+ 'depth_module.profile':'848x480x30',
+ 'pointcloud.enable': 'true'
+ }
+
+'''
+This test was ported from rs2_test.py; the command used to run it there is
+"python3 realsense2_camera/scripts/rs2_test.py points_cloud_1".
+'''
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_points_cloud_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_points_cloud_d435, marks=pytest.mark.d435),
+ pytest.param(test_params_points_cloud_d415, marks=pytest.mark.d415),
+ ],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestCamera_TestPointCloud(pytest_rs_utils.RsTestBaseClass):
+ def test_camera_test_point_cloud(self,launch_descr_with_parameters):
+ self.params = launch_descr_with_parameters[1]
+ if pytest_live_camera_utils.check_if_camera_connected(self.params['device_type']) == False:
+ print("Device not found? : " + self.params['device_type'])
+ return
+ themes = [
+ {
+ 'topic':get_node_heirarchy(self.params)+'/depth/color/points',
+ 'msg_type':msg_PointCloud2,
+ 'expected_data_chunks':5,
+ },
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ self.init_test("RsTest"+self.params['camera_name'])
+ self.wait_for_node(self.params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(self.params))
+ ret = self.run_test(themes, timeout=10)
+ assert ret[0], ret[1]
+ ret = self.process_data(themes, False)
+ assert ret[0], ret[1]
+ finally:
+ self.shutdown()
+ def process_data(self, themes, enable_infra1):
+ for count in range(self.node.get_num_chunks(get_node_heirarchy(self.params)+'/depth/color/points')):
+ data = self.node.pop_first_chunk(get_node_heirarchy(self.params)+'/depth/color/points')
+            print(data)  # the frame counter starts at zero, jumps to 2 and then continues; to be checked
+ return True,""
+
diff --git a/realsense2_camera/test/live_camera/test_camera_tf_tests.py b/realsense2_camera/test/live_camera/test_camera_tf_tests.py
new file mode 100644
index 0000000000..a31f74e77c
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_camera_tf_tests.py
@@ -0,0 +1,219 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from rclpy.qos import DurabilityPolicy
+from rclpy.qos import HistoryPolicy
+from rclpy.qos import QoSProfile
+import tf2_ros
+
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+from tf2_msgs.msg import TFMessage as msg_TFMessage
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+from pytest_rs_utils import launch_descr_with_parameters
+
+import pytest_live_camera_utils
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+
+'''
+The test checks /tf_static before and after infra1 is enabled. There shouldn't be any
+infra1-related transform data before it is enabled.
+'''
+test_params_tf_static_change_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'enable_infra1': 'false',
+ 'enable_infra2': 'true',
+ 'enable_accel': 'true',
+ 'enable_gyro': 'true',
+ }
+test_params_tf_static_change_d435 = {
+ 'camera_name': 'D435',
+ 'device_type': 'D435',
+ 'enable_infra1': 'false',
+ 'enable_infra2': 'true',
+ 'enable_accel': 'true',
+ 'enable_gyro': 'true',
+ }
+
+test_params_tf_static_change_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ 'enable_infra1': 'false',
+ 'enable_infra2': 'true',
+ 'enable_accel': 'true',
+ 'enable_gyro': 'true',
+ }
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_tf_static_change_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_tf_static_change_d435, marks=pytest.mark.d435),
+ pytest.param(test_params_tf_static_change_d415, marks=pytest.mark.d415),
+ ],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestCamera_TestTF_Static_change(pytest_rs_utils.RsTestBaseClass):
+ def test_camera_test_tf_static_change(self,launch_descr_with_parameters):
+ self.params = launch_descr_with_parameters[1]
+ if pytest_live_camera_utils.check_if_camera_connected(self.params['device_type']) == False:
+ print("Device not found? : " + self.params['device_type'])
+ return
+ themes = [
+ {'topic':'/tf_static',
+ 'msg_type':msg_TFMessage,
+ 'expected_data_chunks':1,
+ 'qos' : QoSProfile(depth=100,durability=DurabilityPolicy.TRANSIENT_LOCAL,history=HistoryPolicy.KEEP_LAST,)#static
+ }
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ self.init_test("RsTest"+self.params['camera_name'])
+ self.wait_for_node(self.params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(self.params))
+ ret = self.run_test(themes, timeout=10)
+ assert ret[0], ret[1]
+
+ ret = self.process_data(themes, False)
+ assert ret[0], ret[1]
+ assert self.set_bool_param('enable_infra1', True)
+
+ ret = self.run_test(themes, timeout=10)
+ assert ret[0], ret[1]
+ ret = self.process_data(themes, True)
+ assert ret[0], ret[1]
+ finally:
+ self.shutdown()
+ def process_data(self, themes, enable_infra1):
+ frame_ids = [self.params['camera_name']+'_link',
+ self.params['camera_name']+'_depth_frame',
+ self.params['camera_name']+'_infra2_frame',
+ self.params['camera_name']+'_color_frame']
+ if self.params['device_type'] == 'D455':
+ frame_ids.append(self.params['camera_name']+'_gyro_frame')
+ frame_ids.append(self.params['camera_name']+'_accel_frame')
+ frame_ids.append(self.params['camera_name']+'_infra1_frame')
+ data = self.node.pop_first_chunk('/tf_static')
+ coupled_frame_ids = [xx for xx in itertools.combinations(frame_ids, 2)]
+ res = self.get_transform_data(data, coupled_frame_ids, is_static=True)
+ for couple in coupled_frame_ids:
+ if self.params['camera_name']+'_infra1_frame' in couple:
+ if enable_infra1 == True and res[couple] != None:
+ continue
+ if enable_infra1 == False and res[couple] == None:
+ continue
+ return False, str(couple) + ": tf_data not as expected"
+ if res[couple] == None:
+ return False, str(couple) + ": didn't get any tf data"
+ return True,""
+
+
+test_params_tf_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'publish_tf': 'true',
+ 'tf_publish_rate': '1.1',
+ }
+
+test_params_tf_d435 = {
+ 'camera_name': 'D435',
+ 'device_type': 'D435',
+ 'publish_tf': 'true',
+ 'tf_publish_rate': '1.1',
+ }
+
+test_params_tf_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ 'publish_tf': 'true',
+ 'tf_publish_rate': '1.1',
+ }
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_tf_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_tf_d435, marks=pytest.mark.d435),
+ pytest.param(test_params_tf_d415, marks=pytest.mark.d415),
+ ],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestCamera_TestTF_DYN(pytest_rs_utils.RsTestBaseClass):
+ def test_camera_test_tf_dyn(self,launch_descr_with_parameters):
+ self.params = launch_descr_with_parameters[1]
+ if pytest_live_camera_utils.check_if_camera_connected(self.params['device_type']) == False:
+ print("Device not found? : " + self.params['device_type'])
+ return
+ themes = [
+ {'topic':'/tf',
+ 'msg_type':msg_TFMessage,
+ 'expected_data_chunks':3,
+ 'qos' : QoSProfile(depth=100,durability=DurabilityPolicy.VOLATILE,history=HistoryPolicy.KEEP_LAST,)
+ #'qos' : QoSProfile(depth=100,durability=DurabilityPolicy.TRANSIENT_LOCAL,history=HistoryPolicy.KEEP_LAST,)#static
+ },
+ {'topic':'/tf_static',
+ 'msg_type':msg_TFMessage,
+ 'expected_data_chunks':1,
+ #'qos' : QoSProfile(depth=100,durability=DurabilityPolicy.VOLATILE,history=HistoryPolicy.KEEP_LAST,)
+ 'qos' : QoSProfile(depth=100,durability=DurabilityPolicy.TRANSIENT_LOCAL,history=HistoryPolicy.KEEP_LAST,)#static
+ }
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ self.init_test("RsTest"+self.params['camera_name'])
+ self.wait_for_node(self.params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(self.params))
+ ret = self.run_test(themes, timeout=10)
+ assert ret[0], ret[1]
+ ret = self.process_data(themes, False)
+ assert ret[0], ret[1]
+ finally:
+ self.shutdown()
+
+ def process_data(self, themes, enable_infra1):
+ frame_ids = [self.params['camera_name']+'_link',
+ self.params['camera_name']+'_depth_frame',
+ self.params['camera_name']+'_color_frame']
+ data = self.node.pop_first_chunk('/tf_static')
+ ret = self.check_transform_data(data, frame_ids, True)
+ assert ret[0], ret[1]
+ data = self.node.pop_first_chunk('/tf')
+ ret = self.check_transform_data(data, frame_ids)
+ assert ret[0], ret[1]
+ data = self.node.pop_first_chunk('/tf')
+ ret = self.check_transform_data(data, frame_ids)
+ assert ret[0], ret[1]
+ data = self.node.pop_first_chunk('/tf')
+ ret = self.check_transform_data(data, frame_ids)
+ assert ret[0], ret[1]
+ #return True, ""
+
+ return True, ""
diff --git a/realsense2_camera/test/live_camera/test_d415_basic_tests.py b/realsense2_camera/test/live_camera/test_d415_basic_tests.py
new file mode 100644
index 0000000000..649f84e7bd
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_d415_basic_tests.py
@@ -0,0 +1,155 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+from pytest_rs_utils import launch_descr_with_parameters
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+import pytest_live_camera_utils
+
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+
+test_params_depth_avg_1 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ }
+'''
+This test was implemented as a template for setting parameters and running a test.
+This directory is not added to the CMakeLists.txt, to avoid colcon failures on
+machines that don't have a D415 connected.
+1. Only a subset of parameter types is implemented in pytest_rs_utils; it has to be extended for others.
+2. After setting a param, rclpy.spin_once may be needed. The test passes even without it, though.
+'''
+@pytest.mark.d415
+@pytest.mark.parametrize("launch_descr_with_parameters", [test_params_depth_avg_1],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestD415_Change_Resolution(pytest_rs_utils.RsTestBaseClass):
+ def test_D415_Change_Resolution(self,launch_descr_with_parameters):
+ params = launch_descr_with_parameters[1]
+ if pytest_live_camera_utils.check_if_camera_connected(params['device_type']) == False:
+ print("Device not found? : " + params['device_type'])
+ return
+ failed_tests = []
+ num_passed = 0
+
+ num_failed = 0
+ themes = [{'topic':get_node_heirarchy(params)+'/color/image_raw', 'msg_type':msg_Image,'expected_data_chunks':1}]
+ config = pytest_live_camera_utils.get_profile_config(get_node_heirarchy(params))
+ config["Color"]["default_profile1"] = "640x480x6"
+ config["Color"]["default_profile2"] = "1280x720x6"
+ config["Depth"]["default_profile1"] = "640x480x6"
+ config["Depth"]["default_profile2"] = "1280x720x6"
+ config["Infrared"]["default_profile1"] = "640x480x6"
+ config["Infrared"]["default_profile2"] = "1280x720x6"
+ config["Infrared1"]["default_profile1"] = "640x480x6"
+ config["Infrared1"]["default_profile2"] = "1280x720x6"
+ config["Infrared2"]["default_profile1"] = "640x480x6"
+ config["Infrared2"]["default_profile2"] = "1280x720x6"
+
+ cap = [
+ #['Infrared1', '1920x1080x25', 'Y8'],
+ #['Infrared1', '1920x1080x15', 'Y16'],
+ ['Infrared', '848x100x100', 'BGRA8'],
+ ['Infrared', '848x480x60', 'RGBA8'],
+ ['Infrared', '640x480x60', 'RGBA8'],
+ ['Infrared', '640x480x60', 'BGR8'],
+ ['Infrared', '640x360x60', 'BGRA8'],
+ ['Infrared', '640x360x60', 'BGR8'],
+ ['Infrared', '640x360x30', 'UYVY'],
+ ['Infrared', '480x270x60', 'BGRA8'],
+ ['Infrared', '480x270x60', 'RGB8'],
+ ['Infrared', '424x240x60', 'BGRA8'],
+
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ self.init_test("RsTest"+params['camera_name'])
+ self.spin_for_time(wait_time=1.0)
+ self.create_param_ifs(get_node_heirarchy(params))
+
+ for key in cap:
+ profile_type = key[0]
+ profile = key[1]
+ format = key[2]
+ print("Testing " + " ".join(key))
+ themes[0]['topic'] = config[profile_type]['topic']
+ themes[0]['width'] = int(profile.split('x')[0])
+ themes[0]['height'] = int(profile.split('x')[1])
+ #'''
+ if themes[0]['width'] == int(config[profile_type]["default_profile2"].split('x')[0]):
+ self.set_string_param(config[profile_type]["profile"], config[profile_type]["default_profile1"])
+ else:
+ self.set_string_param(config[profile_type]["profile"], config[profile_type]["default_profile2"])
+ self.set_bool_param(config[profile_type]["param"], True)
+ self.disable_all_params()
+ #self.set_string_param("depth_profile", "640x480x6")
+ #self.set_bool_param("enable_depth", True)
+ #'''
+ self.spin_for_time(wait_time=0.2)
+ self.set_string_param(config[profile_type]["profile"], profile)
+ self.set_string_param(config[profile_type]["format"], format)
+ self.set_bool_param(config[profile_type]["param"], True)
+ try:
+ ret = self.run_test(themes, timeout=5.0)
+ assert ret[0], ret[1]
+ assert self.process_data(themes), " ".join(key) + " failed"
+ num_passed += 1
+ except Exception as e:
+ exc_type, exc_obj, exc_tb = sys.exc_info()
+ fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
+ print("Test failed")
+ print(e)
+ print(exc_type, fname, exc_tb.tb_lineno)
+ num_failed += 1
+ failed_tests.append(" ".join(key))
+ finally:
+            #this step is important; otherwise the camera node keeps running and the next test run will fail
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
+ if num_failed != 0:
+ print("Failed tests:")
+ print("\n".join(failed_tests))
+ assert False, " Tests failed"
+
+
+ def disable_all_params(self):
+ '''
+ self.set_bool_param('enable_color', False)
+ self.set_bool_param('enable_depth', False)
+ self.set_bool_param('enable_infra', False)
+ self.set_bool_param('enable_infra1', False)
+ self.set_bool_param('enable_infra2', False)
+ '''
+ pass
+
diff --git a/realsense2_camera/test/live_camera/test_d455_basic_tests.py b/realsense2_camera/test/live_camera/test_d455_basic_tests.py
new file mode 100644
index 0000000000..7848ff4986
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_d455_basic_tests.py
@@ -0,0 +1,99 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+import pytest_live_camera_utils
+from pytest_rs_utils import launch_descr_with_parameters
+
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+
+test_params_depth_avg_1 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ }
+'''
+This test was implemented as a template for setting parameters and running a test.
+This directory is not added to the CMakeLists.txt, to avoid colcon failures on
+machines that don't have a D455 connected.
+1. Only a subset of parameter types is implemented in pytest_rs_utils; it has to be extended for others.
+2. After setting a param, rclpy.spin_once may be needed. The test passes even without it, though.
+'''
+@pytest.mark.d455
+@pytest.mark.parametrize("launch_descr_with_parameters", [test_params_depth_avg_1],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestD455_Change_Resolution(pytest_rs_utils.RsTestBaseClass):
+ def test_D455_Change_Resolution(self,launch_descr_with_parameters):
+ params = launch_descr_with_parameters[1]
+ if pytest_live_camera_utils.check_if_camera_connected(params['device_type']) == False:
+ print("Device not found? : " + params['device_type'])
+ return
+
+ themes = [
+ {'topic':get_node_heirarchy(params)+'/color/image_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':640,
+ 'height':480,
+ #'data':data
+ }
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ print("Starting camera test...")
+ self.init_test("RsTest"+params['camera_name'])
+ self.wait_for_node(params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(params))
+ #assert self.set_bool_param('enable_color', False)
+ assert self.set_string_param('rgb_camera.profile', '640x480x30')
+ assert self.set_bool_param('enable_color', True)
+ ret = self.run_test(themes)
+ assert ret[0], ret[1]
+ assert self.process_data(themes)
+ self.set_string_param('rgb_camera.profile', '1280x800x5')
+ self.set_bool_param('enable_color', True)
+ themes[0]['width'] = 1280
+ themes[0]['height'] = 800
+
+ ret = self.run_test(themes)
+ assert ret[0], ret[1]
+ assert self.process_data(themes)
+ finally:
+            #this step is important; otherwise the camera node keeps running and the next test run will fail
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
+
+
diff --git a/realsense2_camera/test/rosbag/test_rosbag_all_topics_test.py b/realsense2_camera/test/rosbag/test_rosbag_all_topics_test.py
index 34abbc3039..9cda836f3a 100644
--- a/realsense2_camera/test/rosbag/test_rosbag_all_topics_test.py
+++ b/realsense2_camera/test/rosbag/test_rosbag_all_topics_test.py
@@ -44,21 +44,22 @@
test_params_all_topics = {"rosbag_filename":get_rosbag_file_path("outdoors_1color.bag"),
'camera_name': 'AllTopics',
- 'color_width': '0',
- 'color_height': '0',
- 'depth_width': '0',
- 'depth_height': '0',
- 'infra_width': '0',
- 'infra_height': '0',
'enable_infra1':'true',
'enable_infra2':'true',
- #'align_depth.enable':'true',
+ 'align_depth.enable':'true',
}
'''
To test all topics published
'''
-@pytest.mark.skip
+'''
+To test all topics published.
+
+The test doesn't work in CI due to a synchronization issue: unlike the live camera, the
+rosbag node publishes the extrinsics only once, not every time a subscription is created.
+Because of limited resources, CI can start the ROS node much earlier than the testcase,
+so the test misses the published data.
+'''
@pytest.mark.rosbag
+@pytest.mark.skipif(os.getenv('RS_ROS_REGRESSION', "not found") == "not found", reason="The test doesn't work in CI")
@pytest.mark.parametrize("delayed_launch_descr_with_parameters", [test_params_all_topics],indirect=True)
@pytest.mark.launch(fixture=delayed_launch_descr_with_parameters)
class TestAllTopics(pytest_rs_utils.RsTestBaseClass):
diff --git a/realsense2_camera/test/rosbag/test_rosbag_depth_tests.py b/realsense2_camera/test/rosbag/test_rosbag_depth_tests.py
index abbfe28cab..51d574da85 100644
--- a/realsense2_camera/test/rosbag/test_rosbag_depth_tests.py
+++ b/realsense2_camera/test/rosbag/test_rosbag_depth_tests.py
@@ -20,10 +20,14 @@
import pytest
import rclpy
+from rclpy.qos import DurabilityPolicy
+from rclpy.qos import HistoryPolicy
+from rclpy.qos import QoSProfile
from sensor_msgs.msg import Image as msg_Image
from sensor_msgs.msg import Imu as msg_Imu
from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+from tf2_msgs.msg import TFMessage as msg_TFMessage
import numpy as np
@@ -78,12 +82,6 @@ def test_depth_points_cloud_1(self,delayed_launch_descr_with_parameters):
'expected_data_chunks':1,
'data':data1
},
- {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
- 'msg_type':msg_Image,
- 'expected_data_chunks':1,
- 'data':data2
- }
-
]
try:
'''
@@ -100,12 +98,10 @@ def process_data(self, themes):
test_params_static_tf_1 = {"rosbag_filename":get_rosbag_file_path("outdoors_1color.bag"),
- 'camera_name': 'Static_tf_1',
+ 'camera_name': 'Static_tf1',
'color_width': '0',
'color_height': '0',
- 'depth_width': '0',
- 'depth_height': '0',
- 'infra_width': '0',
+ "static_tf":True,
'infra_height': '0',
'enable_infra1':'true',
'enable_infra2':'true'
@@ -119,43 +115,45 @@ def process_data(self, themes):
@pytest.mark.launch(fixture=delayed_launch_descr_with_parameters)
class TestStaticTf1(pytest_rs_utils.RsTestBaseClass):
def test_static_tf_1(self,delayed_launch_descr_with_parameters):
- params = delayed_launch_descr_with_parameters[1]
- self.rosbag = params["rosbag_filename"]
- data = {('camera_link', 'camera_color_frame'): ([-0.00010158783697988838, 0.014841210097074509, -0.00022671300393994898], [-0.0008337442995980382, 0.0010442184284329414, -0.0009920650627464056, 0.9999986290931702]),
- ('camera_link', 'camera_depth_frame'): ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]),
- ('camera_link', 'camera_infra1_frame'): ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]),
- ('camera_depth_frame', 'camera_infra1_frame'): ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]),
- ('camera_depth_frame', 'camera_color_frame'): ([-0.00010158783697988838, 0.014841210097074509, -0.00022671300393994898], [-0.0008337442995980382, 0.0010442184284329414, -0.0009920650627464056, 0.9999986290931702]),
- ('camera_infra1_frame', 'camera_color_frame'): ([-0.00010158783697988838, 0.014841210097074509, -0.00022671300393994898], [-0.0008337442995980382, 0.0010442184284329414, -0.0009920650627464056, 0.9999986290931702])}
+ self.params = delayed_launch_descr_with_parameters[1]
+ self.rosbag = self.params["rosbag_filename"]
themes = [
- {'topic':get_node_heirarchy(params)+'/color/image_raw',
+ {'topic':get_node_heirarchy(self.params)+'/color/image_raw',
'msg_type':msg_Image,
'expected_data_chunks':1,
- 'data':data,
+ 'static_tf':True,
+ },
+ {'topic':'/tf_static',
+ 'msg_type':msg_TFMessage,
+ 'expected_data_chunks':1,
+ 'qos' : QoSProfile(depth=100,durability=DurabilityPolicy.TRANSIENT_LOCAL,history=HistoryPolicy.KEEP_LAST,)#static
}
]
try:
'''
initialize, run and check the data
'''
- self.init_test("RsTest"+params['camera_name'])
+ self.init_test("RsTest"+self.params['camera_name'])
ret = self.run_test(themes)
assert ret[0], ret[1]
assert self.process_data(themes)
finally:
self.shutdown()
def process_data(self, themes):
- #print ('Gathering static transforms')
- frame_ids = ['camera_link', 'camera_depth_frame', 'camera_infra1_frame', 'camera_infra2_frame', 'camera_color_frame', 'camera_fisheye_frame', 'camera_pose']
+ expected_data = {(self.params['camera_name']+'_link', self.params['camera_name']+'_color_frame'): ([-0.00010158783697988838, 0.014841210097074509, -0.00022671300393994898], [-0.0008337442995980382, 0.0010442184284329414, -0.0009920650627464056, 0.9999986290931702]),
+ (self.params['camera_name']+'_link', self.params['camera_name']+'_depth_frame'): ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]),
+ (self.params['camera_name']+'_link', self.params['camera_name']+'_infra1_frame'): ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]),
+ (self.params['camera_name']+'_depth_frame', self.params['camera_name']+'_infra1_frame'): ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]),
+ (self.params['camera_name']+'_depth_frame', self.params['camera_name']+'_color_frame'): ([-0.00010158783697988838, 0.014841210097074509, -0.00022671300393994898], [-0.0008337442995980382, 0.0010442184284329414, -0.0009920650627464056, 0.9999986290931702]),
+ (self.params['camera_name']+'_infra1_frame', self.params['camera_name']+'_color_frame'): ([-0.00010158783697988838, 0.014841210097074509, -0.00022671300393994898], [-0.0008337442995980382, 0.0010442184284329414, -0.0009920650627464056, 0.9999986290931702])}
+ frame_ids = [self.params['camera_name']+'_link', self.params['camera_name']+'_depth_frame', self.params['camera_name']+'_infra1_frame', self.params['camera_name']+'_infra2_frame', self.params['camera_name']+'_color_frame', self.params['camera_name']+'_fisheye_frame', self.params['camera_name']+'_pose']
+ coupled_frame_ids = [xx for xx in itertools.combinations(frame_ids, 2)]
+ data = self.node.pop_first_chunk('/tf_static')
coupled_frame_ids = [xx for xx in itertools.combinations(frame_ids, 2)]
- res = {}
- for couple in coupled_frame_ids:
- from_id, to_id = couple
- if (self.node.tfBuffer.can_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=3e6))):
- res[couple] = self.node.tfBuffer.lookup_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=1e6)).transform
- else:
- res[couple] = None
- return pytest_rs_utils.staticTFTest(res, themes[0]["data"])
+ tfs_data = self.get_transform_data(data, coupled_frame_ids, is_static=True)
+ ret = pytest_rs_utils.staticTFTest(tfs_data, expected_data)
+ assert ret[0], ret[1]
+ return ret[0]
test_params_non_existing_rosbag = {"rosbag_filename":"non_existent.bag",
diff --git a/realsense2_camera/test/utils/pytest_live_camera_utils.py b/realsense2_camera/test/utils/pytest_live_camera_utils.py
new file mode 100644
index 0000000000..9b5bfeac70
--- /dev/null
+++ b/realsense2_camera/test/utils/pytest_live_camera_utils.py
@@ -0,0 +1,207 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+import time
+import ctypes
+import struct
+import requests
+import json
+
+from pytest_rs_utils import debug_print
+
+
+def get_profile_config(camera_name):
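+    '''
+    Map each stream type to the node parameters and topic the tests use:
+    the profile parameter, the format parameter, the enable switch and the
+    image topic (relative to the given camera_name).
+    '''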
+ config = {
+ "Color":{"profile":"rgb_camera.profile", "format":'rgb_camera.color_format', "param":"enable_color", "topic":camera_name+'/color/image_raw',},
+ "Depth":{"profile":"depth_module.profile", "format":'depth_module.depth_format', "param":"enable_depth", 'topic':camera_name+'/depth/image_rect_raw'},
+ "Infrared":{"profile":"depth_module.profile", "format":'depth_module.infra_format', "param":"enable_infra", 'topic':camera_name+'/infra/image_rect_raw'},
+ "Infrared1":{"profile":"depth_module.profile", "format":'depth_module.infra1_format',"param":"enable_infra1", 'topic':camera_name+'/infra/image_rect_raw'},
+ "Infrared2":{"profile":"depth_module.profile", "format":'depth_module.infra2_format',"param":"enable_infra2", 'topic':camera_name+'/infra/image_rect_raw'},
+ }
+ return config
+
+
+def get_default_profiles(cap, profile):
+ profile1 = ""
+ profile2 = ""
+ for profiles in cap:
+ if profiles[0] == profile and int(profiles[1].split('x')[0]) != 640:
+ profile1 = profiles[1]
+ break
+ for profiles in cap:
+ if profiles[0] == profile and int(profiles[1].split('x')[0]) != int(profile1.split('x')[0]):
+ profile2 = profiles[1]
+ break
+ debug_print(profile + " default profile1:" + profile1)
+ debug_print(profile + " default profile2:" + profile2)
+ return profile1,profile2
+
+
+
+device_info_string = "Device info:"
+
+def get_device_info_location(long_data, index=0):
+ for line_no in range(index, len(long_data)):
+ if device_info_string in long_data[line_no]:
+ return line_no
+ return len(long_data)
+
+stream_profile_string = "Stream Profiles supported by"
+def get_stream_profile_location(long_data, start_index, end_index, profile_string):
+ for line_no in range(start_index, end_index):
+ if stream_profile_string in long_data[line_no]:
+ if profile_string in long_data[line_no]:
+ return line_no
+ return None
+
+def get_depth_profiles(long_data, start_index, end_index):
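+    '''
+    Parse the depth stream profile lines printed by rs-enumerate-devices.
+    Based on the token handling below, a line is assumed to look roughly like
+        Depth       1280x720    @ 30Hz   Z16     (5 tokens)
+        Infrared 1  1280x800    @ 30Hz   Y8      (6 tokens)
+    The trailing "Hz" is stripped from the fps token before the profile is stored.
+    '''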
+ cap = []
+ for line_no in range(start_index, end_index):
+ if len(long_data[line_no]) == 0:
+ break
+ debug_print("depth profile processing:" + long_data[line_no])
+ depth_profile = long_data[line_no].split()
+ if len(depth_profile) == 5:
+ profile = depth_profile[0]
+ value = depth_profile[1]+"x"+depth_profile[3]
+ format = depth_profile[4]
+ elif len(depth_profile) == 6:
+ profile = depth_profile[0]+depth_profile[1]
+ value = depth_profile[2]+"x"+depth_profile[4]
+ format = depth_profile[5]
+ else:
+            assert False, "Seems that the depth profile info format printed by rs-enumerate-devices has changed"
+ value = value[:-2]
+ debug_print("depth profile added: " + profile, value, format)
+ cap.append([profile, value, format])
+ debug_print(cap)
+ return cap
+
+
+def get_color_profiles(long_data, start_index, end_index):
+ cap = []
+ for line_no in range(start_index, end_index):
+ if len(long_data[line_no]) == 0:
+ break
+ debug_print("color profile processing:" + long_data[line_no])
+ color_profile = long_data[line_no].split()
+ if len(color_profile) == 5:
+ profile = color_profile[0]
+ value = color_profile[1]+"x"+color_profile[3]
+ format = color_profile[4]
+ else:
+            assert False, "Seems that the color profile info format printed by rs-enumerate-devices has changed"
+ value = value[:-2]
+ debug_print("color profile added: " + profile, value, format)
+ cap.append([profile, value, format])
+ debug_print(cap)
+ return cap
+
+NAME_LINE_INDEX = 1
+NAME_LINE_NAME_OFFSET = 4
+SERIAL_NO_LINE_INDEX = 2
+SERIAL_NO_VALUE_OFFSET = 3
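+# The offsets above assume a "Device info:" block of the form printed by
+# rs-enumerate-devices (illustrative, derived from the parsing in parse_device_info):
+#     Device info:
+#         Name                 : Intel RealSense D455
+#         Serial Number        : <serial_no>
+# so that splitting the Name line gives the device type at token index 4 and
+# splitting the Serial Number line gives the serial number at token index 3.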
+def parse_device_info(long_data, start_index, end_index, device_type, serial_no):
+ #after device_info, the next line should have the name and device type
+ capability = {}
+ debug_print("Searching for data between lines ", str(start_index) + " and " + str(end_index))
+ name_line = long_data[start_index+NAME_LINE_INDEX].split()
+ if name_line[0] != "Name":
+ assert False, "rs-enumerate-devices output format changed"
+    if name_line[NAME_LINE_NAME_OFFSET] != device_type:
+ debug_print("device not matching:" + name_line[NAME_LINE_NAME_OFFSET])
+ return None
+ debug_print("device matched:" + name_line[NAME_LINE_NAME_OFFSET])
+ if serial_no != None:
+ #next line after nameline should have the serial_no
+ serial_no_line = long_data[start_index+SERIAL_NO_LINE_INDEX].split()
+ if serial_no_line[0] != "Serial":
+ assert False, "rs-enumerate-devices output format changed"
+ if serial_no_line[SERIAL_NO_VALUE_OFFSET] != serial_no:
+ debug_print("serial_no not matching:" + serial_no_line[SERIAL_NO_VALUE_OFFSET])
+ return None
+ debug_print("serial_no matched:" + serial_no_line[SERIAL_NO_VALUE_OFFSET])
+ else:
+ serial_no = long_data[start_index+SERIAL_NO_LINE_INDEX].split()[SERIAL_NO_VALUE_OFFSET]
+
+ capability["device_type"] = device_type
+ capability["serial_no"] = serial_no
+ depth_profile_index = get_stream_profile_location(long_data, start_index, end_index, "Stereo Module")
+ if depth_profile_index != None:
+ capability["depth_profile"] = get_depth_profiles(long_data, depth_profile_index+3, end_index)
+ rgb_profile_index = get_stream_profile_location(long_data, start_index, end_index, "RGB Camera")
+ if rgb_profile_index != None:
+ capability["color_profile"] = get_color_profiles(long_data, rgb_profile_index+3, end_index)
+ return capability
+
+def get_camera_capabilities(device_type, serial_no=None):
+ long_data = os.popen("rs-enumerate-devices").read().splitlines()
+ debug_print(serial_no)
+ index = 0
+ while index < len(long_data):
+ index = get_device_info_location(long_data, index)
+ if index == len(long_data):
+ return
+ else:
+ debug_print("DeviceInfo found at: " + str(index))
+ start_index = index
+ index += 1
+ index = get_device_info_location(long_data, index)
+ capability = parse_device_info(long_data, start_index, index, device_type, serial_no)
+ if capability != None:
+ return capability
+ return None
+
+def get_camera_capabilities_short(device_type, serial_no=None):
+ short_data = os.popen("rs-enumerate-devices -s").read().splitlines()
+ print(serial_no)
+ for line in short_data:
+ print(line)
+ if device_type in line:
+ if serial_no is None or serial_no == "" :
+ print(device_type+ " found in " + line)
+ return
+ if serial_no in line:
+ print(device_type + " with serial_no " + serial_no +" found in " + line)
+ return
+ print(device_type + " not found")
+
+def check_if_camera_connected(device_type, serial_no=None):
+ long_data = os.popen("rs-enumerate-devices -s").read().splitlines()
+ debug_print(serial_no)
+ index = 0
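+    # Each device line in the short listing is assumed to look roughly like
+    #     Intel RealSense D455  <serial_no>  <fw-version>  <usb-type>
+    # so token 2 is the device type and token 3 is the serial number.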
+ for index in range(len(long_data)):
+ name_line = long_data[index].split()
+ if name_line[0] != "Intel":
+ continue
+ if name_line[2] != device_type:
+ continue
+ if serial_no == None:
+ return True
+ if serial_no == name_line[3]:
+ return True
+
+ return False
+
+if __name__ == '__main__':
+ device_type = 'D455'
+ serial_no = None
+ if len(sys.argv) > 1:
+ device_type = sys.argv[1]
+ if len(sys.argv) > 2:
+ serial_no = sys.argv[2]
+ cap = get_camera_capabilities(device_type, serial_no)
+ print("Capabilities:")
+ print(cap)
\ No newline at end of file
diff --git a/realsense2_camera/test/utils/pytest_rs_utils.py b/realsense2_camera/test/utils/pytest_rs_utils.py
index 63ede24d57..db9c09c3c0 100644
--- a/realsense2_camera/test/utils/pytest_rs_utils.py
+++ b/realsense2_camera/test/utils/pytest_rs_utils.py
@@ -15,8 +15,13 @@
import sys
import time
from collections import deque
+import functools
+import itertools
+import subprocess
+
import pytest
+
import numpy as np
from launch import LaunchDescription
@@ -28,10 +33,31 @@
from rclpy import qos
from rclpy.node import Node
from rclpy.utilities import ok
+from ros2topic.verb.bw import ROSTopicBandwidth
+from ros2topic.verb.hz import ROSTopicHz
+from ros2topic.api import get_msg_class
import ctypes
import struct
import requests
+import math
+
+#from rclpy.parameter import Parameter
+from rcl_interfaces.msg import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+'''
+humble doesn't have the SetParametersResult and SetParameters_Response imported using
+__init__.py. The below two lines can be used for iron and hopefully succeeding ROS2 versions
+'''
+#from rcl_interfaces.msg import SetParametersResult
+#from rcl_interfaces.srv import SetParameters_Response
+from rcl_interfaces.msg._set_parameters_result import SetParametersResult
+from rcl_interfaces.srv._set_parameters import SetParameters_Response
+
+from rcl_interfaces.msg import ParameterType
+from rcl_interfaces.msg import ParameterValue
+
from sensor_msgs.msg import Image as msg_Image
@@ -54,6 +80,11 @@
import tempfile
import os
import requests
+
+def debug_print(*args):
+    if False:  # flip to True to enable verbose test output
+ print(*args)
+
class RosbagManager(object):
def __new__(cls):
if not hasattr(cls, 'instance'):
@@ -82,12 +113,13 @@ def init(self):
def get_rosbag_path(self, filename):
if filename in self.rosbag_files:
return self.rosbag_location + "/" + filename
-rosbagMgr = RosbagManager()
+
def get_rosbag_file_path(filename):
+ rosbagMgr = RosbagManager()
path = rosbagMgr.get_rosbag_path(filename)
assert path, "No rosbag file found :"+filename
return path
-
+get_rosbag_file_path.rosbagMgr = None
def CameraInfoGetData(rec_filename, topic):
data = importRosbag(rec_filename, importTopics=[topic], log='ERROR', disable_bar=True)[topic]
@@ -412,6 +444,17 @@ def kill_realsense2_camera_node():
os.system(cmd)
pass
+'''
+Returns all the topics currently advertised; if camera_name is given, only topics
+matching that name are returned (via "ros2 topic list | grep").
+'''
+
+def get_all_topics(camera_name=None):
+ cmd = 'ros2 topic list'
+ if camera_name!=None:
+ cmd += '| grep ' + camera_name
+ direct_output = os.popen(cmd).read()
+ return direct_output
+
'''
get the default parameters from the launch script so that the test doesn't have to
get updated for each change to the parameter or default values
@@ -480,7 +523,7 @@ def get_rs_node_description(params):
#name=LaunchConfiguration("camera_name"),
namespace=params["camera_namespace"],
name=params["camera_name"],
- #prefix=['xterm -e gdb --args'],
+ #prefix=['xterm -e gdb -ex=run --args'],
executable='realsense2_camera_node',
parameters=[tmp_yaml.name],
output='screen',
@@ -552,6 +595,16 @@ def delayed_launch_descr_with_parameters(request):
actions = [
first_node,], period=period)
]),params
+class HzWrapper(ROSTopicHz):
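+    '''
+    Thin wrapper around ros2topic's ROSTopicHz that lets a test restart the
+    frequency measurement for a topic between runs (see restart_topic below).
+    '''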
+ def _callback_hz(self, m, topic=None):
+ if self.get_last_printed_tn(topic=topic) == 0:
+ self.set_last_printed_tn(self.get_msg_tn(topic=topic), topic=topic)
+ return self.callback_hz(m, topic)
+ def restart_topic(self, topic):
+ self._last_printed_tn[topic] = 0
+ self._times[topic].clear()
+ self._msg_tn[topic] = 0
+ self._msg_t0[topic] = -1
'''
This is that holds the test node that listens to a subscription created by a test.
@@ -564,6 +617,7 @@ def __init__(self, name='test_node'):
self.data = {}
self.tfBuffer = None
self.frame_counter = {}
+ self._ros_topic_hz = HzWrapper(super(), 10000, use_wtime=False)
def wait_for_node(self, node_name, timeout=8.0):
start = time.time()
@@ -576,14 +630,23 @@ def wait_for_node(self, node_name, timeout=8.0):
return True, ""
time.sleep(timeout/5)
return False, "Timed out waiting for "+ str(timeout)+ "seconds"
- def create_subscription(self, msg_type, topic , data_type, store_raw_data):
- super().create_subscription(msg_type, topic , self.rsCallback(topic,msg_type, store_raw_data), data_type)
+ def reset_data(self, topic):
self.data[topic] = deque()
self.frame_counter[topic] = 0
- if (os.getenv('ROS_DISTRO') != "dashing") and (self.tfBuffer == None):
+ self._ros_topic_hz.restart_topic(topic)
+
+ def create_subscription(self, msg_type, topic , data_type, store_raw_data, measure_hz):
+ self.reset_data(topic)
+ super().create_subscription(msg_type, topic , self.rsCallback(topic,msg_type, store_raw_data), data_type)
+ #hz measurements are not working
+ if measure_hz == True:
+ msg_class = get_msg_class(super(), topic, blocking=True, include_hidden_topics=True)
+ super().create_subscription(msg_class,topic,functools.partial(self._ros_topic_hz._callback_hz, topic=topic),data_type)
+ self._ros_topic_hz.set_last_printed_tn(0, topic=topic)
+
+ if self.tfBuffer == None:
self.tfBuffer = tf2_ros.Buffer()
self.tf_listener = tf2_ros.TransformListener(self.tfBuffer, super())
-
def get_num_chunks(self,topic):
return len(self.data[topic])
@@ -592,14 +655,14 @@ def pop_first_chunk(self, topic):
def image_msg_to_numpy(self, data):
fmtString = data.encoding
- if fmtString in ['mono8', '8UC1', 'bgr8', 'rgb8', 'bgra8', 'rgba8']:
+ if fmtString in ['mono8', '8UC1', 'bgr8', 'rgb8', 'bgra8', 'rgba8', 'yuv422_yuy2', 'yuv422']:
img = np.frombuffer(data.data, np.uint8)
elif fmtString in ['mono16', '16UC1', '16SC1']:
img = np.frombuffer(data.data, np.uint16)
elif fmtString == '32FC1':
img = np.frombuffer(data.data, np.float32)
else:
- print('image format not supported:' + fmtString)
+ print('py_rs_utils.image_msg_to_numpy:image format not supported:' + fmtString)
return None
depth = data.step / (data.width * img.dtype.itemsize)
@@ -613,9 +676,12 @@ def image_msg_to_numpy(self, data):
The processing of data is taken from the existing testcase in scripts rs2_test
'''
def rsCallback(self, topic, msg_type, store_raw_data):
- print("RSCallback")
+ debug_print("RSCallback")
def _rsCallback(data):
- print('Got the callback for ' + topic)
+ '''
+ enabling prints in callback reduces the fps in some cases
+ '''
+ debug_print('Got the callback for ' + topic)
#print(data.header)
self.flag = True
if store_raw_data == True:
@@ -707,23 +773,78 @@ def init_test(self,name='RsTestNode'):
self.flag = False
self.node = RsTestNode(name)
self.subscribed_topics = []
- def create_subscription(self, msg_type, topic, data_type, store_raw_data=False):
+
+ def wait_for_node(self, node_name, timeout=8.0):
+ self.node.wait_for_node(node_name, timeout)
+ def create_subscription(self, msg_type, topic, data_type, store_raw_data=False, measure_hz=False):
if not topic in self.subscribed_topics:
- self.node.create_subscription(msg_type, topic, data_type, store_raw_data)
+ self.node.create_subscription(msg_type, topic, data_type, store_raw_data, measure_hz)
self.subscribed_topics.append(topic)
+ else:
+ self.node.reset_data(topic)
- def spin_for_data(self,themes):
- start = time.time()
+
+
+ def create_param_ifs(self, camera_name):
+ self.set_param_if = self.node.create_client(SetParameters, camera_name + '/set_parameters')
+ self.get_param_if = self.node.create_client(GetParameters, camera_name + '/get_parameters')
+ while not self.get_param_if.wait_for_service(timeout_sec=1.0):
+ print('service not available, waiting again...')
+ while not self.set_param_if.wait_for_service(timeout_sec=1.0):
+ print('service not available, waiting again...')
+
+ def send_param(self, req):
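+        '''
+        Call the set_parameters service asynchronously and spin the test node
+        until the response arrives; returns True only when the first result
+        reports success.
+        '''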
+ future = self.set_param_if.call_async(req)
+ while rclpy.ok():
+ rclpy.spin_once(self.node)
+ if future.done():
+ try:
+ response = future.result()
+ if response.results[0].successful:
+ return True
+ except Exception as e:
+ print("exception raised:")
+ print(e)
+ pass
+ return False
+
+ def set_string_param(self, param_name, param_value):
+ req = SetParameters.Request()
+ new_param_value = ParameterValue(type=ParameterType.PARAMETER_STRING, string_value=param_value)
+ req.parameters = [Parameter(name=param_name, value=new_param_value)]
+ return self.send_param(req)
+
+ def set_bool_param(self, param_name, param_value):
+ req = SetParameters.Request()
+ new_param_value = ParameterValue(type=ParameterType.PARAMETER_BOOL, bool_value=param_value)
+ req.parameters = [Parameter(name=param_name, value=new_param_value)]
+ return self.send_param(req)
+
+ def set_integer_param(self, param_name, param_value):
+ req = SetParameters.Request()
+ new_param_value = ParameterValue(type=ParameterType.PARAMETER_INTEGER, integer_value=param_value)
+ req.parameters = [Parameter(name=param_name, value=new_param_value)]
+ return self.send_param(req)
+
+ def spin_for_data(self,themes, timeout=5.0):
'''
timeout value varies depending upon the system, it needs to be more if
the access is over the network
'''
- timeout = 25.0
print('Waiting for topic... ' )
flag = False
+ data_not_expected1 = [i for i in themes if (i["expected_data_chunks"]) == 0]
+ if data_not_expected1 == []:
+ data_not_expected = False
+ else:
+ data_not_expected = True
+ start = time.time()
+ msg = ""
while (time.time() - start) < timeout:
rclpy.spin_once(self.node, timeout_sec=1)
- print('Spun once... ' )
+ debug_print('Spun once... ' )
+ if data_not_expected == True:
+ continue
all_found = True
for theme in themes:
if theme['expected_data_chunks'] > int(self.node.get_num_chunks(theme['topic'])):
@@ -733,27 +854,51 @@ def spin_for_data(self,themes):
flag =True
break
else:
- print("Timed out waiting for", timeout, "seconds" )
- return False, "run_test timedout"
- return flag,""
+ if data_not_expected == False:
+ msg = "Timedout: Data expected, but not received for: "
+ for theme in themes:
+ if theme['expected_data_chunks'] > int(self.node.get_num_chunks(theme['topic'])):
+ msg += " " + theme['topic']
+ msg += " Nodes available: " + str(self.node.get_node_names())
+ return False, msg
+ flag = True
+ return flag,msg
def spin_for_time(self,wait_time):
start = time.time()
- print('Waiting for topic... ' )
+ print('Waiting for time... ' )
flag = False
- while time.time() - start < wait_time:
- rclpy.spin_once(self.node)
- print('Spun once... ' )
+ while (time.time() - start) < wait_time:
+ print('Spun for time once... ' )
+ rclpy.spin_once(self.node, timeout_sec=wait_time)
- def run_test(self, themes):
+ def run_test(self, themes, initial_wait_time=0.0, timeout=0):
try:
for theme in themes:
store_raw_data = False
if 'store_raw_data' in theme:
store_raw_data = theme['store_raw_data']
- self.create_subscription(theme['msg_type'], theme['topic'] , qos.qos_profile_sensor_data,store_raw_data)
+ if 'expected_fps_in_hz' in theme:
+ measure_hz = True
+ else:
+ measure_hz = False
+ qos_type = qos.qos_profile_sensor_data
+ if 'qos' in theme:
+ qos_type = theme['qos']
+ self.create_subscription(theme['msg_type'], theme['topic'] , qos_type,store_raw_data, measure_hz)
print('subscription created for ' + theme['topic'])
- self.flag = self.spin_for_data(themes)
+ '''
+ Choose the default timeout based on whether data is expected or not.
+ '''
+ if timeout == 0:
+ timeout = 5.0
+ data_not_expected1 = [i for i in themes if (i["expected_data_chunks"]) == 0]
+ if data_not_expected1 == []:
+ timeout = 50.0 #high value due to resource constraints in CI
+
+ if initial_wait_time != 0.0:
+ self.spin_for_time(initial_wait_time)
+ self.flag = self.spin_for_data(themes, timeout)
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
@@ -764,8 +909,31 @@ def run_test(self, themes):
else:
print(e)
self.flag =False,e
-
return self.flag
+
+ def get_transform_data(self, data, coupled_frame_ids, is_static=False):
+ tfBuffer = tf2_ros.Buffer()
+ for transform in data.transforms:
+ if is_static:
+ tfBuffer.set_transform_static(transform, "default_authority")
+ else:
+ tfBuffer.set_transform(transform, "default_authority")
+ res = dict()
+ for couple in coupled_frame_ids:
+ from_id, to_id = couple
+ if (tfBuffer.can_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=3e6))):
+ res[couple] = tfBuffer.lookup_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=1e6)).transform
+ else:
+ res[couple] = None
+ return res
+ def check_transform_data(self, data, frame_ids, is_static=False):
+ coupled_frame_ids = [xx for xx in itertools.combinations(frame_ids, 2)]
+ res = self.get_transform_data(data, coupled_frame_ids, is_static)
+ for couple in coupled_frame_ids:
+ if res[couple] == None:
+ return False, str(couple) + ": didn't get any tf data"
+ return True,""
+
'''
Please override and use your own process_data if the default check is not suitable.
Please also store_raw_data parameter in the themes as True, if you want the
@@ -773,11 +941,25 @@ def run_test(self, themes):
'''
def process_data(self, themes):
for theme in themes:
+ if theme['expected_data_chunks'] == 0:
+ assert self.node.get_num_chunks(theme['topic']) == 0, "Received data when not expected for topic: " + theme['topic']
+ continue  # no further checks needed when no data is expected
+
+ if 'expected_fps_in_hz' in theme:
+ hz = self.node._ros_topic_hz.get_hz(theme['topic'])
+ if hz == None:
+ print("Couldn't measure fps, no of data frames expected are enough for the measurement?")
+ else:
+ speed= 1e9*hz[0]
+ msg = "FPS in Hz of topic " + theme['topic'] + " is " + str(speed) + ". Expected is " + str(theme['expected_fps_in_hz'])
+ print(msg)
+ if (abs(theme['expected_fps_in_hz']-speed) > theme['expected_fps_in_hz']/10):
+ assert False,msg
data = self.node.pop_first_chunk(theme['topic'])
if 'width' in theme:
- assert theme['width'] == data['shape'][0][1] # (get from numpy image the width)
+ assert theme['width'] == data['shape'][0][1], "Width not matched. Expected:" + str(theme['width']) + " & got: " + str(data['shape'][0][1]) # (get from numpy image the width)
if 'height' in theme:
- assert theme['height'] == data['shape'][0][0] # (get from numpy image the height)
+ assert theme['height'] == data['shape'][0][0], "Height not matched. Expected:" + str(theme['height']) + " & got: " + str(data['shape'][0][0]) # (get from numpy image the height)
if 'data' not in theme:
print('No data to compare for ' + theme['topic'])
#print(data)
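
To make the themes-based API above concrete, here is a minimal sketch of a test that drives the extended `run_test`/`process_data` flow. Only the theme keys (`topic`, `msg_type`, `expected_data_chunks`, `expected_fps_in_hz`, `qos`, `width`, `height`) and the `run_test(themes, initial_wait_time, timeout)` signature come from the diff above; the module name, base-class name, and the setup/teardown helpers are assumptions made purely for illustration.

```python
# Minimal sketch only: the module, base class and setup/teardown helpers below
# are assumed names, not part of the diff above.
from sensor_msgs.msg import Image
from rclpy.qos import qos_profile_sensor_data

import pytest_rs_utils  # hypothetical test-utility module wrapping the code above


class TestColorStream(pytest_rs_utils.RsTestBaseClass):  # hypothetical base class
    def test_color_fps(self):
        themes = [{
            'topic': '/camera/camera/color/image_raw',
            'msg_type': Image,
            'expected_data_chunks': 5,       # wait for at least 5 frames
            'expected_fps_in_hz': 30.0,      # enables the Hz check in process_data
            'qos': qos_profile_sensor_data,  # optional; this is also the default
            'width': 640,
            'height': 480,
        }]
        try:
            self.init_test("ColorFpsTest")   # hypothetical setup helper
            # Use a larger timeout when the camera is reached over the network.
            ret = self.run_test(themes, initial_wait_time=1.0, timeout=25.0)
            assert ret[0], ret[1]
            self.process_data(themes)        # raises AssertionError on a mismatch
        finally:
            self.shutdown()                  # hypothetical teardown helper
```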
diff --git a/realsense2_camera/tools/frame_latency/frame_latency.cpp b/realsense2_camera/tools/frame_latency/frame_latency.cpp
index 022e733e49..7f4ebb0da7 100644
--- a/realsense2_camera/tools/frame_latency/frame_latency.cpp
+++ b/realsense2_camera/tools/frame_latency/frame_latency.cpp
@@ -12,11 +12,25 @@
// See the License for the specific language governing permissions and
// limitations under the License.
+// DESCRIPTION: #
+// ------------ #
+// This tool creates a node which can be used to calculate the specified topic's latency.
+// Input parameters:
+// - topic_name :
+//   - topic for which the latency needs to be calculated
+// - topic_type :
+//   - Message type of the topic.
+//   - Valid inputs: {'image','points','imu','metadata','camera_info','rgbd','imu_info','tf'}
+// Note:
+// - This tool doesn't support calculating latency for extrinsics topics,
+//   because those topics don't carry a timestamp, and this tool uses
+//   the message timestamp to calculate the latency.
+//
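+// Example (hypothetical invocation; the executable name and the exact topic
+// depend on how the tool is built and how the camera node is launched):
+//   ros2 run realsense2_camera frame_latency --ros-args -p topic_name:=/camera/color/image_raw -p topic_type:=image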
+
#include
#include
#include
#include
-// Node which receives sensor_msgs/Image messages and prints the image latency.
using namespace rs2_ros::tools::frame_latency;
@@ -28,6 +42,39 @@ FrameLatencyNode::FrameLatencyNode( const std::string & node_name,
{
}
+std::string topic_name = "/camera/color/image_raw";
+std::string topic_type = "image";
+
+template<class MsgType>
+void FrameLatencyNode::createListener(std::string topicName, const rmw_qos_profile_t qos_profile)
+{
+ _sub = this->create_subscription<MsgType>(
+ topicName,
+ rclcpp::QoS( rclcpp::QoSInitialization::from_rmw( qos_profile ),
+ qos_profile ),
+ [&, this]( const std::shared_ptr< MsgType> msg ) {
+ rclcpp::Time curr_time = this->get_clock()->now();
+ auto latency = ( curr_time - msg->header.stamp ).seconds();
+ ROS_INFO_STREAM( "Got msg with "<< msg->header.frame_id <<" frame id at address 0x"
+ << std::hex << reinterpret_cast< std::uintptr_t >( msg.get() )
+ << std::dec << " with latency of " << latency << " [sec]" );
+ } );
+}
+
+void FrameLatencyNode::createTFListener(std::string topicName, const rmw_qos_profile_t qos_profile)
+{
+ _sub = this->create_subscription<tf2_msgs::msg::TFMessage>(
+ topicName,
+ rclcpp::QoS( rclcpp::QoSInitialization::from_rmw( qos_profile ),
+ qos_profile ),
+ [&, this]( const std::shared_ptr<tf2_msgs::msg::TFMessage> msg ) {
+ rclcpp::Time curr_time = this->get_clock()->now();
+ auto latency = ( curr_time - msg->transforms.back().header.stamp ).seconds();
+ ROS_INFO_STREAM( "Got msg with "<< msg->transforms.back().header.frame_id <<" frame id at address 0x"
+ << std::hex << reinterpret_cast< std::uintptr_t >( msg.get() )
+ << std::dec << " with latency of " << latency << " [sec]" );
+ } );
+}
FrameLatencyNode::FrameLatencyNode( const rclcpp::NodeOptions & node_options )
: Node( "frame_latency", "/", node_options )
@@ -36,19 +83,30 @@ FrameLatencyNode::FrameLatencyNode( const rclcpp::NodeOptions & node_options )
ROS_INFO_STREAM( "frame_latency node is UP!" );
ROS_INFO_STREAM( "Intra-Process is "
<< ( this->get_node_options().use_intra_process_comms() ? "ON" : "OFF" ) );
- // Create a subscription on the input topic.
- _sub = this->create_subscription< sensor_msgs::msg::Image >(
- "/color/image_raw", // TODO Currently color only, we can declare and accept the required
- // streams as ros parameters
- rclcpp::QoS( rclcpp::QoSInitialization::from_rmw( rmw_qos_profile_default ),
- rmw_qos_profile_default ),
- [&, this]( const sensor_msgs::msg::Image::SharedPtr msg ) {
- rclcpp::Time curr_time = this->get_clock()->now();
- auto latency = ( curr_time - msg->header.stamp ).seconds();
- ROS_INFO_STREAM( "Got msg with address 0x"
- << std::hex << reinterpret_cast< std::uintptr_t >( msg.get() )
- << std::dec << " with latency of " << latency << " [sec]" );
- } );
+
+ topic_name = this->declare_parameter("topic_name", topic_name);
+ topic_type = this->declare_parameter("topic_type", topic_type);
+
+ ROS_INFO_STREAM( "Subscribing to Topic: " << topic_name);
+
+ if (topic_type == "image")
+ createListener<sensor_msgs::msg::Image>(topic_name, rmw_qos_profile_default);
+ else if (topic_type == "points")
+ createListener<sensor_msgs::msg::PointCloud2>(topic_name, rmw_qos_profile_default);
+ else if (topic_type == "imu")
+ createListener<sensor_msgs::msg::Imu>(topic_name, rmw_qos_profile_sensor_data);
+ else if (topic_type == "metadata")
+ createListener<realsense2_camera_msgs::msg::Metadata>(topic_name, rmw_qos_profile_default);
+ else if (topic_type == "camera_info")
+ createListener<sensor_msgs::msg::CameraInfo>(topic_name, rmw_qos_profile_default);
+ else if (topic_type == "rgbd")
+ createListener<realsense2_camera_msgs::msg::RGBD>(topic_name, rmw_qos_profile_default);
+ else if (topic_type == "imu_info")
+ createListener<realsense2_camera_msgs::msg::IMUInfo>(topic_name, rmw_qos_profile_default);
+ else if (topic_type == "tf")
+ createTFListener(topic_name, rmw_qos_profile_default);
+ else
+ ROS_ERROR_STREAM("Specified message type '" << topic_type << "' is not supported");
}
#include "rclcpp_components/register_node_macro.hpp"
diff --git a/realsense2_camera/tools/frame_latency/frame_latency.h b/realsense2_camera/tools/frame_latency/frame_latency.h
index 653c648578..aad8c81c0e 100644
--- a/realsense2_camera/tools/frame_latency/frame_latency.h
+++ b/realsense2_camera/tools/frame_latency/frame_latency.h
@@ -16,6 +16,25 @@
#include
#include "sensor_msgs/msg/image.hpp"
+#include "sensor_msgs/msg/imu.hpp"
+#include "sensor_msgs/msg/point_cloud2.hpp"
+
+#include
+#include
+#include "realsense2_camera_msgs/msg/imu_info.hpp"
+#include "realsense2_camera_msgs/msg/extrinsics.hpp"
+#include "realsense2_camera_msgs/msg/metadata.hpp"
+#include "realsense2_camera_msgs/msg/rgbd.hpp"
+#include "realsense2_camera_msgs/srv/device_info.hpp"
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
namespace rs2_ros {
namespace tools {
@@ -31,8 +50,14 @@ class FrameLatencyNode : public rclcpp::Node
const rclcpp::NodeOptions & node_options
= rclcpp::NodeOptions().use_intra_process_comms( true ) );
+ template<class MsgType>
+ void createListener(std::string topicName, const rmw_qos_profile_t qos_profile);
+
+ void createTFListener(std::string topicName, const rmw_qos_profile_t qos_profile);
+
private:
- rclcpp::Subscription< sensor_msgs::msg::Image >::SharedPtr _sub;
+ std::shared_ptr<rclcpp::SubscriptionBase> _sub;
+
rclcpp::Logger _logger;
};
} // namespace frame_latency
diff --git a/realsense2_description/urdf/_l515.urdf.xacro b/realsense2_description/urdf/_l515.urdf.xacro
deleted file mode 100644
index c761ddf734..0000000000
--- a/realsense2_description/urdf/_l515.urdf.xacro
+++ /dev/null
@@ -1,213 +0,0 @@
diff --git a/realsense2_description/urdf/_r430.urdf.xacro b/realsense2_description/urdf/_r430.urdf.xacro
index d4ea736bc8..883523359d 100644
--- a/realsense2_description/urdf/_r430.urdf.xacro
+++ b/realsense2_description/urdf/_r430.urdf.xacro
@@ -32,7 +32,6 @@ aluminum peripherial evaluation case.
publishing TF values with actual calibrated camera extrinsic values -->
diff --git a/realsense2_description/urdf/test_l515_camera.urdf.xacro b/realsense2_description/urdf/test_l515_camera.urdf.xacro
deleted file mode 100644
index 6b1c3354a0..0000000000
--- a/realsense2_description/urdf/test_l515_camera.urdf.xacro
+++ /dev/null
@@ -1,12 +0,0 @@