Fix durability of service introspection topics #1068
Comments
Volatile durability makes sense to me. I believe this is what is suggested in the design doc: ros-infrastructure/rep#360
I'm pretty sure that the default durability QoS setting for topics is volatile. If someone wants to change the QoS for a specific service event topic, they can still do that with a runtime config file.
Probably I might be mistaken... but why do we currently want to set a static QoS for the service event publisher and remove that flexibility from the user application?
@fujitatomoya It apparently can, yet doing something like the following:
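(A minimal sketch of the kind of call in question, assuming introspection was configured with `rclcpp::SystemDefaultsQoS()`; the explicit-QoS variant of this same call appears later in the thread:)

```cpp
service->configure_introspection(
  node->get_clock(),
  rclcpp::SystemDefaultsQoS(),
  RCL_SERVICE_INTROSPECTION_CONTENTS);
```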
This results in transient-local durability on the published result (or at least in the last published result being delivered upon subscription), which looks strange to me.
For completeness: the QoS details of the published topic when using `SystemDefaultsQoS` on both service and client (visible via `ros2 topic info --verbose`) confirm the transient-local durability.
@nirwester Yes, I understand, and that is because the application sets it. I mean, can the user not do the following, which explicitly sets the QoS?

```cpp
service->configure_introspection(
  node->get_clock(),
  rclcpp::ServicesQoS(),
  RCL_SERVICE_INTROSPECTION_CONTENTS);
```
@fujitatomoya Yes, I guess my confusion stems from the fact that `RMW_QOS_POLICY_DURABILITY_SYSTEM_DEFAULT` is behaving as transient local, while I expected it to behave as volatile. If I manually configure the QoS to something that uses volatile, then it works as expected. There might be no issue then, but why is `RMW_QOS_POLICY_DURABILITY_SYSTEM_DEFAULT` behaving as transient local?
Currently, the behavior of `RMW_QOS_POLICY_DURABILITY_SYSTEM_DEFAULT` differs between RMW implementations.

rmw_fastrtps leaves the durability untouched for the system default, so Fast DDS falls back to its own default:

```cpp
switch (qos_policies.durability) {
  case RMW_QOS_POLICY_DURABILITY_TRANSIENT_LOCAL:
    entity_qos.durability().kind = eprosima::fastdds::dds::TRANSIENT_LOCAL_DURABILITY_QOS;
    break;
  case RMW_QOS_POLICY_DURABILITY_VOLATILE:
    entity_qos.durability().kind = eprosima::fastdds::dds::VOLATILE_DURABILITY_QOS;
    break;
  case RMW_QOS_POLICY_DURABILITY_SYSTEM_DEFAULT:
    break;
  default:
    RMW_SET_ERROR_MSG("Unknown QoS durability policy");
    return false;
}
```

and in Fast DDS the default value for a publisher (writer) is `TRANSIENT_LOCAL_DURABILITY_QOS`.

rmw_cyclonedds instead maps the system default to volatile:

```c
case RMW_QOS_POLICY_DURABILITY_SYSTEM_DEFAULT:
case RMW_QOS_POLICY_DURABILITY_VOLATILE:
  dds_qset_durability(qos, DDS_DURABILITY_VOLATILE);
```

rmw_connextdds also leaves the system default untouched:

```cpp
switch (qos_policies->durability) {
  case RMW_QOS_POLICY_DURABILITY_SYSTEM_DEFAULT:
    {
      break;
    }
```

I am not able to find documentation about Connext's default setting, only the source code: /opt/rti.com/rti_connext_dds-6.0.1/include/ndds/dds_c/dds_c_infrastructure.h:1417

If we use the system default, the resulting durability therefore depends on the RMW implementation in use.
Understood, thanks a lot for the clear explanation! Given these differences, which can deeply influence the behavior of the application, I'll stay away from `rclcpp::SystemDefaultsQoS` :)
@iuhilnehc-ynos thanks for the follow-up. @nirwester if you find any other issues, please feel free to open them. By the way, I was thinking that we could probably set the default QoS with durability volatile explicitly, as sketched below.
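(A minimal sketch of the user-facing equivalent, assuming rclcpp's fluent QoS setters; this combination is illustrative, not the actual fix in the PR:)

```cpp
// Explicitly request volatile durability for the service event publisher
// instead of relying on RMW_QOS_POLICY_DURABILITY_SYSTEM_DEFAULT.
auto qos = rclcpp::ServicesQoS();
qos.durability_volatile();

service->configure_introspection(
  node->get_clock(), qos, RCL_SERVICE_INTROSPECTION_CONTENTS);
```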
Hi, while testing the new service introspection topic, I noticed an anomaly in the durability of the published service response.
When executing the following sequence of actions:
- starting a server
- starting a client
- waiting some seconds to make sure that the execution has finished
- subscribing to the introspection topic (e.g. `ros2 topic echo /add_two_ints/_service_event`)

the last response sent by the server is still received, even though the service call has long since completed. For example:
```yaml
info:
  event_type: 2
  stamp:
    sec: 1683653877
    nanosec: 751816926
  client_gid:
  ...
  sequence_number: 1
request: []
response:
  sum: 3
```
I used this code, where the basic services example was modified to enable full introspection:
https://github.com/nirwester/ros2_journey_examples/tree/master/homeworks/chap4/service_with_introspection
Running on this docker container (jammy-Iron):
osrf/docker_images#673
This seems to be an issue with the durability of the corresponding publisher, which is probably set to transient_local. I don't know whether there is a technical reason for this implementation, but receiving the last published result upon subscription seems confusing to me. A small demo of this durability behavior is sketched below.
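(For illustration, a minimal self-contained sketch, with hypothetical node and topic names, of how a transient-local publisher delivers its last sample to a subscriber that joins after publication, matching the behavior observed on the introspection topic:)

```cpp
#include <chrono>

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("durability_demo");

  // TRANSIENT_LOCAL: the publisher keeps its last sample(s) for late joiners.
  auto qos = rclcpp::QoS(rclcpp::KeepLast(1)).transient_local();
  auto pub = node->create_publisher<std_msgs::msg::String>("demo_topic", qos);

  std_msgs::msg::String msg;
  msg.data = "sent before any subscriber existed";
  pub->publish(msg);

  // This subscriber is created *after* the publish; it still receives the
  // stored sample because both endpoints use transient-local durability.
  // With volatile durability it would receive nothing.
  auto sub = node->create_subscription<std_msgs::msg::String>(
    "demo_topic", qos,
    [](const std_msgs::msg::String::SharedPtr m) {
      RCLCPP_INFO(
        rclcpp::get_logger("durability_demo"), "received: %s", m->data.c_str());
    });

  // Spin briefly so discovery completes and the historical sample arrives.
  auto deadline = std::chrono::steady_clock::now() + std::chrono::seconds(2);
  while (rclcpp::ok() && std::chrono::steady_clock::now() < deadline) {
    rclcpp::spin_some(node);
  }

  rclcpp::shutdown();
  return 0;
}
```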
The problem was reproduced by @clalancette, who proposed two possible solutions.
The volatile choice sounds like the more reasonable one, but we might be missing something.