Shared Memory on C++ API #363

Open

yellowhatter wants to merge 19 commits into base: rolling

Conversation

yellowhatter (Contributor):
SHM functionality implemented on the C++ API.

Transparently places ROS messages that exceed a configurable size threshold into shared memory.

struct ShmContext
{
  zenoh::PosixShmProvider shm_provider;
  size_t msgsize_threshold;
};
Contributor:
#include <cstddef>

yellowhatter (Contributor Author):

fixed
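
As an aside on the behavior described in the PR summary: the size-threshold decision can be pictured roughly as below. This is a hedged sketch, assuming ShmContext is declared in shm_context.hpp under namespace rmw_zenoh_cpp; should_use_shm and the optional-context convention are illustrative, not the PR's actual code.

#include <cstddef>
#include <optional>

#include "shm_context.hpp"  // assumed to declare rmw_zenoh_cpp::ShmContext

// Illustrative helper: messages at or above the configured threshold go
// through shared memory; everything else takes the regular path. An absent
// context models a session with SHM disabled.
static bool should_use_shm(
  const std::optional<rmw_zenoh_cpp::ShmContext> & ctx, size_t serialized_size)
{
  return ctx.has_value() && serialized_size >= ctx->msgsize_threshold;
}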

namespace rmw_zenoh_cpp
{
///=============================================================================
ShmContext::ShmContext(size_t alloc_size, size_t msgsize_threshold)
Contributor:

#include <cstddef>

yellowhatter (Contributor Author):

fixed in shm_context.hpp

  throw std::runtime_error("Unable to create shm provider.");
}
shm_provider_ = std::move(provider);

#ifdef RMW_ZENOH_BUILD_WITH_SHARED_MEMORY
Collaborator:
I will say that I'm not a huge fan of wrapping this in a #ifdef this way. When we release it on the buildfarm, we can only choose one define here, so we may as well just unconditionally add it.

In other words, if we think this feature is stable enough to use, we should just always build it in. If it is not yet stable enough to use, then we shouldn't merge the PR until we are confident in it.

yellowhatter (Contributor Author), Jan 10, 2025:

This define is not about stability but about binary size. SHM is a relatively big subsystem of Zenoh with additional dependencies (and it will grow even larger in the future) and some (still small) wire, computation, and memory overhead. We have the shared-memory feature conditionally compiled across our whole chain of projects: zenoh -> zenoh-c -> zenoh-cpp, so supporting this in rmw_zenoh makes sense for users who care about binary size. You can release the RMW with RMW_ZENOH_BUILD_WITH_SHARED_MEMORY=true (which is the default), but there will still be an option for users to make a custom build without SHM.
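
For illustration, the compile-time gating under discussion looks roughly like this at a call site. This is only a sketch of the pattern; SessionResources is a hypothetical holder, not the PR's actual structure.

#include <cstddef>
#include <optional>

#ifdef RMW_ZENOH_BUILD_WITH_SHARED_MEMORY
#include "shm_context.hpp"  // zenoh SHM types are pulled in only when enabled
#endif

struct SessionResources  // hypothetical holder, for illustration only
{
#ifdef RMW_ZENOH_BUILD_WITH_SHARED_MEMORY
  // Present only in SHM-enabled builds; builds without the flag carry
  // neither this member nor the SHM subsystem's dependencies.
  std::optional<rmw_zenoh_cpp::ShmContext> shm_context;
#endif
};

A build option (for example, a CMake option defaulting to ON) would then define RMW_ZENOH_BUILD_WITH_SHARED_MEMORY, mirroring the zenoh -> zenoh-c -> zenoh-cpp feature chain the author describes.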

static const char * zenoh_shm_enabled_envar = "ZENOH_SHM_ENABLED";
static const bool zenoh_shm_enabled_default = true;
static const char * zenoh_shm_alloc_size_envar = "ZENOH_SHM_ALLOC_SIZE";
static const size_t zenoh_shm_alloc_size_default = 16 * 1024 * 1024;
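
For reference, these knobs would typically be read once at startup, along these lines. The helpers below are hypothetical and skip validation; they are not the PR's actual parsing code.

#include <cstddef>
#include <cstdlib>
#include <cstring>

// Hypothetical: treat anything other than "false"/"0" as enabled.
static bool env_flag(const char * name, bool fallback)
{
  const char * v = std::getenv(name);
  if (v == nullptr) {
    return fallback;
  }
  return std::strcmp(v, "false") != 0 && std::strcmp(v, "0") != 0;
}

// Hypothetical: parse a decimal byte count, falling back on absence.
static size_t env_size(const char * name, size_t fallback)
{
  const char * v = std::getenv(name);
  return v != nullptr ? static_cast<size_t>(std::strtoull(v, nullptr, 10)) : fallback;
}

Under this reading, launching with ZENOH_SHM_ENABLED=false would disable the SHM path entirely.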
Collaborator:

What is the behavior of the shared-memory code if a message is published that is larger than this? Given my comment above, I think we should make it fall back to the "regular" allocation path and not use shared memory.

yellowhatter (Contributor Author):

Currently rmw_zenoh falls back to the regular allocation path if the SHM allocation fails for any reason.
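
That fallback can be pictured roughly as follows. This is a minimal sketch: try_shm_alloc is a hypothetical stand-in for the zenoh-cpp SHM provider call, and the real code serializes into the buffer rather than copying through std::vector.

#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical stand-in for an allocation from the zenoh SHM provider;
// returning nullopt models any SHM allocation failure.
static std::optional<std::vector<uint8_t>> try_shm_alloc(size_t size)
{
  (void)size;
  return std::nullopt;  // stub: always "fails" in this sketch
}

// Large messages try shared memory first, but any failure falls through to
// the regular heap path, so publication never fails because of SHM.
static std::vector<uint8_t> allocate_payload(
  size_t size, bool shm_enabled, size_t msgsize_threshold)
{
  if (shm_enabled && size >= msgsize_threshold) {
    if (auto shm_buf = try_shm_alloc(size)) {
      return std::move(*shm_buf);
    }
  }
  return std::vector<uint8_t>(size);  // regular allocation path
}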
