
Request for you to check out dora-rs #22

Closed
magi-1 opened this issue Nov 24, 2024 · 2 comments

Comments

magi-1 commented Nov 24, 2024

dora-rs is a Rust alternative to ROS 2 and is under active development. I am excited about the idea of doing hardware-in-the-loop (HITL) simulation with Peng via dora-rs. When you have the time, I highly recommend you check out the website.

PX4 + ROS is a great option for rapidly building a functioning drone, but it's non-trivial to add CV features to ROS. Peng and dora are both super lightweight yet (near) fully featured. Being able to use a combination of cargo and maturin is a dream! Whenever the zero-copy GPU feature is added to dora-rs, it will rival isaac_nitros, which is extremely opinionated and restrictive.

I would love to chat with both of you and discuss the future of this stack. I will be a significant contributor if we can align on direction.

@makeecat @haixuanTao

makeecat (Owner) commented

Hi @magi-1, thank you for your interest and suggestions! @haixuanTao and I know each other, and we are working on building a Rust-for-robotics community. The Discord link is here: https://t.co/L5agLityaT

I am constantly tracking dora-rs development, and I am happy to discuss the Peng + dora option with you. We can have a chat on Discord!

haixuanTao commented Nov 27, 2024

@magi-1 Following up on this issue, I have opened a PR for dora CUDA zero-copy: https://github.com/dora-rs/dora/pull/722/files

It should be fairly easy to use if you check out the example in the PR:

For sending:

from dora.cuda import torch_to_buffer

# ...

buffer, metadata = torch_to_buffer(torch_tensor)
metadata["time"] = t_send
metadata["device"] = "cuda"
node.send_output("latency", buffer, metadata)

For receiving:

from dora.cuda import buffer_to_ipc_handle, cudabuffer_to_torch

# ...

# storage needs to be spawned in the same file as where it's used. Don't ask me why.
ipc_handle = buffer_to_ipc_handle(event["value"])
cudabuffer = ctx.open_ipc_buffer(ipc_handle)
torch_tensor = cudabuffer_to_torch(cudabuffer, event["metadata"])
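
And a matching receiver sketch, assuming the PR's buffer_to_ipc_handle and cudabuffer_to_torch helpers and that ctx is a pyarrow CUDA context created in the same file, per the comment above; the "latency" input id simply mirrors the sender snippet.

import pyarrow.cuda as cuda
from dora import Node
from dora.cuda import buffer_to_ipc_handle, cudabuffer_to_torch

node = Node()
ctx = cuda.Context()  # created here, in the same file where it is used

for event in node:
    if event["type"] == "INPUT" and event["id"] == "latency":
        # Rebuild the CUDA IPC handle from the received Arrow buffer, map it
        # into this process, and wrap the device memory as a torch tensor.
        ipc_handle = buffer_to_ipc_handle(event["value"])
        cudabuffer = ctx.open_ipc_buffer(ipc_handle)
        torch_tensor = cudabuffer_to_torch(cudabuffer, event["metadata"])
        print(torch_tensor.shape, torch_tensor.device)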

Feel free to leave a comment or a review :)

makeecat closed this as completed Jan 5, 2025