
Extrinsics between lidar/radar with camera #16

Open · ashnarayan13 opened this issue Nov 28, 2020 · 4 comments
@ashnarayan13 commented Nov 28, 2020

Hi,

Thanks for the datasets. I can't find the extrinsics between the sensors. Are they available? Even a rough estimate of the extrinsics would be nice to have.

Thanks!

@yzrobot (Contributor) commented Dec 4, 2020

Hi, this is what we are currently able to provide, and we will try our best to publish more complete extrinsics. In the meantime, we recommend that users who need a richer calibration quickly try some online calibration methods. Cheers.

@frohlichr commented Jan 11, 2021

Hi @yzrobot
We would like to use your dataset in a way where the ground-truth pose of the cameras (or, in practice, of the vehicle body or of some sensor to which each camera has a precisely known relative pose) is available relative to the point cloud.
So, in practice, we need to be able to merge all the point clouds (the individual scans) into one large cloud and project it precisely onto any selected image frame, whether from a perspective, stereo, or omnidirectional camera, as sketched below.
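Concretely, something along these lines is what we have in mind (a rough sketch only; the per-scan poses `poses_world_lidar` are a hypothetical placeholder for ground-truth data we hope the dataset can provide):

```python
import numpy as np

def merge_scans(scans, poses_world_lidar):
    # scans: list of (N_i, 3) point arrays, each in its own lidar frame
    # poses_world_lidar: list of 4x4 matrices mapping lidar -> world (assumed available)
    merged = []
    for pts, T in zip(scans, poses_world_lidar):
        pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous coordinates
        merged.append((T @ pts_h.T).T[:, :3])                 # transform into the world frame
    return np.vstack(merged)
```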

Could you please clarify whether these details can be obtained from your dataset? In what coordinate system is the ground-truth RTK-GPS data stored, and is a conversion between it and the local coordinate system provided? Are all the sensors calibrated relative to each other? I see from your previous post that extrinsics are provided for only some of the sensors.

Thank you in advance for your kind help,
Best regards,
Robert

@yzrobot (Contributor) commented Jan 22, 2021

Hi, sorry for the late reply, and I understand your concern. In my view, this could be done with our dataset. However, as you know, we are currently unable to provide the full extrinsic calibration, so things get a little tricky. We therefore recommend that users perform some online calibration with the data; we also plan to provide the corresponding toolkits, but that will take some time. The ground-truth RTK-GPS data is stored in the navsat frame (you can think of it as the GPS receiver); a rough sketch of reading it from the bag is below. Cheers.
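For example, a user could read the fixes out of the bag and anchor them in a local metric frame, along these lines (a sketch only: it assumes the fixes are published as sensor_msgs/NavSatFix on a `/navsat` topic, which is an assumption, and it uses the third-party `utm` package):

```python
import rosbag
import utm  # third-party package: pip install utm

positions = []
with rosbag.Bag('sequence.bag') as bag:  # hypothetical bag file name
    # Topic name '/navsat' is assumed, not confirmed by the dataset docs.
    for _, msg, t in bag.read_messages(topics=['/navsat']):
        e, n, _, _ = utm.from_latlon(msg.latitude, msg.longitude)  # lat/lon -> metric UTM
        positions.append((t.to_sec(), e, n, msg.altitude))

# Anchor everything at the first fix so local coordinates stay small.
t0, e0, n0, a0 = positions[0]
local = [(t, e - e0, n - n0, a - a0) for t, e, n, a in positions]
```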

@hichem-abdellali commented Sep 16, 2021

Hi @yzrobot

I am also interested in merging all the point clouds into one big dense cloud, as @frohlichr is. Unfortunately, I couldn't find the camera poses, nor how to project the point cloud onto the images. Could you please explain briefly how to build the camera pose for each frame from the data stored in the rosbag? For illustration, the kind of projection I have in mind is sketched below.
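Something like this pinhole-camera sketch, where `K` and `T_cam_lidar` are placeholders for exactly the intrinsics and extrinsics I'm looking for (so this is an illustration of the missing pieces, not code that works on the dataset today):

```python
import numpy as np

def project_to_image(pts_lidar, T_cam_lidar, K):
    # pts_lidar: (N, 3) points in the lidar frame
    # T_cam_lidar: 4x4 extrinsic mapping lidar -> camera (unknown for now)
    # K: 3x3 pinhole intrinsic matrix (unknown for now)
    pts_h = np.hstack([pts_lidar, np.ones((pts_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]  # drop points behind the camera
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]         # pixel coordinates (u, v)
```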

Thank you in advance for your kind help,
Best regards,
Hichem
