
360º processing #1

Open
Cram3r95 opened this issue Apr 22, 2023 · 3 comments
@Cram3r95

Hi,

This is fantastic work! I am a PhD student focused on deep-learning-based Motion Prediction (MP), and I would like to use your algorithm to feed the resulting tracks to my MP model (https://github.com/Cram3r95/argo2goalmp) as the final part of my thesis (Chapter 5, Applications), in particular in the CARLA simulator.

However, in order to predict the surrounding objects, we usually require 360º processing, and I believe most SOTA MOT algorithms that fuse camera information are limited to the front camera's field of view, since they are validated on KITTI. Do you know whether your algorithm could be easily extended to 360º multi-modal MOT?

Again, fantastic work. Keep going.

@wangxiyang2022
Owner

Hi,
Thank you for your interest! Our algorithm supports fusing 360-degree views from multiple cameras and LiDAR. You can refer to the code for the Waymo dataset section: we have implemented the fusion of the five cameras and the LiDAR on Waymo.
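A common way to fuse a LiDAR-frame detection with multiple surround cameras is to project its 3D position into every camera and associate it only with the views where it lands inside the image. This is a minimal sketch of that visibility test, not the repository's actual code; the function name, the shared image size, and the example calibration matrices are all assumptions:

```python
import numpy as np

def project_to_cameras(point_lidar, extrinsics, intrinsics, image_size):
    """Return indices of cameras in which a LiDAR-frame point is visible.

    extrinsics: list of 4x4 LiDAR->camera rigid transforms
    intrinsics: list of 3x3 pinhole camera matrices
    image_size: (width, height), assumed identical for all cameras
    """
    visible = []
    p_h = np.append(point_lidar, 1.0)   # homogeneous coordinates
    w, h = image_size
    for idx, (T, K) in enumerate(zip(extrinsics, intrinsics)):
        p_cam = T @ p_h                 # transform into this camera's frame
        if p_cam[2] <= 0:               # behind the image plane: skip
            continue
        uvw = K @ p_cam[:3]             # pinhole projection
        u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
        if 0 <= u < w and 0 <= v < h:
            visible.append(idx)
    return visible

# Toy rig: one forward-facing and one backward-facing camera.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
T_front = np.eye(4)
T_back = np.eye(4)
T_back[:3, :3] = np.diag([-1.0, 1.0, -1.0])  # 180° rotation about y

# A point 10 m ahead is visible only in the front camera (index 0).
print(project_to_cameras(np.array([0.0, 0.0, 10.0]),
                         [T_front, T_back], [K, K], (640, 480)))
```

A full pipeline would project the eight corners of the 3D box rather than a single point, but the per-camera transform-then-clip logic is the same.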

@Cram3r95
Author

Cram3r95 commented Apr 22, 2023

@wangxiyang2022 Oh, that's wonderful. I thought both KITTI and Waymo were evaluated only in the front camera's field of view. So Waymo is evaluated over the full 360º space, right?

I have just seen that you are also the author of DeepFusionMOT. Do DeepFusionMOT / StrongMOT also work in the 360º space? Which algorithm should I use for my purpose (a robust 360º MOT pipeline to feed past trajectories to an MP model)?

One last question: when increasing or decreasing the number of cameras (assuming a single LiDAR), do you need to retrain the model, or can the number of cameras and LiDAR channels be changed w.r.t. your proposed pipeline?

E.g., from one 64-channel LiDAR and 5 cameras to one 128-channel LiDAR and 4 cameras. (I still need to read your paper, though.)

@Cram3r95
Author

@wangxiyang2022 On another note, why do you use a linear Kalman filter instead of an EKF / UKF, which are better suited to autonomous driving because they can model non-linear motion?
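For context, many 3D MOT trackers (e.g. AB3DMOT-style pipelines) adopt a linear constant-velocity model, for which the plain Kalman filter is exact: prediction and update are matrix products, with no Jacobians or sigma points needed. A minimal 2D sketch under that constant-velocity assumption, not the authors' implementation:

```python
import numpy as np

# State: [x, y, vx, vy]. Constant-velocity motion, position-only measurements.
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # linear state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # observe position only
Q = np.eye(4) * 0.01                          # process noise covariance
R = np.eye(2) * 0.1                           # measurement noise covariance

def kf_step(x, P, z):
    """One predict + update cycle of a linear Kalman filter."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a target moving at 1 m/s along x.
x, P = np.array([0.0, 0.0, 1.0, 0.0]), np.eye(4)
for k in range(1, 21):
    x, P = kf_step(x, P, np.array([k * dt, 0.0]))
print(x[:2])  # estimated position after 2 s of measurements
```

An EKF/UKF replaces `F` (and possibly `H`) with a non-linear function plus its linearization, which helps for turning targets; over the short horizons typical of frame-to-frame tracking, the linear model is often considered an acceptable approximation.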
