Difficulty finding Pose Estimation Annotations in Dataset #2

Open
DanielChaseButterfield opened this issue Nov 23, 2024 · 2 comments
@DanielChaseButterfield

Thank you for this exciting dataset; many researchers in our lab are eager to use your work.

Unfortunately, I'm having trouble locating a certain part of the dataset. Your paper mentions that you provide pose estimation annotations, and if I'm understanding correctly, the paper seems to indicate that these annotations come from individual state estimation algorithms for each robot (Section V-A). However, I reviewed the dataset and the .bag and .json file contents, and couldn't seem to locate these anywhere.

Do you know where I could find these annotations? Or is it possible that they aren't ready for the public yet or simply aren't included? Thanks for your time.

@DanielChaseButterfield
Author

Upon further investigation, it appears that the Wanda robot has a geometry_msgs/PoseStamped message type under the topic /wanda/pose for the FOREST_wanda_arl_outtdoor_2023_05_19_06_2023-05-19-16-27-46.bag file. Could these be the pose annotations that the authors were referring to?
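
For anyone else looking: this is a minimal sketch of one way to check the bag contents, using the standard ROS 1 rosbag Python API (the fields printed at the end are just an example of peeking at a geometry_msgs/PoseStamped message):

```python
import rosbag

# Open the Wanda bag and list every topic with its message type and count
bag = rosbag.Bag('FOREST_wanda_arl_outtdoor_2023_05_19_06_2023-05-19-16-27-46.bag')
info = bag.get_type_and_topic_info()
for topic, meta in info.topics.items():
    print('%s: %s (%d msgs)' % (topic, meta.msg_type, meta.message_count))

# Peek at the first message on /wanda/pose to see its frame_id and position
for topic, msg, stamp in bag.read_messages(topics=['/wanda/pose']):
    print(msg.header.frame_id, msg.pose.position)
    break

bag.close()
```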

@DanielChaseButterfield
Author

DanielChaseButterfield commented Feb 25, 2025

For testing's sake, I played both FOREST_wanda_arl_outtdoor_2023_05_19_06_2023-05-19-16-27-46.bag and FOREST_wilbur_arl_outdoor_2023_05_19_06_2023-05-19-16-27-47.bag at the same time and published a static transform setting /wanda/map and /wilbur/map to the same frame (a sketch of this setup is at the end of this comment). Doing this, I get the following visuals in RViz:

[Image: RViz visualization of the two robots' poses]

The robots don't move together in a straight line, as would be expected from reviewing the RGB imagery.

Thus, if this is the pose topic mentioned, I think it's likely that these poses are single-agent poses from each robot's own SLAM optimization with GPS data, rather than multi-robot poses defined in a common frame. That said, I noticed that the /wanda/map and /wilbur/odom frames are also not in the same spot, so it's possible that the two robots' odom frames line up instead. Setting up a test for that would be more involved, though, and I think it's unlikely.
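
For anyone who wants to reproduce the test above, here is a rough sketch of the static transform part, assuming ROS 1 with tf2_ros (the frame names are my reading of what the bags publish; the bags themselves are played separately, e.g. with `rosbag play`):

```python
#!/usr/bin/env python
# Publish an identity static transform between the two robots' map frames,
# so both trajectories can be viewed together in RViz.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('wanda_wilbur_static_tf')

t = TransformStamped()
t.header.stamp = rospy.Time.now()
# tf2 frame IDs conventionally omit the leading slash
t.header.frame_id = 'wanda/map'
t.child_frame_id = 'wilbur/map'
t.transform.rotation.w = 1.0  # identity rotation, zero translation

broadcaster = tf2_ros.StaticTransformBroadcaster()
broadcaster.sendTransform(t)
rospy.spin()
```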
