
Problem with the initialization of ethzasl_sensor_fusion #32

Open
erwin14 opened this issue Dec 12, 2014 · 8 comments

@erwin14

erwin14 commented Dec 12, 2014

Hello,
I have a problem with the initialization of ethzasl_sensor_fusion. I am trying to run ethzasl_sensor_fusion with ethzasl_ptam in a Gazebo simulation (robot = PR2).
My world frame is a 2D map (map frame). I'm initializing the filter with the init checkbox in dynamic_reconfigure.

My question is: how do I get the right values for q_ci, p_ci and q_wv in the pose_sensor_fix.yaml file?
Are any other steps necessary for the initialization, or is it enough to find the correct values for q_ci, p_ci and q_wv?

I tried the following:

  1. For q_ci and p_ci (see the sketch after this list):
    rosrun tf tf_echo 'camera-frame' 'imu-frame'
  2. For q_wv:
    rosrun tf tf_echo 'map-frame' 'imu-frame' => q1
    rosrun tf tf_echo 'imu-frame' 'camera-frame' => q2
    rostopic echo vslam/pose => q3
    q_wv = q1 * q2 * q3
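
For reference, a minimal Python sketch of step 1, assuming rospy and tf are available; 'imu_link' and 'camera_link' are placeholder frame names, and the quaternion/translation ordering should be double-checked against what the yaml file expects:

    #!/usr/bin/env python
    # Sketch: look up the camera-to-IMU transform from tf and print it so the
    # numbers can be copied into pose_sensor_fix.yaml. 'imu_link' and
    # 'camera_link' are placeholder frame names.
    import rospy
    import tf

    rospy.init_node('print_camera_imu_tf', anonymous=True)
    listener = tf.TransformListener()
    listener.waitForTransform('imu_link', 'camera_link',
                              rospy.Time(0), rospy.Duration(5.0))
    # lookupTransform(target, source, time) returns (translation, quaternion),
    # with the quaternion in (x, y, z, w) order.
    (p, q) = listener.lookupTransform('imu_link', 'camera_link', rospy.Time(0))
    print('p (x y z):   %.4f %.4f %.4f' % tuple(p))
    print('q (x y z w): %.4f %.4f %.4f %.4f' % tuple(q))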

I'm not sure if this is correct.
Thanks in advance.

@stephanweiss
Contributor

Hi,

I am not familiar with Gazebo, but generally, if you initialize q_ci, p_ci, q_wv, and the scale factor correctly, the filter should work well. I assume you can get these values from your simulation. You may want to verify that the initialized scale (and the other states) is indeed near the true value.

Best,
Stephan
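
A rough sketch of such a scale check (the topic names and message types below are assumptions for a typical Gazebo/PTAM setup, not from the thread): compare the metric distance travelled with the distance reported by the vision pose over the same interval.

    #!/usr/bin/env python
    # Sketch: compare metric distance travelled (ground truth) with the distance
    # reported by the vision pose over the same interval. Topic names and
    # message types are assumptions; adapt them to your setup.
    import numpy as np
    import rospy
    from nav_msgs.msg import Odometry
    from geometry_msgs.msg import PoseWithCovarianceStamped

    def position(pose):
        return np.array([pose.position.x, pose.position.y, pose.position.z])

    rospy.init_node('check_vision_scale')
    gt0 = position(rospy.wait_for_message('/base_pose_ground_truth', Odometry).pose.pose)
    vo0 = position(rospy.wait_for_message('/vslam/pose', PoseWithCovarianceStamped).pose.pose)
    rospy.loginfo('Now move the robot around for a while...')
    rospy.sleep(15.0)
    gt1 = position(rospy.wait_for_message('/base_pose_ground_truth', Odometry).pose.pose)
    vo1 = position(rospy.wait_for_message('/vslam/pose', PoseWithCovarianceStamped).pose.pose)
    # Depending on the convention, the filter's scale may be this ratio or its
    # inverse; check against the framework's documentation.
    print('metric/vision distance ratio: %.3f' %
          (np.linalg.norm(gt1 - gt0) / np.linalg.norm(vo1 - vo0)))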



@simonlynen
Contributor

@erwin14 let's assume your measurements are generated by projecting your 2D map into the sensor frame of reference; then your p_ic/q_ic should be the transformation that expresses the pose of the sensor frame of reference w.r.t. the IMU frame of reference.

q_wv should be set to identity (your derivation is not correct). This quantity can be used to estimate a roll/pitch offset between your map frame and the gravity-aligned world frame. You should write your simulation such that the z direction of your map is gravity aligned; then this quantity is irrelevant.

@erwin14
Author

erwin14 commented Dec 13, 2014

Hi Stephan and Simon,

thanks for your fast response.

What I've learned from other issues is that q_wv is the unit quaternion if the world frame (in my case the 2D map) and the vision frame (PTAM frame) are both gravity aligned.
But this is not the case in my configuration.

My aim is to use ethzasl_sensor_fusion to get pose estimates of the robot in a 2D map, similar to amcl.

So I'm still not sure how to get the correct values for q_wv and the scale factor.

Thanks in advance.

@stephanweiss
Contributor

Can you elaborate in more detail on why your vision frame is not aligned with gravity in the 2D case?
Is the vision frame alignment arbitrary and unknown? If so, you may need to observe the acceleration measured by your robot while it is stationary (i.e. measure the gravity vector) and compute the roll and pitch of your vision frame w.r.t. that vector.

Best,
Stephan
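
A minimal sketch of this "measure gravity while stationary" step (the IMU topic name is an assumption; this gives the tilt of the IMU frame w.r.t. gravity, and getting the vision-frame tilt additionally involves q_ci and the current vision attitude):

    #!/usr/bin/env python
    # Sketch: measure the gravity direction while the robot is stationary and
    # compute the roll/pitch rotation that aligns the frame's z axis with it.
    # The IMU topic name is an assumption; adapt it to your setup.
    import numpy as np
    import rospy
    import tf.transformations as tfs
    from sensor_msgs.msg import Imu

    rospy.init_node('estimate_gravity_alignment')
    msg = rospy.wait_for_message('/imu/data', Imu)
    a = np.array([msg.linear_acceleration.x,
                  msg.linear_acceleration.y,
                  msg.linear_acceleration.z])
    g = a / np.linalg.norm(a)          # measured specific force (points roughly "up" at rest)
    z = np.array([0.0, 0.0, 1.0])      # desired gravity-aligned z axis
    axis = np.cross(g, z)
    angle = np.arccos(np.clip(np.dot(g, z), -1.0, 1.0))
    if np.linalg.norm(axis) < 1e-8:    # already aligned (handle anti-aligned case separately)
        q = np.array([0.0, 0.0, 0.0, 1.0])
    else:
        q = tfs.quaternion_about_axis(angle, axis / np.linalg.norm(axis))
    print('roll/pitch correction (x y z w): %s' % q)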



@erwin14
Author

erwin14 commented Dec 15, 2014

The vision frame (PTAM frame) is not aligned with gravity because the camera is front-looking.
Could you describe in some more detail how I could compute a pose estimate of the robot in a 2D map with the help of ethzasl_sensor_fusion?

I still have no idea how this could work.

Thanks in advance.

@stephanweiss
Contributor

How the camera is mounted has no influence on whether the vision frame is aligned with gravity or not. I assumed your vision frame is aligned with gravity since you are working on a 2D (gravity-aligned) map. The vision frame is aligned with the dominant plane of the first triangulated features (i.e. no matter how the camera is looking at a gravity-aligned flat floor, the vision frame will be gravity aligned).
So if my assumption about your 2D map is correct, your q_vw will be a unit quaternion.

Best,
Stephan



@tmagcaya

So even if the camera is pitched 90 degrees from a downward-looking frame (thus making it forward-looking) and we use a wall for grid initialization, do we still have to use a unit quaternion?

@Booooooosh

Thanks for sharing your work; it has really helped me fuse the output of SVO with the IMU!

But I've still run into some problems regarding the initialization. Sorry, my questions might be lengthy; thanks in advance for your time:

  1. I've read through this topic but I'm still confused about q_wv and the scale of the VO; I am not quite sure how to set these two values. The following is the layout of my IMU and camera (all coordinates follow the right-hand rule): the IMU frame's z direction points upward, and the downward-looking camera is simply rotated 180 degrees about the x axis, so I initialize q_ci to (0, -1, 0, 0) and p_ci to the translation between the camera and the IMU (see the sketch after this list). But what about q_wv? Actually, I am confused about what q_wv even is.
    [image: camera and IMU frame layout]

  2. About the scale: I read the code and found that the scale is propagated with zero dynamics, so it does not change during propagation. Does that mean I have to set the scale to the exact value for my camera? If so, how can I get this scale? (I am currently using a monocular camera with the SVO package provided by the Robotics Perception Group.)

  3. Also, the IMU biases are not being updated in the state propagation. Is it supposed to be like this? Then why are there bias noise parameters in the yaml file?
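
Regarding question 1, a quick sanity check of that quaternion (a sketch; quaternion orderings differ between libraries, so the result needs to be matched to whatever ordering the yaml file expects):

    # Sketch: quaternion for a 180-degree rotation about the x axis, to
    # cross-check the hand-derived q_ci. tf.transformations returns
    # quaternions in (x, y, z, w) order.
    from math import pi
    import tf.transformations as tfs

    q_xyzw = tfs.quaternion_about_axis(pi, (1, 0, 0))
    print('x y z w:', q_xyzw)                      # approx. [1, 0, 0, 0]
    print('w x y z:', q_xyzw[3], q_xyzw[0], q_xyzw[1], q_xyzw[2])
    # (0, -1, 0, 0) in (w, x, y, z) order describes the same rotation,
    # since q and -q represent identical rotations.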

Thanks very much!

Bosch Tang
