Configuring PTAM + IMU using your package for forward looking mode #22

saimanoj18 opened this issue Apr 3, 2014 · 1 comment

@saimanoj18

Hi Stephan,

Thank you so much, Stephan and the other maintainers, for developing this framework, making it publicly available, and actively maintaining it.

I have opened this issue after reading issue #21, where you offered to help with configuring PTAM + IMU in forward looking mode, but that discussion stopped once the downward looking configuration of the PTAM + IMU setup was working.

Before I move ahead, I understand that it is important to configure the downward looking (PTAM + IMU) mode correctly first. Here is a picture of the sensor + IMU setup:
![2014-04-03-100126](https://cloud.githubusercontent.com/assets/4462455/2598992/5e4fea1a-bad5-11e3-97e0-060d12ac73af.jpg)

I am using the RGB camera of the Kinect and an Xsens IMU. The discussion in #21 helped me figure out the right way of configuring the transformation between the camera and the IMU. The coordinate frames of the camera and the IMU are aligned as shown below.

![screenshot from 2014-04-03 10 13 04](https://cloud.githubusercontent.com/assets/4462455/2599016/01cfaf68-bad6-11e3-85c7-bf1ef8bffcd4.png)

The transformation used in ptam+imu_fix.yaml is given below:


```yaml
init/q_ci/w: 0.0
init/q_ci/x: -1.0
init/q_ci/y: 0.0
init/q_ci/z: 0.0

init/p_ci/x: 0.0
init/p_ci/y: 0.0
init/p_ci/z: 0.0
```
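As a sanity check (my own minimal sketch using numpy, not part of the package), this quaternion (w = 0, x = -1, y = 0, z = 0) should be a 180° rotation about the camera x-axis, i.e. it flips the y and z axes between the two frames:

```python
# Sanity check (not part of the package): the quaternion (w=0, x=-1, y=0, z=0)
# from the yaml above should be a 180-degree rotation about the x-axis.
import numpy as np

def quat_to_rot(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

print(quat_to_rot(0.0, -1.0, 0.0, 0.0))
# [[ 1.  0.  0.]
#  [ 0. -1.  0.]
#  [ 0.  0. -1.]]   -> x stays, y and z are inverted (180 deg about x)
```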


To test whether this setup works properly in downward looking mode, I have recorded a data set; the video can be seen here.
http://www.youtube.com/watch?v=QAO-I6nDI-g
The video of PTAM's GUI output, the 3D trajectory, and the map can be seen here.
http://www.youtube.com/watch?v=5HkOV55WjyA

The 3D trajectory from PTAM is shown below.
![image2](https://cloud.githubusercontent.com/assets/4462455/2600083/c9daf944-baf5-11e3-9f55-07177e8b0be1.png)

The 3D trajectory of PTAM + IMU (using your sensor fusion package) is given below.
![image1](https://cloud.githubusercontent.com/assets/4462455/2600084/ce1e57e4-baf5-11e3-93ce-2662b61d942f.png)

So far, I have gone through the tutorials and got them to work. If answering the questions below requires me to go through the code in depth or to read your thesis, please point me to the relevant parts.

Q1. From these results, do you think the current PTAM + IMU setup in downward looking mode is configured and working well?

Q1.1 Looking at the graphs, it seems to me that PTAM's output looks better than the PTAM + IMU output. Is that true?

Q1.2 Do you think I need to apply further translations and rotations to get the maximum performance from the system?

Q2. If the downward looking configuration is good, could you please help me configure the forward looking mode, or an arbitrary looking mode?

Q3. Is the current sensor_fusion package designed specifically for the downward looking configuration? If so, why?

Q4. When I try to run the current setup in forward looking mode, it says "FUZZY TRACKING TRIGGERED". What should I do next?

Thank you so much for your help!

Have a good day,
Sai Manoj

@stephanweiss
Contributor

Sai Manoj,

This probably crosses my previous reply to your other post.
In any case:

Q1: The saw-tooth shape of the trajectory suggests that your IMU biases have not converged well. You need some excitation in acceleration and angular velocity to make them converge. In general, more excitation is better.

Q2: The name of the game for an arbitrarily looking pose sensor (e.g. camera, Kinect) is to find the initial values of q and q_vw. q is the initial attitude of the IMU w.r.t. a gravity-aligned navigation frame. q_vw is the initial rotation between this navigation frame and the frame the sensor pose is perceived in (in this case, the PTAM map). When initializing PTAM down looking above a fairly horizontal scene, the map will be roughly gravity aligned and q_vw is the unit quaternion. You may find the initial map alignment by having a look at the gravity vector measured by the IMU (this gives you roll and pitch of the IMU, and thus q). Then compute q_vw using q, q_ci, and the attitude measurement of your pose sensor; a sketch of this computation follows below.
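Roughly, the procedure could look like the following (a minimal sketch, not the package's code: it assumes Hamilton quaternions in (w, x, y, z) order and the chain q_vc = q_vw ⊗ q_wi ⊗ q_ic; the accelerometer reading and PTAM attitude below are made-up example values, and the conventions may need adjusting to match the framework):

```python
# Sketch of the Q2 initialization (assumptions: Hamilton quaternions (w, x, y, z),
# chain q_vc = q_vw (x) q_wi (x) q_ic; adapt to your own conventions).
import numpy as np

def quat_mult(q1, q2):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_conj(q):
    """Inverse of a unit quaternion."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

# 1) Roll and pitch of the IMU from a static accelerometer reading
#    (the accelerometer measures the reaction to gravity; yaw is not
#    observable from gravity alone, so it is set to zero here).
acc = np.array([0.1, -0.2, 9.79])    # example reading [m/s^2]
roll = np.arctan2(acc[1], acc[2])
pitch = np.arctan2(-acc[0], np.hypot(acc[1], acc[2]))

# 2) q: initial IMU attitude w.r.t. the gravity-aligned navigation frame,
#    built as pitch-about-y composed with roll-about-x (yaw = 0).
q_roll = np.array([np.cos(roll/2), np.sin(roll/2), 0.0, 0.0])
q_pitch = np.array([np.cos(pitch/2), 0.0, np.sin(pitch/2), 0.0])
q = quat_mult(q_pitch, q_roll)

# 3) q_vw from q, q_ci and the first attitude measurement of the pose sensor:
#    q_vc = q_vw (x) q (x) q_ic  =>  q_vw = q_vc (x) q_ci (x) q^-1
q_ci = np.array([0.0, -1.0, 0.0, 0.0])   # from the yaml above
q_vc = np.array([1.0, 0.0, 0.0, 0.0])    # first PTAM attitude (example value)
q_vw = quat_mult(quat_mult(q_vc, q_ci), quat_conj(q))
print("q    =", q)
print("q_vw =", q_vw)
```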

Q3: No, it is not specially designed for down looking; only the initial values in the yaml file are set to match a down looking configuration, because our helicopters use down looking cams.

Q4: This happens because q_vw and q are not correctly initialized. With the procedure described in Q2 you should be able to find these initial values. Let me know if it worked.

Hope this helps.

Best,
Stephan


