Transform of input map #6

Open
MartinEekGerhardsen opened this issue Mar 18, 2022 · 4 comments

@MartinEekGerhardsen

Hello!

I'd first like to thank you for this work, it's very interesting!

I have a question regarding the internal representation of the map: when looking through the code I notice that you subtract the minimum values from each axis of the points. I suppose this is relevant for the method? I got some (obviously) poor results when I assumed the input map and internal representation were the same.

I think it would be nice to make this clearer in the README, or potentially to add a transform between the original map and the internal representation, so that the initial position set in the launch file could be given relative to the original map.

@fercabe
Member

fercabe commented Mar 21, 2022

Hi Martin,

Thanks for your interest in our research work.

Regarding your question: yes, the grid representation (the Distance Field) of the 3D map is shifted so that all grid cells start at (0,0,0) and extend in the positive direction. This is just a trick to save some computation during map queries.

How is this computation saved? Basically, the 3D grid is a quantization of the actual 3D world according to the given resolution. The grid index that corresponds to a real position (X, Y, Z) is computed by the method "point2grid(...)" of the "Grid3d" class. During the optimization process, this method might be called up to a million times for a single large point cloud, because the optimizer evaluates every single point, and it does so over several iterations (normally 10 to 20). So, the faster the index can be computed, the better.

Shifting the map so that it starts at (0,0,0) reduces the number of operations performed in "point2grid", eliminating some subtractions and multiplications. Considering how massively this method is called, we thought the grid shifting was a minor inconvenience in exchange for the computational savings.
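
For illustration, here is a minimal sketch of the idea, not the actual Grid3d code; the member names (gridSizeX, oneDivRes, minX, ...) are illustrative assumptions only:

    #include <cstdint>

    // Minimal sketch, NOT the actual Grid3d implementation.
    struct GridSketch
    {
        uint32_t gridSizeX, gridSizeY;   // number of cells per axis
        double   oneDivRes;              // 1 / resolution
        double   minX, minY, minZ;       // minimum coordinates of the original map

        // Without the shift, every query subtracts the map origin first
        // (assumes the point lies inside the map, so the differences are >= 0).
        uint64_t point2gridUnshifted(double x, double y, double z) const
        {
            const uint64_t ix = static_cast<uint64_t>((x - minX) * oneDivRes);
            const uint64_t iy = static_cast<uint64_t>((y - minY) * oneDivRes);
            const uint64_t iz = static_cast<uint64_t>((z - minZ) * oneDivRes);
            return ix + iy * gridSizeX + iz * static_cast<uint64_t>(gridSizeX) * gridSizeY;
        }

        // With the map shifted so it starts at (0,0,0), the subtractions disappear;
        // saved over roughly a million calls per optimization, this adds up.
        uint64_t point2gridShifted(double x, double y, double z) const
        {
            const uint64_t ix = static_cast<uint64_t>(x * oneDivRes);
            const uint64_t iy = static_cast<uint64_t>(y * oneDivRes);
            const uint64_t iz = static_cast<uint64_t>(z * oneDivRes);
            return ix + iy * gridSizeX + iz * static_cast<uint64_t>(gridSizeX) * gridSizeY;
        }
    };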

You are right regarding the need to clarify this in the README file, so that users are advised from the beginning. I will add this information to the README in the coming days. Thanks very much for your suggestion.

Fernando.

@MartinEekGerhardsen
Author

Hi Fernando!

Thank you for the clarification, that makes a lot of sense!

Would it be possible to keep an unshifted point cloud for visualization and then shift the output transform? That way the internal representation and the computational savings remain, and the results published by the package are in the "expected" frame (that is, the original octomap frame). If I have time, I might look into doing this in the future.

For now, I added a static transform:

<node pkg="tf" type="static_transform_publisher" name="unshifter" args="$(arg negated_min_x) $(arg negated_min_y) $(arg negated_min_z) 0 0 0 global map 10" />

where global is the global_frame_id and map is the "unshifted" frame. Something like this could also be added internally, so that the negated minimum values for each axis are applied automatically.
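
A rough sketch of how the package could publish such a transform itself; the tf2_ros usage below is just an illustration, and the frame names and offset parameters are assumptions, not existing code in this repository:

    #include <string>
    #include <ros/ros.h>
    #include <geometry_msgs/TransformStamped.h>
    #include <tf2_ros/static_transform_broadcaster.h>

    // Sketch only: publish the "un-shift" from the shifted global frame to an
    // unshifted frame, using the negated map minimum (the same values as the
    // static_transform_publisher arguments above).
    void publishUnshift(double negatedMinX, double negatedMinY, double negatedMinZ,
                        const std::string& globalFrame,     // e.g. the global_frame_id
                        const std::string& unshiftedFrame)  // e.g. the original octomap frame
    {
        static tf2_ros::StaticTransformBroadcaster broadcaster;

        geometry_msgs::TransformStamped t;
        t.header.stamp = ros::Time::now();
        t.header.frame_id = globalFrame;
        t.child_frame_id = unshiftedFrame;
        t.transform.translation.x = negatedMinX;
        t.transform.translation.y = negatedMinY;
        t.transform.translation.z = negatedMinZ;
        t.transform.rotation.w = 1.0;   // identity rotation
        broadcaster.sendTransform(t);
    }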

Martin

@fercabe
Member

fercabe commented Mar 21, 2022

Hi Martin,

OK, the suggestion makes sense, and it should be easy to implement. Instead of using TF, I will include the offset in the position estimation. Hopefully I will implement it this week.

I will keep this issue open until I fix it.

Regards,
Fernando

@JiaoxianDu

JiaoxianDu commented May 20, 2024

Because I needed to test positioning accuracy in a simulation environment, I made some modifications.
First, define a public method in grid3d.hpp that returns m_offsetX/Y/Z:

    // Expose the internal map shift so callers can convert poses back to the
    // original (unshifted) map frame.
    std::vector<double> getOffsets() const
    {
        return {m_offsetX, m_offsetY, m_offsetZ};
    }

Then, after acquiring the offsets, modify the following in dllnode.hpp:
from:

    tx = mapTf.getOrigin().getX();
    ty = mapTf.getOrigin().getY();
    tz = mapTf.getOrigin().getZ();
    .....
    m_lastGlobalTf = tf::Transform(q, tf::Vector3(tx, ty, tz))*odomTf.inverse();

to:

    tx = mapTf.getOrigin().getX() - m_offsetX;
    ty = mapTf.getOrigin().getY() - m_offsetY;
    tz = mapTf.getOrigin().getZ() - m_offsetZ;
    .....
    m_lastGlobalTf = tf::Transform(q, tf::Vector3(tx + m_offsetX, ty + m_offsetY, tz + m_offsetZ))*odomTf.inverse();

This allows the estimated pose to be output in the "real" world coordinate frame without touching the dllsolver code.
