GICP: add Newton optimizer #5825
Conversation
378ef81 to 2b1d8ca
It replaces the current BFGS optimizer, which is still available via `useBFGS`. The Newton optimizer uses the Hessian matrix of the function, while BFGS only computes an approximation of the Hessian.

In a direct comparison (same GICP cost function to minimize and same starting point/same initial transformation), the Newton optimizer (`estimateRigidTransformationNewton`) takes only about one third of the time needed by the BFGS optimizer (`estimateRigidTransformationBFGS`), and it consistently finds a slightly better transformation (lower cost) than the BFGS optimizer. Note: `GICP.align()` as a whole becomes only somewhat faster (not 3x faster) because it also spends time on nearest-neighbour search.

Even in very difficult situations, e.g. when both point clouds are far away from the origin, so that even a small rotation change results in a large point-to-point distance change, the Newton optimizer still finds a good solution, while the BFGS optimizer often fails completely.
`M_PI/0.1` would mean `10*M_PI` radians, i.e. five full 360-degree turns. GICP can't reliably find the correct solution for such large rotations (it might, e.g., align the object upside down). When using `0.25*M_PI` (45 degrees), the estimated transformation is the correct one, and we can even tighten the error threshold to 1e-4.
I have tried reading up on BFGS vs. the Newton method, and I find it odd that most sources say BFGS should be computationally faster, since it only approximates the Hessian rather than calculating it fully as the Newton method does, yet you say the Newton implementation is faster? I guess it comes down to a better/optimized implementation rather than the algorithms? I haven't worked directly with these optimization functions, so I can only judge from an implementation view, but it looks good 👍 Should we have a second unit test where the BFGS algorithm is used, or is it covered in other tests?
I am not sure whether it is possible to say that one of the methods is always better than the other. Newton has the advantage that it has more information about the function to optimize, since it uses the exact Hessian. This is probably why Newton needs fewer iterations than BFGS until convergence: the Hessian approximation of BFGS is seemingly not very good for our objective function.
No, currently only the Newton method is tested since it is the new default. But I can duplicate the existing GICP test and make it run with BFGS.
Ah, yeah, makes sense.