ENH: Add checks to ensure vertex cropping bounds are not the same. (#888)

Signed-off-by: Michael Jackson <[email protected]>
imikejackson authored Mar 14, 2024
1 parent eed0c9b commit e9f0542
Showing 3 changed files with 84 additions and 26 deletions.
38 changes: 30 additions & 8 deletions src/Plugins/SimplnxCore/docs/IterativeClosestPointFilter.md
@@ -1,21 +1,43 @@
# IterativeClosestPoint
# Iterative Closest Point

## Group (Subgroup)

Reconstruction (Alignment)

## Description

This **Filter** estimates the rigid body transformation (i.e., rotation and translation) between two sets of points represted by **Vertex Geometries** using the *iterative closest point* (ICP) algorithm. The two **Vertex Geometries** are not required to have the same number of points. The **Filter** first initializes temporary storage for each set of points and a global transformation. Then, the alignment algorithm iterates through the following steps:
This **Filter** estimates the rigid body transformation (i.e., rotation and translation) between two sets of points represented
by **Vertex Geometries** using the *iterative closest point* (ICP) algorithm. The two **Vertex Geometries** are not required
to have the same number of points.

1. The closest point in the target geometry is determined for each point in the moving geoemetry; these are called the correspondence points. "Closeness" is measured based on Euclidean distance.
2. The rigid body transformation between the moving and target correspondences is solved for analytically using least squares.
3. The above transformation is applied to the moving points.
4. The global transformation is updated with the transformation computed for the current iteration.
The Iterative Closest Point (ICP) algorithm is a method used to minimize the difference between two clouds of points,
which we can describe in terms of "moving geometry" and "target geometry." The basic flow of the algorithm is:

Iterations proceed for a fixed number of user-defined steps. The final rigid body transformation is stored as a 4x4 transformation matrix in row-major order. The user has the option to apply this transformation to the moving **Vertex Geometry**. Note that this transformation is applied to the moving geometry *in place* if the option is selected.
1. Initial State: We start with two sets of points or geometries. The "moving geometry" is the one we aim to align with
the "target geometry." Initially, the moving geometry may be in any orientation or position relative to the target geometry.

ICP has a number of advantages, such as robustness to noise and no requirement that the two sets of points to be the same size. However, peformance may suffer if the two sets of points are of siginficantly different size.
1. Identify Correspondences: For each point in the moving geometry, we find the closest point in the target geometry. These
pairs of points are considered correspondences, based on the assumption that they represent the same point in space
after the moving geometry is properly aligned.

1. Estimate Transformation: With the correspondences identified, the algorithm calculates the optimal rigid body
transformation (which includes translation and rotation) that best aligns the moving geometry to the target
geometry. This step often involves minimizing a metric, such as the sum of squared distances between corresponding
points, to find the best fit.

1. Apply Transformation: The calculated transformation is applied to the moving geometry, aligning it closer to the
target geometry.

1. Iterate: Steps 2 through 4 are repeated iteratively. With each iteration, the moving geometry is brought into closer
alignment with the target geometry. The iterations continue until a stopping criterion is met, which could be a predefined
number of iterations, a minimum improvement threshold between iterations, or when the change in the error metric falls
below a certain threshold.

Final Alignment: Once the iterations stop, the algorithm concludes with the moving geometry optimally aligned to the target
geometry, based on the criteria set for minimizing the differences between them.

ICP has a number of advantages, such as robustness to noise and no requirement that the two sets of points be the same
size. However, performance may suffer if the two sets of points are of significantly different sizes.
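
To make the loop described above concrete, here is a minimal, self-contained sketch of one ICP alignment using Eigen. It is illustrative only and is not the filter's implementation: the helper names (`FindCorrespondences`, `AlignIcp`) are hypothetical, and the brute-force nearest-neighbor scan stands in for the k-d tree lookup a real implementation would use.

```cpp
// Illustrative sketch only (not the filter's source). Each ICP pass:
// find correspondences, solve the rigid transform, apply it, accumulate.
#include <Eigen/Dense>

#include <limits>

using PointCloud = Eigen::Matrix<float, 3, Eigen::Dynamic>;

// For each moving point, pick the closest target point (Euclidean distance).
// A real implementation would use a spatial index instead of this O(N*M) scan.
PointCloud FindCorrespondences(const PointCloud& moving, const PointCloud& target)
{
  PointCloud correspondences(3, moving.cols());
  for(Eigen::Index m = 0; m < moving.cols(); ++m)
  {
    float bestDistance = std::numeric_limits<float>::max();
    Eigen::Index bestIndex = 0;
    for(Eigen::Index t = 0; t < target.cols(); ++t)
    {
      float distance = (target.col(t) - moving.col(m)).squaredNorm();
      if(distance < bestDistance)
      {
        bestDistance = distance;
        bestIndex = t;
      }
    }
    correspondences.col(m) = target.col(bestIndex);
  }
  return correspondences;
}

// Run a fixed number of ICP iterations and return the accumulated 4x4
// transform that maps the original moving points onto the target.
Eigen::Matrix4f AlignIcp(PointCloud moving, const PointCloud& target, int numIterations)
{
  Eigen::Matrix4f globalTransform = Eigen::Matrix4f::Identity();
  for(int iteration = 0; iteration < numIterations; ++iteration)
  {
    PointCloud correspondences = FindCorrespondences(moving, target);

    // Least-squares rigid transform (rotation + translation, no scaling).
    Eigen::Matrix4f transform = Eigen::umeyama(moving, correspondences, false);

    // Apply this iteration's transform to the moving points.
    Eigen::Matrix3f rotation = transform.topLeftCorner<3, 3>();
    Eigen::Vector3f translation = transform.topRightCorner<3, 1>();
    PointCloud rotated = rotation * moving;
    moving = rotated.colwise() + translation;

    // Fold this iteration's transform into the global one.
    globalTransform = transform * globalTransform;
  }
  return globalTransform;
}
```

The returned 4x4 matrix accumulates the per-iteration transforms, mirroring how the filter builds the global transformation that can optionally be applied to the moving geometry.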

% Auto generated parameter table will be inserted here

@@ -83,8 +83,8 @@ Parameters CropVertexGeometry::parameters() const
GeometrySelectionParameter::AllowedTypes{IGeometry::Type::Vertex}));
params.insert(std::make_unique<DataGroupCreationParameter>(k_CroppedGeom_Key, "Cropped Vertex Geometry", "Created VertexGeom path", DataPath{}));
params.insert(std::make_unique<DataObjectNameParameter>(k_VertexDataName_Key, "Vertex Data Name", "Name of the vertex data AttributeMatrix", INodeGeometry0D::k_VertexDataName));
params.insert(std::make_unique<VectorFloat32Parameter>(k_MinPos_Key, "Min Pos", "Minimum vertex position", std::vector<float32>{0, 0, 0}, std::vector<std::string>{"X", "Y", "Z"}));
params.insert(std::make_unique<VectorFloat32Parameter>(k_MaxPos_Key, "Max Pos", "Maximum vertex position", std::vector<float32>{0, 0, 0}, std::vector<std::string>{"X", "Y", "Z"}));
params.insert(std::make_unique<VectorFloat32Parameter>(k_MinPos_Key, "Min Pos", "Minimum vertex position", std::vector<float32>{0.0f, 0.0f, 0.0f}, std::vector<std::string>{"X", "Y", "Z"}));
params.insert(std::make_unique<VectorFloat32Parameter>(k_MaxPos_Key, "Max Pos", "Maximum vertex position", std::vector<float32>{0.0f, 0.0f, 0.0f}, std::vector<std::string>{"X", "Y", "Z"}));
params.insert(std::make_unique<MultiArraySelectionParameter>(k_TargetArrayPaths_Key, "Vertex Data Arrays to crop", "The complete path to all the vertex data arrays to crop", std::vector<DataPath>(),
MultiArraySelectionParameter::AllowedTypes{IArray::ArrayType::DataArray}, nx::core::GetAllDataTypes()));
return params;
@@ -117,6 +117,32 @@ IFilter::PreflightResult CropVertexGeometry::preflightImpl(const DataStructure&

OutputActions actions;

{
std::vector<Error> errors;
bool xDimError = (xMax == xMin);
bool yDimError = (yMax == yMin);
bool zDimError = (zMax == zMin);
if(xDimError)
{
std::string ss = fmt::format("X Max ({}) is equal to X Min ({}). There would be no points remaining in the geometry.", xMax, xMin);
errors.push_back(Error{-58550, std::move(ss)});
}
if(yDimError)
{
std::string ss = fmt::format("Y Max ({}) is equal to Y Min ({}). There would be no points remaining in the geometry.", yMax, yMin);
errors.push_back(Error{-58551, std::move(ss)});
}
if(zDimError)
{
std::string ss = fmt::format("Z Max ({}) is equal to Z Min ({}). There would be no points remaining in the geometry.", zMax, zMin);
errors.push_back(Error{-58552, std::move(ss)});
}
if(xDimError || yDimError || zDimError)
{
return PreflightResult{{nonstd::make_unexpected(errors)}};
}
}

{
std::vector<Error> errors;
bool xDimError = (xMax < xMin);
@@ -125,17 +151,17 @@ IFilter::PreflightResult CropVertexGeometry::preflightImpl(const DataStructure&
if(xDimError)
{
std::string ss = fmt::format("X Max ({}) less than X Min ({})", xMax, xMin);
errors.push_back(Error{-5550, std::move(ss)});
errors.push_back(Error{-58553, std::move(ss)});
}
if(yDimError)
{
std::string ss = fmt::format("Y Max ({}) less than Y Min ({})", yMax, yMin);
errors.push_back(Error{-5550, std::move(ss)});
errors.push_back(Error{-58554, std::move(ss)});
}
if(zDimError)
{
std::string ss = fmt::format("Z Max ({}) less than Z Min ({})", zMax, zMin);
errors.push_back(Error{-5550, std::move(ss)});
errors.push_back(Error{-58555, std::move(ss)});
}
if(xDimError || yDimError || zDimError)
{
@@ -146,7 +172,7 @@ IFilter::PreflightResult CropVertexGeometry::preflightImpl(const DataStructure&
auto* vertexAM = vertexGeom.getVertexAttributeMatrix();
if(vertexAM == nullptr)
{
return {MakeErrorResult<OutputActions>(-5551, "Could not find vertex data AttributeMatrix in selected Vertex Geometry"), {}};
return {MakeErrorResult<OutputActions>(-58556, "Could not find vertex data AttributeMatrix in selected Vertex Geometry"), {}};
}
auto tupleShape = vertexAM->getShape();
usize numTuples = std::accumulate(tupleShape.cbegin(), tupleShape.cend(), static_cast<usize>(1), std::multiplies<>());
@@ -187,7 +213,7 @@ Result<> CropVertexGeometry::executeImpl(DataStructure& dataStructure, const Arg
auto zMax = posMax[2];

auto& vertices = dataStructure.getDataRefAs<VertexGeom>(vertexGeomPath);
int64 numVerts = vertices.getNumberOfVertices();
int64 numVerts = static_cast<int64>(vertices.getNumberOfVertices());
auto* verticesPtr = vertices.getVertices();
auto& allVerts = verticesPtr->getDataStoreRef();
std::vector<int64> croppedPoints;
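
For reference, the degenerate-bounds checks this commit adds to `preflightImpl` boil down to a small amount of standalone logic. The sketch below is illustrative only: `ValidateCropBounds` is a hypothetical helper, and the filter itself reports `Error` objects with the codes -58550 through -58555 shown above rather than plain strings.

```cpp
#include <array>
#include <cstddef>
#include <string>
#include <vector>

#include <fmt/format.h> // fmt is already used by the filter code above

// Returns one message per axis whose bounds would produce an empty or invalid crop.
std::vector<std::string> ValidateCropBounds(const std::array<float, 3>& minPos, const std::array<float, 3>& maxPos)
{
  static constexpr std::array<const char*, 3> k_AxisNames = {"X", "Y", "Z"};
  std::vector<std::string> errors;
  for(std::size_t i = 0; i < 3; ++i)
  {
    if(maxPos[i] == minPos[i])
    {
      // Equal bounds: the crop region has zero extent along this axis, so no vertices survive.
      errors.push_back(fmt::format("{} Max ({}) is equal to {} Min ({}). There would be no points remaining in the geometry.",
                                   k_AxisNames[i], maxPos[i], k_AxisNames[i], minPos[i]));
    }
    else if(maxPos[i] < minPos[i])
    {
      // Inverted bounds: the maximum must not be smaller than the minimum.
      errors.push_back(fmt::format("{} Max ({}) less than {} Min ({})", k_AxisNames[i], maxPos[i], k_AxisNames[i], minPos[i]));
    }
  }
  return errors; // an empty vector means the bounds are usable
}
```

If any message is produced, the preflight fails with the collected errors and the filter never executes.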
@@ -22,6 +22,7 @@ constexpr int32 k_MissingMovingVertex = -4500;
constexpr int32 k_MissingTargetVertex = -4501;
constexpr int32 k_BadNumIterations = -4502;
constexpr int32 k_MissingVertices = -4503;
constexpr int32 k_EmptyVertices = -4505;

template <typename Derived>
struct VertexGeomAdaptor
@@ -31,10 +32,10 @@ struct VertexGeomAdaptor
size_t m_NumComponents = 0;
size_t m_NumTuples = 0;

VertexGeomAdaptor(const Derived& obj_)
explicit VertexGeomAdaptor(const Derived& obj_)
: obj(obj_)
{
// These values never change for the life time of this object so cache them now.
// These values never change for the lifetime of this object so cache them now.
verts = derived()->getVertices();
m_NumComponents = verts->getNumberOfComponents();
m_NumTuples = verts->getNumberOfTuples();
@@ -187,10 +188,19 @@ Result<> IterativeClosestPointFilter::executeImpl(DataStructure& data, const Arg
return {nonstd::make_unexpected(std::vector<Error>{Error{k_MissingVertices, ss}})};
}

auto* movingPtr = movingVertexGeom->getVertices();
Float32Array& targetPtr = *(targetVertexGeom->getVertices());

auto* movingStore = movingPtr->getDataStore();
Float32Array& movingVerticesRef = *(movingVertexGeom->getVertices());
if(movingVerticesRef.empty())
{
auto ss = fmt::format("Moving Vertex Geometry does not contain any vertices");
return {nonstd::make_unexpected(std::vector<Error>{Error{k_EmptyVertices, ss}})};
}
Float32Array& targetVerticesRef = *(targetVertexGeom->getVertices());
if(targetVerticesRef.empty())
{
auto ss = fmt::format("Target Vertex Geometry does not contain any vertices");
return {nonstd::make_unexpected(std::vector<Error>{Error{k_EmptyVertices, ss}})};
}
auto* movingStore = movingVerticesRef.getDataStore();
std::vector<float32> movingVector(movingStore->begin(), movingStore->end());
float32* movingCopyPtr = movingVector.data();
DataStructure tmp;
@@ -236,9 +246,9 @@ Result<> IterativeClosestPointFilter::executeImpl(DataStructure& data, const Arg
nanoflann::KNNResultSet<float> results(nn);
results.init(&identifier, &dist);
index.findNeighbors(results, movingCopyPtr + (3 * j), nanoflann::SearchParams());
dynTargetPtr[3 * j + 0] = targetPtr[3 * identifier + 0];
dynTargetPtr[3 * j + 1] = targetPtr[3 * identifier + 1];
dynTargetPtr[3 * j + 2] = targetPtr[3 * identifier + 2];
dynTargetPtr[3 * j + 0] = targetVerticesRef[3 * identifier + 0];
dynTargetPtr[3 * j + 1] = targetVerticesRef[3 * identifier + 1];
dynTargetPtr[3 * j + 2] = targetVerticesRef[3 * identifier + 2];
}

Eigen::Map<PointCloud> moving_(movingCopyPtr, 3, numMovingVerts);
@@ -271,11 +281,11 @@ Result<> IterativeClosestPointFilter::executeImpl(DataStructure& data, const Arg
{
for(usize j = 0; j < numMovingVerts; j++)
{
Eigen::Vector4f position((*movingPtr)[3 * j + 0], (*movingPtr)[3 * j + 1], (*movingPtr)[3 * j + 2], 1);
Eigen::Vector4f position(movingVerticesRef[3 * j + 0], movingVerticesRef[3 * j + 1], movingVerticesRef[3 * j + 2], 1);
Eigen::Vector4f transformedPosition = globalTransform * position;
for(usize k = 0; k < 3; k++)
{
(*movingPtr)[3 * j + k] = transformedPosition.data()[k];
movingVerticesRef[3 * j + k] = transformedPosition.data()[k];
}
}
}
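
For context on the correspondence search above: the filter finds each moving point's nearest target point with nanoflann through the `VertexGeomAdaptor` shown earlier. The sketch below is a minimal, self-contained version of that usage pattern under two assumptions: it relies on the classic nanoflann API (`KDTreeSingleIndexAdaptor`, `SearchParams`) that the call sites in this diff already use, and `SimplePointCloud` / `FindClosestPoint` are hypothetical names standing in for the filter's own adaptor and loop.

```cpp
#include <cstddef>
#include <vector>

#include <nanoflann.hpp>

// Hypothetical flat xyz point container standing in for VertexGeomAdaptor.
struct SimplePointCloud
{
  std::vector<float> xyz; // packed as x0, y0, z0, x1, y1, z1, ...

  // Interface required by nanoflann's dataset adaptor concept:
  std::size_t kdtree_get_point_count() const
  {
    return xyz.size() / 3;
  }
  float kdtree_get_pt(std::size_t idx, std::size_t dim) const
  {
    return xyz[3 * idx + dim];
  }
  template <class BBOX>
  bool kdtree_get_bbox(BBOX&) const
  {
    return false; // let nanoflann compute the bounding box itself
  }
};

using KDTree = nanoflann::KDTreeSingleIndexAdaptor<nanoflann::L2_Simple_Adaptor<float, SimplePointCloud>, SimplePointCloud, 3>;

// Find the index of the target point closest to the given query point.
std::size_t FindClosestPoint(const SimplePointCloud& target, const float queryPoint[3])
{
  KDTree index(3, target, nanoflann::KDTreeSingleIndexAdaptorParams(10));
  index.buildIndex();

  std::size_t closestIndex = 0;
  float distanceSquared = 0.0f;
  nanoflann::KNNResultSet<float> resultSet(1);
  resultSet.init(&closestIndex, &distanceSquared);
  index.findNeighbors(resultSet, queryPoint, nanoflann::SearchParams());
  return closestIndex;
}
```

In the filter the index is built once over the target geometry and queried for every moving point on every iteration; rebuilding it per query, as this sketch does for brevity, would be wasteful in real code.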
