Replies: 2 comments 1 reply
-
This is a valid idea (not sure why you closed it), and we could find a solution. However, taking a step back and looking at the high-level picture, we might not even want the point cloud: what we really want is the mesh. So how about developing a GPU fusion step that takes a set of depth maps as input and directly outputs a mesh, for example something like TSDF fusion with visibility information?
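To make the proposal concrete, here is a minimal sketch of the core TSDF integration step: each depth map is fused into a voxel grid of truncated signed distances via a weighted running average, and voxels far behind the observed surface are skipped, which is the simplest form of visibility handling. The function name, parameters, and the CPU/numpy formulation are all illustrative assumptions, not the eventual GPU implementation; a mesh would then be extracted from the zero level set (e.g. with marching cubes).

```python
import numpy as np

def integrate_depth_map(tsdf, weights, depth, K, cam_T_world,
                        voxel_origin, voxel_size, trunc=0.5):
    """Fuse one depth map into a TSDF volume as a weighted running average.

    Illustrative CPU sketch: a real implementation would run this per-voxel
    kernel on the GPU. Assumes a pinhole camera with intrinsics K and a 4x4
    world-to-camera transform cam_T_world.
    """
    nx, ny, nz = tsdf.shape
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    # World-space coordinates of every voxel center.
    pts = voxel_origin + voxel_size * np.stack([ii, jj, kk], -1).reshape(-1, 3)
    # Transform into the camera frame and project.
    cam = (cam_T_world @ np.c_[pts, np.ones(len(pts))].T).T[:, :3]
    z = cam[:, 2]
    z_safe = np.where(z > 0, z, 1.0)
    u = np.round(cam[:, 0] * K[0, 0] / z_safe + K[0, 2]).astype(int)
    v = np.round(cam[:, 1] * K[1, 1] / z_safe + K[1, 2]).astype(int)
    h, w = depth.shape
    visible = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros(len(pts))
    d[visible] = depth[v[visible], u[visible]]
    # Signed distance along the viewing ray; voxels far behind the observed
    # surface are occluded and must not be updated (visibility information).
    sdf = d - z
    update = visible & (d > 0) & (sdf > -trunc)
    obs = np.clip(sdf / trunc, -1.0, 1.0)
    t, wgt = tsdf.reshape(-1), weights.reshape(-1)   # views into the volumes
    t[update] = (t[update] * wgt[update] + obs[update]) / (wgt[update] + 1.0)
    wgt[update] += 1.0
```

With several depth maps integrated this way, the weight channel doubles as a confidence map: voxels seen by many views dominate, and unseen voxels stay untouched.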
-
Sounds interesting; what would the process look like? Memory-efficient cells enabling denser voxel grids, a TSDF per depth map within each cell with interpolation on the GPU, then combining the cells on the CPU/GPU?
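The two cell-level operations mentioned above could look roughly like this: trilinear interpolation to read the TSDF at continuous positions, and a weight-normalized average to combine the per-depth-map volumes of one cell. Both function names and the free-space default of +1 are assumptions for illustration.

```python
import numpy as np

def sample_trilinear(tsdf, p):
    """Trilinearly interpolate a TSDF at a continuous voxel coordinate p."""
    i0 = np.floor(p).astype(int)
    f = p - i0
    val = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                val += w * tsdf[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return val

def merge_cells(cells):
    """Combine per-depth-map (tsdf, weight) volumes covering one cell into
    a single volume via weight-normalized averaging. Voxels with no
    observations default to +1 (free space, an assumed convention)."""
    total_w = sum(w for _, w in cells)
    fused = sum(t * w for t, w in cells)
    merged = np.where(total_w > 0, fused / np.maximum(total_w, 1e-9), 1.0)
    return merged, total_w
```

Because the merge is a pure weighted average, it is associative: cells can be fused pairwise on the GPU and the partial results combined later on the CPU in any order.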
-
Does this idea sound plausible?
1. Create a sparse/semi-dense version of the scene.
2. Derive bounding boxes from something like an octree/kd-tree.
3. Filter or divide the views of each bounding box so they do not exceed GPU memory.
4. Fuse each sub-scene on the GPU.
5. Combine all sub-scenes on the CPU, with an option for slicing the points (or point filtering) to not exceed memory.
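Steps 2 and 3 above could be sketched as a job planner: recursively split the sparse scene into boxes along the longest axis (kd-tree style), then chunk the views assigned to each box so no single GPU fusion job exceeds a view budget. Everything here is a hypothetical illustration; in particular, ranking cameras by distance to the box center is only a crude stand-in for real per-box view filtering.

```python
import numpy as np

def plan_fusion_jobs(points, cam_centers, max_pts_per_box, max_views_per_job):
    """Split a sparse scene into axis-aligned boxes and chunk the views per
    box into GPU-sized fusion jobs. Returns (box_lo, box_hi, view_ids) tuples.
    """
    pts = np.asarray(points, dtype=float)
    jobs = []

    def split(idx, lo, hi):
        if len(idx) <= max_pts_per_box:
            # Rank cameras by distance to the box center (crude visibility
            # proxy) and emit fixed-size chunks as independent GPU jobs.
            center = (lo + hi) / 2.0
            order = np.argsort(np.linalg.norm(cam_centers - center, axis=1))
            for s in range(0, len(order), max_views_per_job):
                jobs.append((lo.copy(), hi.copy(),
                             order[s:s + max_views_per_job]))
            return
        axis = int(np.argmax(hi - lo))          # split the longest side
        mid = (lo[axis] + hi[axis]) / 2.0
        mask = pts[idx, axis] < mid
        hi_l, lo_r = hi.copy(), lo.copy()
        hi_l[axis] = mid
        lo_r[axis] = mid
        if mask.any():
            split(idx[mask], lo, hi_l)
        if (~mask).any():
            split(idx[~mask], lo_r, hi)

    split(np.arange(len(pts)), pts.min(0), pts.max(0))
    return jobs
```

Each emitted job is independent, so boxes can be fused on the GPU in any order (or on several GPUs), and step 5 reduces to merging the per-box results.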