# Multi-resolution mesh
There are three cases in which a neuroglancer precomputed mesh is required:

- We have a segmentation volume, and we want to visualize it in 3D via a multi-resolution mesh.
- We have a mesh (usually in GLB format), and we want to convert it to a multi-resolution mesh to be viewed in neuroglancer.
- We have a number of oriented points, and a single mesh is desired to represent each of them. We copy the mesh to each point, orient it, and then create a multi-resolution mesh representation of all the oriented meshes. The initial input mesh is usually in GLB format.
In all cases, the process is a combination of using the `Igneous` package, along with some commonly used Python packages like `trimesh` and `zmesh`. We have included a number of patches to `Igneous` in this repository to modify its default behaviour, and to make fixes in some cases. The process is as follows for the different cases:
Most of the mesh generation functions share some common parameters:

```python
generate_mesh(
    max_lod: int = 2,
    min_mesh_chunk_dim: int = 16,
    bounding_box_size: tuple[float, float, float] | None = None,
)
```

- `max_lod`: The maximum number of levels of detail in the multi-resolution mesh. Additional levels of detail are created by iteratively decimating the mesh by half. The default is 2, which produces LOD0 (original), LOD1 (mid resolution), and LOD2 (low resolution). The higher the number, the more levels of detail are created, and the smaller the leaves in the resulting octree.
- `min_mesh_chunk_dim`: The minimum size of a side of a chunk in the multi-resolution mesh. This is used to determine the size of the octree leaves. The default is 16, which is a good balance between memory usage, quality, and allowing for many levels of detail. The smaller the chunk size, the more levels of detail can be created, but the more memory is required; and if the chunk gets very small, visible errors are more likely in neuroglancer.
- `bounding_box_size`: The size of the bounding box of the mesh in voxels. Not used in case 1, where the mesh is generated from a segmentation. If not provided, the bounding box is autocomputed, but this is usually inaccurate, so the bounding box should then generally be set as hidden in the neuroglancer state JSON.
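The interplay between `max_lod` and `min_mesh_chunk_dim` can be sketched as follows. This is an illustrative helper of our own (the function name and logic are not part of the package): each additional LOD doubles the chunk edge length, so the number of times the largest volume dimension can be halved down to the minimum chunk size caps the achievable LOD count.

```python
import math

def achievable_lods(volume_shape, min_mesh_chunk_dim=16, max_lod=2):
    # Each extra LOD doubles the chunk edge length, so the octree depth is
    # bounded by how many halvings fit between the largest dimension and
    # the minimum chunk size.
    largest = max(volume_shape)
    possible = max(0, math.floor(math.log2(largest / min_mesh_chunk_dim)))
    return min(max_lod, possible)

# A 256-voxel cube with 16-voxel minimum chunks allows up to 4 halvings,
# so the default max_lod of 2 is achievable.
print(achievable_lods((256, 256, 256)))
```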
- We convert the input zarr segmentation volume to the neuroglancer precomputed format (compressed segmentation format) using the `precompute` package.
- The precomputed segmentation is then converted to a multi-resolution mesh using our patched version of `Igneous`. The number of desired levels of the multi-resolution mesh can be specified (this may not always be reached, see limitations).
- The volume is processed using marching cubes to create a mesh representation of the volume. After this it is encoded using Draco, and further processed to attempt to remove potential artifacts.
The final output is a multi-resolution mesh in the neuroglancer precomputed format. This mesh is attached to the segmentation volume, which is also in the neuroglancer precomputed format. As such, the mesh and segmentation are linked and can be viewed together in neuroglancer in a single segmentation layer. The segmentation layer must have label 1 visible to view the mesh, and the mesh source should not be deactivated.
- The number of levels of detail may not be fully reached in the final multi-resolution mesh. This depends on the size of the input segmentation volume, the minimum size of an output chunk in the multi-resolution mesh, and the number of levels of detail requested.
  - Solution: automatically reduce the minimum size of an output chunk in the multi-resolution mesh to try to match the requested levels of detail. However, it has to be capped at a certain point, because if the octree structure is too fine, visible artifacts may appear in the mesh.
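The automatic chunk-size reduction described in this solution could look roughly like this. It is a minimal sketch with names of our own choosing, and the floor of 4 voxels is an assumed cap, not the package's actual value:

```python
import math

def adjust_chunk_dim(volume_shape, requested_lods, min_chunk_dim=16, floor_dim=4):
    # Halve the minimum chunk dimension until the requested number of LODs
    # fits in the octree, but never below floor_dim, since an overly fine
    # octree can produce visible artifacts in neuroglancer.
    chunk = min_chunk_dim
    largest = max(volume_shape)
    while chunk > floor_dim and math.floor(math.log2(largest / chunk)) < requested_lods:
        chunk //= 2
    return chunk

# A 64-voxel cube cannot support 4 LODs even at the 4-voxel floor,
# so the chunk dimension bottoms out at the cap.
print(adjust_chunk_dim((64, 64, 64), requested_lods=4))
```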
The mesh co-ordinates in this case are in pixel space. Nothing needs to be visible in the 3D viewer for this case.
A segmentation layer with the multi-resolution mesh attached to it. The mesh is in the neuroglancer precomputed format, and the segmentation layer has no image data in it. Neuroglancer supports this. The image source can be deactivated, and the mesh source should not be deactivated. The segmentation layer should have label 1 visible to view the mesh.
The mesh co-ordinates are in angstroms.
- We convert the input GLB mesh to meshes at multiple levels of detail. Each level of detail is decimated from the original mesh using `pyfqmr` to reduce the number of triangles.
- For each level of detail, we compute how many triangles would be needed to store a copy of the mesh at each oriented point. This total triangle count is compared against a user-provided triangle budget. The highest-resolution level with fewer triangles than the total triangle budget is selected as the first level of detail.
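The budget comparison in the step above can be sketched as follows (illustrative code; the function name and the list layout are our assumptions):

```python
def select_first_lod(lod_triangle_counts, n_points, triangle_budget):
    # lod_triangle_counts[0] is the finest level; each later entry is a
    # further-decimated copy. Pick the finest level whose total cost,
    # copied to every oriented point, fits within the budget.
    for lod, triangles in enumerate(lod_triangle_counts):
        if triangles * n_points <= triangle_budget:
            return lod
    return len(lod_triangle_counts) - 1  # fall back to the coarsest level

# 100 oriented points with a 600k-triangle budget: the 10k-triangle level
# would cost 1M triangles in total, so the 5k level (LOD 1) is chosen.
print(select_first_lod([10_000, 5_000, 2_500], 100, 600_000))
```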
- Based on the desired number of levels of detail in the final output neuroglancer multi-resolution mesh, we compute the minimum chunk size in the octree structure. The finer the chunk size, the more levels of detail can be achieved, but the more memory and time are required.
- At each level of detail, we create a full mesh for the scene by copying the input mesh (or decimated mesh, if at a level of detail above 0) to each point and orienting it.
- Each full scene-mesh for the different levels of detail is then placed into a single octree structure using the chunk size from the previous step, which represents the multi-resolution mesh. The octree is encoded using Draco.
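The copy-and-orient step can be sketched with plain numpy. The real pipeline assembles the scene with `trimesh`; this standalone version only shows the vertex transform and face reindexing involved, and the function name is our own:

```python
import numpy as np

def place_mesh_at_points(vertices, faces, points, rotations):
    # Copy the template mesh to every oriented point: rotate the vertices,
    # translate them to the point, and offset the face indices so each
    # copy references its own vertices in the combined scene mesh.
    all_vertices, all_faces = [], []
    offset = 0
    for point, rotation in zip(points, rotations):
        all_vertices.append(vertices @ rotation.T + point)
        all_faces.append(faces + offset)
        offset += len(vertices)
    return np.vstack(all_vertices), np.vstack(all_faces)

# Two copies of a single triangle placed at different points.
tri_v = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
tri_f = np.array([[0, 1, 2]])
identity = np.eye(3)
v, f = place_mesh_at_points(tri_v, tri_f, [np.zeros(3), np.ones(3)], [identity, identity])
print(v.shape, f.tolist())
```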
The final output is a multi-resolution mesh in the neuroglancer precomputed format. This mesh is attached to a segmentation layer, but the segmentation layer has no image data in it. Neuroglancer supports this. The image source can be deactivated, and the mesh source should not be deactivated. The segmentation layer should have label 1 visible to view the mesh.
- The input mesh is not necessarily used at the resolution level at which it is provided. There are two reasons for this. First, if the input mesh is very detailed with many faces, the octree calculation is slow and memory-intensive, as each face is compared against the bounds of the octree nodes. Second, the number of faces is halved at each level of detail, so starting with a very high number of faces can leave even the lowest resolution quite expensive.
- If this is not desired, the maximum number of faces used can be set by the user to infinity.
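The effect of the face cap and the per-level halving can be seen in a tiny sketch (illustrative; the function name and default cap value are assumptions, and `float("inf")` reproduces the "set to infinity" escape hatch):

```python
def faces_at_lod(input_faces, lod, max_faces=float("inf")):
    # The input face count is optionally capped, then halved once per
    # level of detail. Without a cap, a very detailed input keeps even
    # the coarsest LOD expensive.
    return int(min(input_faces, max_faces)) // (2 ** lod)

# A 4M-face input still has 1M faces at LOD 2; capping the input at
# 1M faces brings LOD 2 down to 250k.
print(faces_at_lod(4_000_000, 2), faces_at_lod(4_000_000, 2, max_faces=1_000_000))
```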
- The bounding box of the mesh that is autocomputed is usually inaccurate. Either the size of the bounding box (tomogram size in voxels) should be provided by the user, or, ideally, the bounding box should be hidden from these kinds of "fake" segmentation layers in the neuroglancer state when the JSON state is generated.
- Due to a bug in `Igneous`, passing a resolution level that is not `1nm` for the multi-resolution mesh will cause problems. As such, the resolution of the mesh is hardcoded to `1nm`. To convert this to the correct display resolution for the tomogram, a conversion must be specified in the JSON state for these "fake" segmentation layers. This can be performed by the `state_generator` module, by passing a scale parameter matching the actual resolution in nm.
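As a sketch, the scale needed for such a layer is just the real voxel size expressed in nm, since the mesh itself is written at a hardcoded 1 nm. The helper below is hypothetical; the actual conversion is handled by the `state_generator` module:

```python
def mesh_layer_scale(voxel_size_angstrom):
    # The mesh is hardcoded to a 1 nm resolution, so the display scale for
    # the "fake" segmentation layer is simply the true voxel size in nm
    # (1 nm = 10 angstrom).
    return voxel_size_angstrom / 10.0

# A tomogram at 7.84 angstrom/voxel needs a scale of 0.784 nm.
print(mesh_layer_scale(7.84))
```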