Hi, I recently discovered spatial-image and multiscale-spatial-image and find it super useful to have an xarray-based representation of NGFF datasets (and of multiscale images for that matter!).
Exploring how best to use this approach, I came across the following observation. Consider the following example:
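A minimal sketch of such an example (the array contents, dims and scale factors below are placeholders rather than the exact original snippet):

```python
import dask.array as da
from dask.diagnostics import ProgressBar
from spatial_image import to_spatial_image
from multiscale_spatial_image import to_multiscale

# Build a dask-backed SpatialImage and derive a three-scale multiscale image.
array = da.random.random((512, 512), chunks=(128, 128))
image = to_spatial_image(array, dims=("y", "x"))
multiscale = to_multiscale(image, scale_factors=[2, 4])  # scale0, scale1, scale2

print(multiscale.groups)
with ProgressBar():
    multiscale.compute()  # one progress bar appears per scale node
```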
('/', '/scale0', '/scale1', '/scale2')
[########################################] | 100% Completed | 104.17 ms
[########################################] | 100% Completed | 105.15 ms
[########################################] | 100% Completed | 115.48 ms
When computing (or writing) a MultiscaleSpatialImage, the scale datasets are computed sequentially and independently, as indicated in the above output by the three separate progress bars. And indeed, if I understand correctly, xarray datatree implements the compute method by calling it independently on each tree node. This means that although the dask arrays of the higher scales depend on the lowest scale, each scale is recomputed from scratch every time (caching is probably not relevant for large enough data).
In my current understanding this leads to redundant computation, which is especially relevant for large/expensive underlying dask graphs. It seems more desirable to obtain the downsampled data directly from the full-resolution data as its chunks are computed.
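Illustratively, the per-node behaviour amounts to something like this (continuing the sketch above, with assumed datatree semantics):

```python
from dask.diagnostics import ProgressBar

# Each scale node carries its own dask graph, so computing the nodes one by one
# re-executes the shared full-resolution tasks for every downsampled scale.
for name in ("scale0", "scale1", "scale2"):
    with ProgressBar():
        multiscale[name].compute()  # independent graph execution per node
```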
Is there currently a good way to obtain this behaviour for MultiscaleSpatialImages?
I guess that in principle one would want to bundle the computation of all tree nodes into a single call to dask.compute, so that the scheduler can take care of the optimization (see the sketch below). A reasonable alternative to the workflow exemplified above would probably be to stream a SpatialImage to file first, and then stream that into a MultiscaleSpatialImage. However, this of course involves an additional I/O step.
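A minimal sketch of what I mean by bundling (not an existing helper in the library; it assumes the image variable in each scale dataset is named "image"):

```python
import dask

# Collect the dask arrays of all scales and compute them in one call, so the
# scheduler evaluates the shared full-resolution tasks only once.
arrays = [multiscale[f"scale{i}"].ds["image"].data for i in range(3)]
scale0, scale1, scale2 = dask.compute(*arrays)
```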
Please excuse the lengthy issue and thanks a lot for any feedback / discussion :)
m-albert changed the title from "Computing MultiscaleSpatialImage" to "Computing/storing a MultiscaleSpatialImage with dask array data" on Aug 7, 2023
Yes, we can avoid redundant computation if we know we want to compute all the content: by serializing, e.g. to zarr, the first scale, then basing the dask task graph for the second scale on the serialized first scale, and so on.
Furthermore, for very large datasets, we only want to create task graphs per region at higher resolutions.
There is work in progress on these fronts in ngff-zarr to be ported to multiscale-spatial-image.
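Roughly along these lines (a plain dask + zarr sketch of the idea, not the ngff-zarr implementation; the data and the mean-coarsening are placeholders):

```python
import dask.array as da

# Write the full-resolution array to zarr, then build each following scale's
# task graph from the scale that was just written, so the expensive upstream
# graph is executed only once.
image = da.random.random((4, 1024, 1024), chunks=(1, 256, 256))

previous = image
for level in range(3):
    da.to_zarr(previous, "multiscale.zarr", component=f"scale{level}", overwrite=True)
    written = da.from_zarr("multiscale.zarr", component=f"scale{level}")
    # Downsample the serialized scale for the next level (2x along y and x).
    previous = da.coarsen(da.mean, written, {1: 2, 2: 2}, trim_excess=True)
```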