Vizarr: issues about well viewer and resolution #24
To have a concrete example (within the UZH network & with Fractal authentication): https://fractal-bvc.mls.uzh.ch/vizarr/?source=https://fractal-bvc.mls.uzh.ch/vizarr/data/shares/prbvc.biovision.uzh/joel_testing/20240723_23_well_plate/20200812-CardiomyocyteDifferentiation14-Cycle1.zarr/B/09 does not appear to load.
5 MB is roughly the size of a single image, i.e. a single zarr chunk. The full-resolution array has this .zarray:

```json
{
  "chunks": [1, 1, 2160, 2560],
  "compressor": {
    "blocksize": 0,
    "clevel": 5,
    "cname": "lz4",
    "id": "blosc",
    "shuffle": 1
  },
  "dimension_separator": "/",
  "dtype": "<u2",
  "fill_value": 0,
  "filters": null,
  "order": "C",
  "shape": [3, 19, 19440, 20480],
  "zarr_format": 2
}
```

which suggests about 4000 chunks (3 × 19 × 9 × 8 = 4104). Loading ~4000 files of up to 5 MB each won't really work, I guess.
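The chunk counts follow directly from the shape and chunk sizes in the .zarray above; a quick sketch (values copied from the metadata shown here):

```typescript
// Chunk-count estimate for the full-resolution array from the .zarray above.
const shape = [3, 19, 19440, 20480];
const chunks = [1, 1, 2160, 2560];

// Chunks per dimension = ceil(shape / chunk) along each axis.
const perDim = shape.map((s, i) => Math.ceil(s / chunks[i])); // [3, 19, 9, 8]
const totalChunks = perDim.reduce((a, b) => a * b, 1); // 3 * 19 * 9 * 8 = 4104

// A single Z plane still needs all channels and all XY tiles:
const singlePlaneChunks = perDim[0] * perDim[2] * perDim[3]; // 3 * 9 * 8 = 216

console.log(perDim, totalChunks, singlePlaneChunks);
```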
This seems to be a resolution issue. If we use a smaller dataset, vizarr does load the well:
The question then becomes whether vizarr can load a well at a given resolution level.
Ah, it looks like vizarr doesn't have multi-resolution support for wells then. It doesn't have it at the plate level either, but there it defaults to loading the lowest resolution. Apparently it defaults to the highest resolution at the well level... That's an interesting choice. At the image level, it dynamically loads the best resolution for the given zoom level.
Yes, this is confirmed by looking at https://github.com/hms-dbmi/vizarr/blob/main/src/ome.ts:

```typescript
// in loadPlate:
// Lowest resolution is the 'path' of the last 'dataset' from the first multiscales
const { datasets } = imgAttrs.multiscales[0];
const resolution = datasets[datasets.length - 1].path;

// in loadWell:
utils.assert(utils.isMultiscales(imgAttrs), "Path for image is not valid.");
let resolution = imgAttrs.multiscales[0].datasets[0].path;
```

This is not fully clear yet - more on this later.
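For illustration, a hypothetical sketch (not vizarr's actual code) of what a lowest-resolution fallback for wells could look like, mirroring what loadPlate already does; the `Multiscales` interface here is a minimal stand-in for the OME-NGFF attributes vizarr parses:

```typescript
// Hypothetical helper: pick the lowest-resolution dataset for a well,
// the way loadPlate already does for plates.
interface Multiscales {
  multiscales: { datasets: { path: string }[] }[];
}

function lowestResolutionPath(imgAttrs: Multiscales): string {
  const { datasets } = imgAttrs.multiscales[0];
  // Per the OME-NGFF spec, datasets are ordered from highest to lowest
  // resolution, so the last entry is the smallest pyramid level.
  return datasets[datasets.length - 1].path;
}

// Example: a 5-level pyramid labelled "0".."4".
const attrs: Multiscales = {
  multiscales: [{ datasets: ["0", "1", "2", "3", "4"].map((p) => ({ path: p })) }],
};
console.log(lowestResolutionPath(attrs)); // "4"
```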
Why would it request 400 chunks? We have an array of shape [3, 19, 19440, 20480]. The second dimension is Z. We only load a single Z plane, but all the channels (3) and all the xy tiles (=> 9x8 chunks). Therefore, I'd expect 3x9x8 chunks to get loaded => 216 chunks.
When I took the screenshot, the application hadn't finished loading the page yet. It was still creating more requests.
Could be related to these warnings in the console though: |
My hypothesis: similar to napari's limit on maximum image size, vizarr has such a limit through WebGL. Instead of downsampling the image (like napari does), it just doesn't show anything in that case. Thus, what happens in the well case is:
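The WebGL limit can't be queried outside a browser, but the hypothesis amounts to a simple bounds check. A sketch, with an assumed (but common) value for `gl.getParameter(gl.MAX_TEXTURE_SIZE)`; real code would query the actual GPU limit:

```typescript
// Sketch of the hypothesized failure mode: if the stitched well exceeds the
// GPU's maximum texture size, WebGL refuses the texture (texImage2D warning)
// instead of downsampling. 16384 is an assumed, typical value.
const MAX_TEXTURE_SIZE = 16384;

function fitsInTexture(height: number, width: number, max = MAX_TEXTURE_SIZE): boolean {
  return height <= max && width <= max;
}

console.log(fitsInTexture(19440, 20480)); // large stitched well: false -> nothing rendered
console.log(fitsInTexture(2160, 2560)); // single image: true
```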
The texImage2D warning does not show up for smaller wells like the one from Tommaso above.
This is what we should understand better. It clearly does so (as in
Yes. But that example contains 6 requests for actual chunks; the other 12 are for .zarray files and other metadata.
I suspect it's just a current design limitation, e.g. the viewer was made to work on example data by loading the lowest resolution per plate and the highest resolution per well. I need to explore how it handles the multiple-images-per-well case. Stitching the images in a well into one big image is not super typical: we do it, FAIM at FMI does it, the Allen Cell people do it, but many public datasets still save wells as many separate images of approximately 2000x2000 pixels.
For the screenshot in #24 (comment), do you know how many of your 445 requests are loading images? If it's 445 image-loading requests, this is unexpected (although it would be consistent with the 2000 MB memory use reported there, since 2000 MB / 400 ≈ 5 MB).
It looks like there are about 400 requests to zarr chunks. The list has the same initial overhead plus 2 js files at the end, but the rest are zarr chunks: I count 13 non-chunks and 432 chunks, as if the 216 chunks are all loaded twice somehow. For reference, I use https://fractal-bvc.mls.uzh.ch/vizarr/?source=https://fractal-bvc.mls.uzh.ch/vizarr/data/shares/prbvc.biovision.uzh/joel_testing/20240723_23_well_plate/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/09 (the MIP version) to avoid confusion with the Z planes.
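To check the "216 chunks loaded twice" observation, one could export the browser's network log (e.g. as a HAR file) and count duplicate chunk URLs. A hypothetical sketch over a plain list of request URLs; the URL pattern assumes `dimension_separator: "/"` as in the .zarray above:

```typescript
// Hypothetical helper: given request URLs from a network log, count requests
// that target zarr chunks (paths ending in 4 numeric c/z/y/x coordinates)
// and how many of those chunks were fetched more than once.
function chunkRequestStats(urls: string[]) {
  const chunkRe = /\/\d+\/\d+\/\d+\/\d+$/; // "/"-separated chunk keys for a 4D array
  const counts = new Map<string, number>();
  for (const url of urls) {
    if (chunkRe.test(url)) counts.set(url, (counts.get(url) ?? 0) + 1);
  }
  const totalChunkRequests = [...counts.values()].reduce((a, b) => a + b, 0);
  const duplicatedChunks = [...counts.values()].filter((n) => n > 1).length;
  return { uniqueChunks: counts.size, totalChunkRequests, duplicatedChunks };
}

// Toy example: two chunks, one requested twice, plus a metadata request.
const stats = chunkRequestStats([
  "https://example.org/plate.zarr/B/09/0/0/0/0/0/0",
  "https://example.org/plate.zarr/B/09/0/0/0/0/0/0",
  "https://example.org/plate.zarr/B/09/0/0/0/0/0/1",
  "https://example.org/plate.zarr/B/09/0/.zarray",
]);
console.log(stats);
```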
That's definitely something to understand better! Overall, we have quite a bit of additional information now, so we can look for a public-dataset example and start a discussion over at vizarr.
It will be great to look into it with different public datasets indeed! Unfortunately, our larger test dataset here isn't public yet. To summarize the issues we've highlighted here:
Things that remain to be tested:
Related: hms-dbmi/vizarr#76
Most relevant part being:
=> I think we see the limits of this design choice :) |
We currently open ome-zarr images, and it works. When we go one level up (ome-zarr well), it does not work.
We should reproduce this issue locally and identify where the problem comes from, and we should open the issue on vizarr once we are able to observe it on some publicly-available dataset.