cleanup readme
manzt committed Jul 19, 2024
1 parent 70f6b50 commit 1c5984c
Showing 1 changed file with 59 additions and 27 deletions.
<h1>
  <p align="center">
    <img src="./assets/logo-wide.svg" alt="vizarr" width="200">
  </p>
</h1>
<samp>
  <p align="center">
    <span>view multiscale zarr images online and in notebooks</span>
    <br>
    <br>
    <a href="https://hms-dbmi.github.io/vizarr/?source=https://minio-dev.openmicroscopy.org/idr/v0.3/idr0062-blin-nuclearsegmentation/6001240.zarr">app</a> ·
    <a href="./python/notebooks/getting_started.ipynb">getting started</a>
  </p>
</samp>

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/hms-dbmi/vizarr/blob/main/python/notebooks/mandelbrot.ipynb)

<center>
  <img src="./assets/screenshot.png" alt="Multiscale OME-Zarr in Jupyter Notebook with Vizarr" width="400">
</center>

**Vizarr** is a minimal, purely client-side program for viewing zarr-based images.

- ⚡ **GPU accelerated rendering** with [Viv](https://github.com/hms-dbmi/viv)
- 💻 Purely **client-side** zarr access with [zarrita.js](https://github.com/manzt/zarrita.js)
- 🌎 A **standalone [web app](https://hms-dbmi.github.io/vizarr/)** for viewing entirely in the browser.
- 🐍 An [anywidget](https://github.com/manzt/anywidget) **Python API** for
  programmatic control in notebooks.
- 📦 Supports any `zarr-python` [store](https://zarr.readthedocs.io/en/stable/api/storage.html)
  as a backend.
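A zarr store is, at its core, a mapping from chunk keys (e.g. `".zgroup"`, `"0.0"`) to raw bytes, which is what makes pluggable backends possible. As a rough illustration only (this `MemoryStore` is hypothetical, not vizarr's or zarr-python's actual code), a minimal store might look like:

```python
from collections.abc import MutableMapping


class MemoryStore(MutableMapping):
    """A minimal zarr-style store: a mapping from keys to raw bytes.

    Real stores may serve these bytes from the local filesystem,
    HTTP, S3, etc. -- the viewer only needs the mapping interface.
    """

    def __init__(self, entries=None):
        self._entries = dict(entries or {})

    def __getitem__(self, key):
        return self._entries[key]

    def __setitem__(self, key, value):
        self._entries[key] = value

    def __delitem__(self, key):
        del self._entries[key]

    def __iter__(self):
        return iter(self._entries)

    def __len__(self):
        return len(self._entries)


# Seed the store with minimal zarr v2 group metadata.
store = MemoryStore({".zgroup": b'{"zarr_format": 2}'})
```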

### Data types

**Vizarr** supports viewing 2D slices of n-dimensional Zarr arrays, allowing
users to choose a single channel or blended composites of multiple channels
during analysis. It has special support for the developing
[OME-NGFF format](https://github.com/ome/ngff) for multiscale and multimodal
images. Currently, Viv supports `int8`, `int16`, `int32`, `uint8`, `uint16`,
`uint32`, `float32`, and `float64` arrays, but contributions are welcome to
support more `np.dtype`s!
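A quick way to check an array's dtype against that list before handing it to the viewer (a small illustrative helper, not part of vizarr's API):

```python
# NumPy dtype names that Viv can currently render, per the list above.
SUPPORTED_DTYPES = {
    "int8", "int16", "int32",
    "uint8", "uint16", "uint32",
    "float32", "float64",
}


def is_renderable(dtype_name: str) -> bool:
    """Return True if Viv can render arrays of the given dtype name."""
    return dtype_name in SUPPORTED_DTYPES
```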

### Getting started

Copy and paste a URL to a Zarr store as the `?source` query parameter in the
**[web app](https://hms-dbmi.github.io/vizarr/)**. For example, to view the
[example data](https://minio-dev.openmicroscopy.org/idr/v0.3/idr0062-blin-nuclearsegmentation/6001240.zarr)
from the IDR, you can use the following URL:

```
https://hms-dbmi.github.io/vizarr/?source=https://minio-dev.openmicroscopy.org/idr/v0.3/idr0062-blin-nuclearsegmentation/6001240.zarr
```
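Because `source` is an ordinary query parameter, viewer links can also be generated programmatically. A sketch using only the Python standard library (the store URL is the IDR example above; `urlencode` percent-encodes it, which decodes to the same value):

```python
from urllib.parse import urlencode

APP = "https://hms-dbmi.github.io/vizarr/"
source = (
    "https://minio-dev.openmicroscopy.org/idr/v0.3/"
    "idr0062-blin-nuclearsegmentation/6001240.zarr"
)

# Build the shareable viewer link for this store.
url = f"{APP}?{urlencode({'source': source})}"
print(url)
```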

Otherwise, you can try out the Python API in a Jupyter Notebook by following
[the examples](./python/notebooks/getting_started.ipynb).

```sh
pip install vizarr
```

```python
import vizarr
import zarr

store = zarr.open("./path/to/ome.zarr")
viewer = vizarr.Viewer()
viewer.add_image(store)
viewer
```

### Limitations

`vizarr` was built to support a remote image registration use case in which
multiple pyramidal OME-Zarr images are viewed within a Jupyter Notebook. Other
Zarr arrays are supported but less well tested. More information on viewing
generic Zarr arrays can be found in the example notebooks.

### Citation

If you are using Vizarr in your research, please cite our paper:

> Trevor Manz, Ilan Gold, Nathan Heath Patterson, Chuck McCallum, Mark S Keller, Bruce W Herr II, Katy Börner, Jeffrey M Spraggins, Nils Gehlenborg,
