Commit 70c4e79

Small jpegs?
1 parent 1f511f0 commit 70c4e79

11 files changed: +52 -28 lines changed

docs/deep/images/instanseg.jpg (binary): 21 KB added
docs/deep/images/instanseg.png (binary): 19.8 KB removed
Four further binary image files replaced (filenames not shown): 310 KB added / 1.64 MB removed; 337 KB added / 1.45 MB removed; 34.3 KB added / 33.3 KB removed; 213 KB added / 1.46 MB removed

docs/deep/instanseg.md

+52 -28 lines changed
@@ -2,9 +2,10 @@
 (instanseg-extension)=
-The [InstanSeg QuPath extension](https://github.com/qupath/qupath-extension-instanseg) provides a new and improved way to perform segmentation of both cells and nuclei in QuPath using deep learning. It uses pre-trained models using the original [InstanSeg code](https://github.com/instanseg/instanseg).
+The [InstanSeg QuPath extension](https://github.com/qupath/qupath-extension-instanseg) provides a new and improved way to perform segmentation of both cells and nuclei in QuPath using deep learning.
+It uses pre-trained models from the original [InstanSeg code](https://github.com/instanseg/instanseg).
-Developed by Thibaut Goldsborough and other members of the [QuPath group](https://institute-genetics-cancer.ed.ac.uk/research/research-groups-a-z/peter-bankhead-research-group) at the University of Edinburgh.
+Developed by the [QuPath group](https://institute-genetics-cancer.ed.ac.uk/research/research-groups-a-z/peter-bankhead-research-group) at the University of Edinburgh.
 :::{admonition} Cite the paper!
 :class: warning
@@ -39,67 +40,78 @@ If you have an NVIDIA GPU and want to use it with InstanSeg, you will need to in
 InstanSeg can be dragged and dropped into QuPath when it's running. After this, restart QuPath and you should see the extension within the Extensions menu {menuselection}`Extensions --> InstanSeg`.
 The InstanSeg dialog will appear as shown below:

-:::{figure} images/instanseg.png
+:::{figure} images/instanseg.jpg
 :class: shadow-image small-image

 The InstanSeg user interface
 :::

 :::{note}
-Please note that you'll need internet access to download models and PyTorch if required.
+Please note that you'll need internet access to download models and PyTorch, if required.
 :::

 ## Using InstanSeg

 ### 1. Choose directory to store models

+InstanSeg uses deep learning models (neural networks) to detect objects in images.
+To do this, it needs to download models from the internet (or it can use locally-stored models, more on that later).
 Click on the folder icon to choose a directory either containing models you already have, or just where you would like to save future downloaded ones.

 ### 2. Select a model

-Select the dropdown box to see the available models. Options with a cloud icon will need to be downloaded, as they are not local to your machine. To do this, select the model and click the download button to fetch them. If you have local models in your directory, you can also select these from the dropdown box. Be sure to select the relevant model for the type of image you are working with (unless you are being experimental!). The model being used here is for brightfield images and is being used on a haematoxylin and DAB stain, but it could be used on images captured using other stains.
+Select the dropdown box to see the available models.
+Options with a cloud icon will need to be downloaded, as they are not local to your machine.
+To do this, select the model and click the download button to fetch it.
+If you have local models in your directory, you can also select these from the dropdown box.
+
+You should select a suitable model for the type of image you are working with.
+For example, for brightfield images we would usually use the brightfield_nuclei model, which was trained on images stained with haematoxylin and DAB, but this model can also be used on images captured using other stains.

 :::{note}
 Please note that internet access is needed to download the models.
 :::

 ### 3. Create or select an annotation

-Create an annotation or select pre-existing annotations/tiles/TMA cores you wish to run the model on. It's recommended to keep the annotation smaller if this is the first time running InstanSeg. This allows you to test the processing speed before running it on a larger region. This might take some time, depending on your computer's processing speed and whether you're using a GPU.
+Create an annotation or select pre-existing annotations/tiles/TMA cores you wish to run the model on.
+It's recommended to keep the annotation small if this is the first time you are running InstanSeg.
+This allows you to test the processing speed before running it on a larger region.
+This might take some time, depending on your computer's processing speed and whether you're using a GPU.

 ### 4. Run the model

-When you click `Run`, InstanSeg will check for PyTorch. If this is not on your machine it will download it for you (this may well be > 100 MB, so may take a while). Once this is done, the model will run and you will see the results in the viewer.
+When you click `Run`, InstanSeg will check for PyTorch.
+If this is not on your machine, it will download it for you (this could be > 100 MB, so may take a while).
+Once this is done, the model will run and you will see the results in the viewer.

-:::{figure} images/instanseg_running.png
+:::{figure} images/instanseg_running.jpg
 :class: shadow-image large-image

 Running InstanSeg
 :::

 ### 5. Viewing Results

-The results will be displayed in the viewer. The visibility of detections can be turned on or off using the show/hide detection objects button in the toolbar. Additionally, using the fill/unfill detection objects button and the opacity slider in the toolbar can help distinguish the cells.
+The results will be displayed in the viewer.
+The visibility of detections can be turned on or off using the show/hide detection objects button in the toolbar.
+Additionally, using the fill/unfill detection objects button and the opacity slider in the toolbar can help distinguish the cells.

-:::{figure} images/instanseg_bf_results.png
+:::{figure} images/instanseg_bf_results.jpg
 :class: shadow-image large-image

 The results of running InstanSeg on a brightfield image.

 :::

-:::{figure} images/instanseg_fl_results.png
-:class: shadow-image large-image
-
-The results of running InstanSeg on a fluorescence image.
-
 :::

 ## Additional Options

-InstanSeg has quite a few options to adapt to your device and preferences. These can be seen below:
+InstanSeg has quite a few options to adapt to your device and preferences.
+These can be seen below:

-:::{figure} images/instanseg_options.png
+:::{figure} images/instanseg_options.jpg
 :class: shadow-image small-image

 The additional options available in InstanSeg
@@ -110,50 +122,62 @@ The additional options available in InstanSeg
 The options available will depend upon your computer's capabilities (at least as far as they could be discerned by Deep Java Library):

 - **CPU**: This is generally the safest - and slowest - option, because it should be supported on all computers.
-- **MPS**: This stands for *Metal Performance Shaders*, and should be available on recent Apple silicon - it is the Mac version of GPU acceleration
+- **MPS**: This stands for *Metal Performance Shaders*, and should be available on recent Macs - it is the Mac version of GPU acceleration.
 - **GPU**: This should appear if you have an NVIDIA GPU, CUDA... and a little bit of luck.

-If either MPS or GPU work for you, they should reduce the time required for inference by a *lot*. However configuration for GPU can be tricky, as it will depend upon other hardware and software on your computer - CUDA in particular. For more info, see {doc}`gpu`.
+If either MPS or GPU works for you, it should reduce the time required for inference by a *lot*.
+However, configuration for GPU can be tricky, as it will depend upon other hardware and software on your computer - CUDA in particular.
+For more info, see {doc}`gpu`.

 :::{admonition} PyTorch & CUDA versions
 :class: tip

-The InstanSeg extension uses Deep Java Library to manage its PyTorch installation. It won't automatically find any existing PyTorch you might have installed: Deep Java Library will download its own.
+The InstanSeg extension uses Deep Java Library to manage a PyTorch installation.
+It won't automatically find any existing PyTorch you might have installed: Deep Java Library will download its own.

 If you have a compatible GPU, and want CUDA support, you'll need to ensure you have an appropriate CUDA installed *before* PyTorch is downloaded.
 :::

 ### Threads

-This is the number of CPU/GPU threads to use to fetch and submit image tiles. 1 is usually to little and high numbers may not be beneficial. We suggest between 2-4 hence why this is the default.
+This is the number of CPU/GPU threads used to fetch and submit image tiles.
+1 is usually too few, and high numbers may not be beneficial.
+We suggest between 2 and 4, hence the default.
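The role of this setting can be pictured with a small sketch (this is not the extension's actual code; `processTiles` and its structure are made up for illustration): tiles are submitted to a fixed pool of worker threads, and the pool size is what the Threads option controls.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class TileThreads {

    // Process nTiles dummy tiles using a fixed pool of nThreads workers.
    // Each task stands in for fetching a tile and submitting it for inference.
    static int processTiles(int nThreads, int nTiles) {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < nTiles; i++) {
            pool.submit(done::incrementAndGet);
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }

    public static void main(String[] args) {
        // 2-4 threads is usually enough to keep the model busy; more mostly
        // adds contention for I/O and memory rather than speed.
        System.out.println(processTiles(4, 16));
    }
}
```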
 ### Tile Size

-Large annotations are broken up into lots of tiles to be processed as doing it all at once may cause memory issues. Usually 512 or 1024 pixels is a good size.
+Large annotations are broken up into many tiles for processing, as handling everything at once may cause memory issues.
+Usually 512 or 1024 pixels is a good size.

 ### Tile Padding

-When the tiles are created, they overlap each other by a certain amount to ensure that cells are not clipped between the boundaries. Tile padding allows you to choose how much to overlap by with small padding being faster but more likely to result in clipping (this may result in many cells with unnatural vertical or horizontal edges). If this occurs, then increase the value and run again.
+When the tiles are created, they overlap each other by several pixels to ensure that cells are not clipped at the tile boundaries.
+Tile padding lets you choose how much the tiles overlap: small padding is faster, but more likely to result in clipping (this may show up as many cells with unnatural vertical or horizontal edges).
+If this occurs, increase the value and run again.
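The speed/clipping trade-off can be seen numerically in a minimal sketch (this is not the extension's actual tiling code, just the usual overlapping-tile arithmetic) of how the tile count depends on the overlap between neighbouring tiles:

```java
public class TileMath {

    // Number of tiles needed along one axis, assuming neighbouring tiles
    // overlap by `overlap` pixels (tile origins advance by a fixed stride).
    static int tilesAlongAxis(int length, int tileSize, int overlap) {
        if (length <= tileSize) {
            return 1;
        }
        int stride = tileSize - overlap;
        return (int) Math.ceil((length - tileSize) / (double) stride) + 1;
    }

    public static void main(String[] args) {
        // A 2048 x 2048 px region with 256 px tiles: more padding means
        // more tiles (slower), but less chance of cells being clipped.
        for (int overlap : new int[] {16, 64, 128}) {
            int n = tilesAlongAxis(2048, 256, overlap);
            System.out.println("overlap " + overlap + " px -> " + n * n + " tiles");
        }
    }
}
```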
 ### Input Channels

-The number of channels be used by the model. Some models have a fixed number and others can take an arbitrary number of inputs. It's possible to use a fluorescence model on a brightfield image by using color deconvolution "stains" as channels; however, results may vary.
+The number of channels to be used by the model. Some models require a fixed number of channels and others can take an arbitrary number of inputs.
+It's possible to use a fluorescence model on a brightfield image by using color deconvolution "stains" as channels, although this is only likely to work well if the stains can be separated very cleanly.
+Even then, results may be disappointing, as these are not the types of images the model was trained on.
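As a rough illustration of what using color deconvolution "stains" as channels means, the sketch below unmixes one RGB pixel into haematoxylin and DAB components using the widely used Ruifrok & Johnston stain vectors. This is not QuPath's implementation; the class, method names, and the exact vector values are assumptions for illustration only.

```java
public class StainUnmix {

    // Ruifrok & Johnston H-DAB stain vectors (R, G, B optical densities).
    static final double[] H   = {0.650, 0.704, 0.286};
    static final double[] DAB = {0.269, 0.568, 0.778};

    // Third "residual" stain: unit vector orthogonal to both (cross product).
    static double[] residual() {
        double[] r = {
            H[1] * DAB[2] - H[2] * DAB[1],
            H[2] * DAB[0] - H[0] * DAB[2],
            H[0] * DAB[1] - H[1] * DAB[0]
        };
        double n = Math.sqrt(r[0] * r[0] + r[1] * r[1] + r[2] * r[2]);
        return new double[] {r[0] / n, r[1] / n, r[2] / n};
    }

    // Beer-Lambert: optical density of one 8-bit channel value.
    static double od(int value) {
        return -Math.log10(Math.max(value, 1) / 255.0);
    }

    static double det(double[][] a) {
        return a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
             - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
             + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]);
    }

    // Solve M^T c = od for the stain concentrations c (Cramer's rule),
    // where the rows of M are the three stain vectors.
    static double[] unmix(int r, int g, int b) {
        double[][] m = {H, DAB, residual()};
        double[] y = {od(r), od(g), od(b)};
        double[][] a = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                a[i][j] = m[j][i];
        double d = det(a);
        double[] c = new double[3];
        for (int col = 0; col < 3; col++) {
            double[][] t = new double[3][3];
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    t[i][j] = (j == col) ? y[i] : a[i][j];
            c[col] = det(t) / d;
        }
        return c;
    }

    public static void main(String[] args) {
        // A bluish pixel dominated by haematoxylin: strong H, near-zero DAB.
        double[] c = unmix(57, 50, 132);
        System.out.printf("H = %.2f, DAB = %.2f%n", c[0], c[1]);
    }
}
```

The unmixed H and DAB values are what would be fed to the model as two "channels" in place of fluorescence channels.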
 ### Output

-This determines whether the model outputs nuclei, cell membranes, or both. Some models allow for both, but others are specific to one or the other.
+This determines whether the model outputs nuclei, cell membranes, or both.
+Some models allow for both, but others are specific to one or the other.

 ### Make measurements

-Following detection, QuPath can add common measurements of shape and intensity for each nuclei/cell; this option controls whether that's done automatically.
+Following detection, QuPath can add common measurements of shape and intensity for each nucleus/cell; this option controls whether that's done automatically.

 ### Random colors

 This will assign a random color to each detection object, which can be useful for distinguishing between neighbouring objects.

 ## Scripting

-If you want to use InstanSeg in a script, you can use either use the [workflows to scripts](https://qupath.readthedocs.io/en/stable/docs/scripting/workflows_to_scripts.html) method if you have already run a model. Alternatively, you can use the following script as a template:
+If you want to use InstanSeg in a script, you can either use the [workflows to scripts](https://qupath.readthedocs.io/en/stable/docs/scripting/workflows_to_scripts.html) method if you have already run a model.
+Alternatively, you can use the following script as a template:

 ```groovy
 qupath.ext.instanseg.core.InstanSeg.builder()
@@ -170,4 +194,4 @@ qupath.ext.instanseg.core.InstanSeg.builder()
 If you use this extension in any published work, we ask you to please cite

 1. At least one of the two InstanSeg preprints above (whichever is most relevant)
-2. The main QuPath paper - details [here](https://qupath.readthedocs.io/en/stable/docs/intro/citing.html)
+2. The main QuPath paper - details [on the citation page for QuPath](https://qupath.readthedocs.io/en/stable/docs/intro/citing.html)
