(instanseg-extension)=

The [InstanSeg QuPath extension](https://github.com/qupath/qupath-extension-instanseg) provides a new and improved way to perform segmentation of both cells and nuclei in QuPath using deep learning.
It uses models pre-trained with the original [InstanSeg code](https://github.com/instanseg/instanseg).
Developed by the [QuPath group](https://institute-genetics-cancer.ed.ac.uk/research/research-groups-a-z/peter-bankhead-research-group) at the University of Edinburgh.
:::{admonition} Cite the paper!
:class: warning
InstanSeg can be dragged and dropped into QuPath while it's running. After this, restart QuPath and you should see the extension within the Extensions menu {menuselection}`Extensions --> InstanSeg`.
The InstanSeg dialog will appear as shown below:
:::{figure} images/instanseg.jpg
:class: shadow-image small-image
The InstanSeg user interface
:::
:::{note}
You'll need internet access to download models and PyTorch, if required.
:::
## Using InstanSeg
### 1. Choose directory to store models
InstanSeg uses deep learning models (neural networks) to detect objects in images.
To do this, it needs to download models from the internet (or it can use locally-stored models, more on that later).
Click on the folder icon to choose a directory either containing models you already have, or just where you would like to save future downloaded ones.
### 2. Select a model
Click the dropdown box to see the available models.
Options with a cloud icon will need to be downloaded, as they are not local to your machine.
To do this, select the model and click the download button to fetch it.
If you have local models in your directory, you can also select these from the dropdown box.
You should select a suitable model for the type of image you are working with.
For example, for brightfield images we would usually use the brightfield_nuclei model, which was trained on images with haematoxylin and DAB staining, but it can also be used on images captured using other stains.
:::{note}
Internet access is needed to download the models.
:::
### 3. Create or select an annotation
Create an annotation or select pre-existing annotations/tiles/TMA cores you wish to run the model on.
It's recommended to keep the annotation small if this is the first time you're running InstanSeg.
This allows you to test the processing speed before running it on a larger region.
This might take some time, depending on your computer's processing speed and whether you're using a GPU.
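
If you prefer to script this step, QuPath's built-in scripting function `createFullImageAnnotation` can create and select an annotation covering the whole image (for a first test, a small hand-drawn annotation is usually better, since processing the full image can be slow):

```groovy
// QuPath Groovy script: create an annotation covering the full image
// and select it, ready for InstanSeg to run on.
// (For large images, prefer drawing a small annotation by hand first.)
createFullImageAnnotation(true)
```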
### 4. Run the model
When you click `Run`, InstanSeg will check for PyTorch.
If this is not on your machine, it will download it for you (this could be > 100 MB, so may take a while).
Once this is done, the model will run and you will see the results in the viewer.
:::{figure} images/instanseg_running.jpg
:class: shadow-image large-image
Running InstanSeg
:::
### 5. View the results
The results will be displayed in the viewer.
The visibility of detections can be turned on or off using the show/hide detection objects button in the toolbar.
Additionally, using the fill/unfill detection objects button and the opacity slider in the toolbar can help distinguish the cells.
:::{figure} images/instanseg_bf_results.jpg
:class: shadow-image large-image
The results of running InstanSeg on a brightfield image.
:::
## Additional Options
InstanSeg has quite a few options to adapt to your device and preferences.
These can be seen below:
:::{figure} images/instanseg_options.jpg
:class: shadow-image small-image
The additional options available in InstanSeg
:::

The options available will depend upon your computer's capabilities (at least as far as they could be discerned by Deep Java Library):
- **CPU**: This is generally the safest - and slowest - option, because it should be supported on all computers.
- **MPS**: This stands for *Metal Performance Shaders*, and should be available on recent Macs - it is the Mac version of GPU acceleration.
- **GPU**: This should appear if you have an NVIDIA GPU, CUDA... and a little bit of luck.
If either MPS or GPU works for you, it should reduce the time required for inference by a *lot*.
However, configuration for GPU can be tricky, as it will depend upon other hardware and software on your computer - CUDA in particular.
For more info, see {doc}`gpu`.
:::{admonition} PyTorch & CUDA versions
:class: tip
The InstanSeg extension uses Deep Java Library to manage a PyTorch installation.
It won't automatically find any existing PyTorch you might have installed: Deep Java Library will download its own.
If you have a compatible GPU and want CUDA support, you'll need to ensure you have an appropriate CUDA version installed *before* PyTorch is downloaded.
:::
### Threads
This is the number of CPU/GPU threads to use to fetch and submit image tiles.
One thread is usually too few, and high numbers may not be beneficial.
We suggest between 2 and 4, which is why this is the default.
### Tile Size
Large annotations are broken up into lots of tiles for processing, since segmenting everything at once may cause memory issues.
Usually 512 or 1024 pixels is a good size.
### Tile Padding
When the tiles are created, they overlap each other by several pixels to ensure that cells are not clipped between the boundaries.
Tile padding allows you to choose how much to overlap by with small padding being faster but more likely to result in clipping; this may result in many cells with unnatural vertical or horizontal edges.
If this occurs, increase the value and run again.
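
To get a feel for the trade-off, consider a rough sketch of overlapping-tile arithmetic (this is not QuPath's actual tiling code, and the numbers are only illustrative):

```groovy
// Illustrative only: how padding affects the number of tiles processed.
int imageWidth = 10240                // hypothetical region width in pixels
int tileSize = 512
int padding = 32                      // overlap on each side of a tile
int stride = tileSize - 2 * padding   // unique pixels contributed per tile
int nTiles = (int) Math.ceil(imageWidth / (double) stride)
println "stride=${stride}px, tiles across=${nTiles}"
// Larger padding -> smaller stride -> more tiles -> slower, but less clipping.
```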
### Input Channels
The number of channels to be used by the model. Some models require a fixed number of channels, while others can take an arbitrary number of inputs.
It's possible to use a fluorescence model on a brightfield image by using color deconvolution "stains" as channels, although this is only likely to work well if the stains can be separated very cleanly.
Even so, results may be disappointing, as these are not the types of images the model was trained on.
### Output
This determines whether the model outputs nuclei, cell membranes, or both.
Some models allow for both, but others are specific to one or the other.
### Make measurements
Following detection, QuPath can add common measurements of shape and intensity for each nucleus/cell; this option controls whether that's done automatically.
### Random colors
This will assign a random color to each detection object, which can be useful for distinguishing between neighbouring objects.
## Scripting
If you want to use InstanSeg in a script, you can use the [workflows to scripts](https://qupath.readthedocs.io/en/stable/docs/scripting/workflows_to_scripts.html) method if you have already run a model.
Alternatively, you can use the following script as a template:
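
The snippet below is only a sketch of what such a template can look like: the builder methods mirror the dialog options described above, but their exact names are assumptions and may differ between versions, so generating a script via the workflows-to-scripts method mentioned above is the most reliable way to get the exact commands.

```groovy
// Sketch of an InstanSeg scripting template - method names are assumptions
// mirroring the dialog options above; generate the exact script from your
// own workflow to be sure.
qupath.ext.instanseg.core.InstanSeg.builder()
    .modelPath("/path/to/downloaded/model")  // hypothetical local model path
    .device("cpu")                           // or "mps" / "gpu" if available
    .nThreads(4)
    .tileDims(512)
    .interTilePadding(32)
    .makeMeasurements(true)
    .randomColors(false)
    .build()
    .detectObjects()
```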