Add IRRCNN model for classification (#1)
* Add irrcnn model
* Create README.md
* Update readme
* Update README.md
* Update README.md

Co-authored-by: leoarc <[email protected]>
Showing 6 changed files with 489 additions and 0 deletions.
38 changes (38 additions, 0 deletions) in Classification Sample Models/ICIAR18_data_IRRCNN_model/README.md
# ICIAR 2018 BACH Challenge

## Dataset

The dataset used is available [here](https://iciar2018-challenge.grand-challenge.org/Dataset/).
The image dataset is composed of high-resolution (2048 x 1536 pixels), uncompressed, and annotated H&E-stained images from the ICIAR 2018 BACH Challenge.

Each image is labeled with one of four classes: i) normal tissue, ii) benign lesion, iii) in situ carcinoma, and iv) invasive carcinoma.

Patch size used: 128 x 128

Image format: RGB
Pre-processing: 128 x 128 patches are cropped from the complete images without any overlap. As there is no separate test dataset, 20% of the extracted patches are kept aside for testing and the rest are used for training. Pixel values are normalized before training. A minimal sketch of this step is shown below.
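The pre-processing code itself is not part of this README, so the following is only an illustrative sketch of the patch extraction, normalization, and 80/20 split described above. The function names, the [0, 1] scaling, and the use of `train_test_split` are assumptions, not taken from this repository.

```python
# Sketch of the patch extraction and split described in the README (assumed names).
import numpy as np
from sklearn.model_selection import train_test_split

PATCH = 128  # patch side in pixels

def extract_patches(image):
    """Crop non-overlapping 128 x 128 patches from a 2048 x 1536 RGB image."""
    h, w, _ = image.shape
    patches = []
    for y in range(0, h - PATCH + 1, PATCH):       # step == patch size, so no overlap
        for x in range(0, w - PATCH + 1, PATCH):
            patches.append(image[y:y + PATCH, x:x + PATCH])
    return np.stack(patches)

def prepare(images, labels):
    """Build the patch dataset, normalize pixel values, and hold out 20% for testing."""
    X, y = [], []
    for img, lbl in zip(images, labels):
        p = extract_patches(img)
        X.append(p)
        y.extend([lbl] * len(p))                   # each patch inherits its image's label
    X = np.concatenate(X).astype("float32") / 255.0  # simple [0, 1] normalization (assumption)
    y = np.array(y)
    return train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
```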
Pixel scale: 0.42 µm x 0.42 µm

Magnification: 200x

Sample images:

![images](images/patches.png)
## Model Used

The model used is the [IRRCNN](https://arxiv.org/pdf/1811.04241.pdf). It adds residual connections to the [IRCNN](https://arxiv.org/abs/1704.07709) model: at each step the block input is passed through 1x1 convolutional filters to equate its dimensions with the feature maps extracted by the IRCNN block, and the two are then added.
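The wiring of that residual connection can be sketched as follows. This is an illustrative Keras snippet, not the code added in this commit; in particular, `ircnn_block` is only a placeholder for the full inception-recurrent block from the paper.

```python
# Illustrative sketch of the IRRCNN residual connection (assumed layer choices).
from tensorflow.keras import layers

def ircnn_block(x, filters):
    # Placeholder for the inception-recurrent block; the real block uses
    # recurrent convolutions and parallel inception branches.
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def irrcnn_unit(inputs, filters):
    features = ircnn_block(inputs, filters)
    # 1x1 convolution on the skip path to equate dimensions before the addition.
    shortcut = layers.Conv2D(filters, 1, padding="same")(inputs)
    return layers.Add()([features, shortcut])
```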
Model architecture:

![model](images/model.png)
### References

Alom, Md Zahangir, Mahmudul Hasan, Chris Yakopcic, Tarek M. Taha, and Vijayan K. Asari. "Improved Inception-Residual Convolutional Neural Network for Object Recognition." arXiv preprint arXiv:1712.09888 (2017).