Commit 484d25b: Merge pull request #12 from HiLab-git/dev (branch "Dev")
taigw authored Feb 26, 2023, 2 parents 0f83eb3 + 211421d
Showing 41 changed files with 463 additions and 159 deletions.
10 changes: 5 additions & 5 deletions README.md
@@ -2,10 +2,10 @@
[PyMIC][PyMIC_link] is a PyTorch-based toolkit for medical image computing with annotation-efficient deep learning. Here we provide a set of examples to show how it can be used for image classification and segmentation tasks. For annotation-efficient learning, we show examples of Semi-Supervised Learning (SSL), Weakly Supervised Learning (WSL) and Noisy Label Learning (NLL). Beginners can follow the examples by just editing the configuration files for model training, testing and evaluation. Advanced users can easily develop their own modules, such as customized networks and loss functions.

## Install PyMIC
-The latest released version of PyMIC can be installed by:
+The released version of PyMIC (v0.4.0) is required for these examples, and it can be installed by:

```bash
-pip install PYMIC==0.3.1.1
+pip install PYMIC==0.4.0
```

To use the latest development version, you can download the source code [here][PyMIC_link], and install it by:
@@ -15,7 +15,7 @@ python setup.py install
```
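Either way, a quick sanity check of the installed version (not part of the original instructions; a minimal sketch using the standard library):

```python
from importlib.metadata import version

# the distribution name matches the pip install command above
print(version("PYMIC"))  # should print 0.4.0 for these examples
```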

## Data
-The datasets for the examples can be downloaded from [Google Drive][google_link] or [Baidu Disk][baidu_link] (extraction code: n07g). Extract the files to `PyMIC_data` after the download.
+The datasets for the examples can be downloaded from [Google Drive][google_link] or [Baidu Disk][baidu_link] (extraction code: xlwg). Extract the files to `PyMIC_data` after downloading.
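After extraction, a quick way to confirm the layout (a minimal sketch; the exact subfolder list depends on the archive):

```python
from pathlib import Path

# the examples reference data via relative paths such as ../../PyMIC_data/JSRT
for p in sorted(Path("PyMIC_data").iterdir()):
    print(p.name)  # expect subfolders such as JSRT used by the examples
```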


## List of Examples
@@ -35,8 +35,8 @@ Currently we provide the following examples in this repository:
|Noisy label learning|[seg_nll/JSRT][nll_jsrt_link]|Comparing different NLL methods for learning from noisy labels|

[PyMIC_link]: https://github.com/HiLab-git/PyMIC
-[google_link]:https://drive.google.com/file/d/1-LrMHsX7ZdBto2iC1WnbFFZ0tDeJQFHy/view?usp=sharing
-[baidu_link]:https://pan.baidu.com/s/15mjc0QqH75xztmc23PPWQQ
+[google_link]:https://drive.google.com/file/d/1eZakSEBr_zfIHFTAc96OFJix8cUBf-KR/view?usp=sharing
+[baidu_link]:https://pan.baidu.com/s/1tN0inIrVYtSxTVRfErD9Bw
[AntBee_link]:classification/AntBee
[CHNCXR_link]:classification/CHNCXR
[JSRT_link]:segmentation/JSRT
4 changes: 2 additions & 2 deletions classification/AntBee/README.md
@@ -26,7 +26,7 @@ update_mode = all
Then start to train by running:

```bash
-pymic_run train config/train_test_ce1.cfg
+pymic_train config/train_test_ce1.cfg
```

2. During training or after training, run `tensorboard --logdir model/resnet18_ce1` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average loss and accuracy during training, as shown in the following images, where the blue and red curves correspond to the training and validation sets, respectively. The iteration with the highest validation accuracy was 400; this may differ depending on the hardware environment. After training, you can find the trained models in `./model/resnet18_ce1`.
@@ -39,7 +39,7 @@ pymic_run train config/train_test_ce1.cfg

```bash
mkdir result
-pymic_run test config/train_test_ce1.cfg
+pymic_test config/train_test_ce1.cfg
```

2. Then run the following command to obtain quantitative evaluation results in terms of accuracy.
3 changes: 2 additions & 1 deletion classification/AntBee/config/train_test_ce1.cfg
@@ -77,5 +77,6 @@ gpus = [0]

# checkpoint mode can be [0-latest, 1-best, 2-specified]
ckpt_mode = 1
-output_csv = result/resnet18_ce1.csv
+output_dir = result
+output_csv = resnet18_ce1.csv
save_probability = True
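With the new `output_dir`/`output_csv` keys, predictions land in `result/resnet18_ce1.csv`. A minimal sketch for inspecting them (the column layout is an assumption, not specified in this diff):

```python
import pandas as pd

# written by pymic_test; ckpt_mode = 1 means the best checkpoint is used
df = pd.read_csv("result/resnet18_ce1.csv")
print(df.head())  # predicted labels, plus class probabilities if save_probability = True
```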
3 changes: 2 additions & 1 deletion classification/AntBee/config/train_test_ce2.cfg
@@ -77,5 +77,6 @@ gpus = [0]

# checkpoint mode can be [0-latest, 1-best, 2-specified]
ckpt_mode = 1
-output_csv = result/resnet18_ce2.csv
+output_dir = result
+output_csv = resnet18_ce2.csv
save_probability = True
4 changes: 2 additions & 2 deletions classification/CHNCXR/README.md
@@ -26,7 +26,7 @@ update_mode = all
Start to train by running:

```bash
-pymic_run train config/net_resnet18.cfg
+pymic_train config/net_resnet18.cfg
```

2. During training or after training, run `tensorboard --logdir model/resnet18` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average loss and accuracy during training, as shown in the following images, where the blue and red curves correspond to the training and validation sets, respectively. The iteration with the highest validation accuracy was 1800; this may differ depending on the hardware environment. After training, you can find the trained models in `./model/resnet18`.
@@ -39,7 +39,7 @@ pymic_run train config/net_resnet18.cfg

```bash
mkdir result
-pymic_run test config/net_resnet18.cfg
+pymic_test config/net_resnet18.cfg
```

2. Then run the following command to obtain quantitative evaluation results in terms of accuracy.
3 changes: 2 additions & 1 deletion classification/CHNCXR/config/net_resnet18.cfg
@@ -75,5 +75,6 @@ gpus = [0]

# checkpoint mode can be [0-latest, 1-best, 2-specified]
ckpt_mode = 1
-output_csv = result/resnet18.csv
+output_dir = result
+output_csv = resnet18.csv
save_probability = True
3 changes: 2 additions & 1 deletion classification/CHNCXR/config/net_vgg16.cfg
@@ -75,5 +75,6 @@ gpus = [0]

# checkpoint mode can be [0-latest, 1-best, 2-specified]
ckpt_mode = 1
-output_csv = result/vgg16.csv
+output_dir = result
+output_csv = vgg16.csv
save_probability = True
55 changes: 38 additions & 17 deletions seg_nll/JSRT/README.md
@@ -39,7 +39,10 @@ The dataset setting is similar to that in the `segmentation/JSRT` demo. See `con

```bash
...
-task_type = seg
+tensor_type = float
+task_type = seg
+supervise_type = fully_sup

root_dir = ../../PyMIC_data/JSRT
train_csv = config/data/jsrt_train_mix.csv
valid_csv = config/data/jsrt_valid.csv
```
@@ -51,8 +54,8 @@ loss_type = CrossEntropyLoss
The following commands are used for training and inference with this method, respectively:

```bash
-pymic_run train config/unet_ce.cfg
-pymic_run test config/unet_ce.cfg
+pymic_train config/unet_ce.cfg
+pymic_test config/unet_ce.cfg
```

### GCE Loss
@@ -67,8 +70,8 @@ loss_type = GeneralizedCELoss
The following commands are used for training and inference with this method, respectively:

```bash
-pymic_run train config/unet_gce.cfg
-pymic_run test config/unet_gce.cfg
+pymic_train config/unet_gce.cfg
+pymic_test config/unet_gce.cfg
```
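For reference, the GCE loss of Zhang & Sabuncu (2018) interpolates between cross-entropy (q -> 0) and MAE (q = 1). A minimal sketch of the formula L_q = (1 - p_y^q) / q (illustrative only, not PyMIC's implementation of `GeneralizedCELoss`):

```python
import torch

def gce_loss(probs, target, q=0.7):
    # p_y: predicted probability of the true class; clamp avoids pow(0, q) issues
    p_y = probs.gather(1, target.unsqueeze(1)).clamp_min(1e-8)
    return ((1.0 - p_y.pow(q)) / q).mean()
```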

### CLSLSR
@@ -81,33 +84,45 @@ python clslsr_get_condience config/unet_ce.cfg
The weight maps will be saved in `$root_dir/slsr_conf`. Then train the new model and do inference by:

```bash
-pymic_run train config/unet_clslsr.cfg
-pymic_run test config/unet_clslsr.cfg
+pymic_train config/unet_clslsr.cfg
+pymic_test config/unet_clslsr.cfg
```

Note that the weight maps for training images are specified in the configuration file `train_csv = config/data/jsrt_train_mix_clslsr.csv`.
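Conceptually, CLSLSR turns the estimated confidence maps into per-pixel weights on the segmentation loss; a minimal sketch of that idea (illustrative only, not PyMIC's code):

```python
import torch.nn.functional as F

def confidence_weighted_ce(logits, target, pixel_weight):
    # logits: N,C,H,W; target: N,H,W; pixel_weight: N,H,W confidence map
    loss = F.cross_entropy(logits, target, reduction="none")
    return (loss * pixel_weight).mean()
```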

### Co-Teaching
-The configuration file for Co-Teaching is `config/unet_cot.cfg`. The corresponding setting is:
+The configuration file for Co-Teaching is `config/unet_cot.cfg`. Note that for the following methods, `supervise_type` should be set to `noisy_label`.

```bash
-nll_method = CoTeaching
+[dataset]
+...
+supervise_type = noisy_label
+...
+
+[noisy_label_learning]
+method_name = CoTeaching
co_teaching_select_ratio = 0.8
rampup_start = 1000
rampup_end = 8000
```

The following commands are used for training and inference with this method, respectively:
```bash
-pymic_nll train config/unet_cot.cfg
-pymic_nll test config/unet_cot.cfg
+pymic_train config/unet_cot.cfg
+pymic_test config/unet_cot.cfg
```
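For intuition: Co-Teaching trains two networks that exchange "small-loss" samples, which are more likely to be correctly labeled. A minimal sketch of the selection step (illustrative; PyMIC's actual implementation and the exact role of `rampup_start`/`rampup_end` may differ):

```python
import torch

def co_teaching_select(loss_a, loss_b, select_ratio=0.8):
    # each network keeps the fraction of lowest-loss samples chosen by its peer
    k = max(1, int(select_ratio * loss_a.numel()))
    idx_for_b = torch.argsort(loss_a.flatten())[:k]  # A selects training samples for B
    idx_for_a = torch.argsort(loss_b.flatten())[:k]  # B selects training samples for A
    return idx_for_a, idx_for_b
```

The ramp-up interval lets the selection take effect gradually rather than discarding samples from the very first iteration.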

### TriNet
The configuration file for TriNet is `config/unet_trinet.cfg`. The corresponding setting is:

```bash
-nll_method = TriNet
+[dataset]
+...
+supervise_type = noisy_label
+...
+
+[noisy_label_learning]
+method_name = TriNet
trinet_select_ratio = 0.9
rampup_start = 1000
rampup_end = 8000
```
@@ -116,15 +131,21 @@ rampup_end = 8000
The following commands are used for training and inference with this method, respectively:

```bash
-pymic_nll train config/unet_trinet.cfg
-pymic_nll test config/unet_trinet.cfg
+pymic_train config/unet_trinet.cfg
+pymic_test config/unet_trinet.cfg
```

### DAST
The configuration file for DAST is `config/unet_dast.cfg`. The corresponding setting is:

```bash
-nll_method = DAST
+[dataset]
+...
+supervise_type = noisy_label
+...
+
+[noisy_label_learning]
+method_name = DAST
dast_dbc_w = 0.1
dast_st_w = 0.1
dast_rank_length = 20
```
@@ -136,8 +157,8 @@ rampup_end = 8000
The commands for training and inference are:

```bash
-pymic_nll train config/unet_dast.cfg
-pymic_run test config/unet_dast.cfg
+pymic_train config/unet_dast.cfg
+pymic_test config/unet_dast.cfg
```

## Evaluation
7 changes: 3 additions & 4 deletions seg_nll/JSRT/config/unet_ce.cfg
@@ -1,8 +1,9 @@
[dataset]
-# tensor type (float or double)
-tensor_type = float
+tensor_type = float
+task_type = seg
+supervise_type = fully_sup

-task_type = seg
root_dir = ../../PyMIC_data/JSRT
train_csv = config/data/jsrt_train_mix.csv
valid_csv = config/data/jsrt_valid.csv
@@ -64,8 +65,6 @@ ReduceLROnPlateau_patience = 2000
ckpt_save_dir = model/unet_ce
ckpt_prefix = unet_ce

-# start iter
-iter_start = 0
iter_max = 10000
iter_valid = 100
iter_save = [10000]
7 changes: 3 additions & 4 deletions seg_nll/JSRT/config/unet_clslsr.cfg
@@ -1,8 +1,9 @@
[dataset]
-# tensor type (float or double)
-tensor_type = float
+tensor_type = float
+task_type = seg
+supervise_type = fully_sup

-task_type = seg
root_dir = ../../PyMIC_data/JSRT
train_csv = config/data/jsrt_train_mix_clslsr.csv
valid_csv = config/data/jsrt_valid.csv
@@ -65,8 +66,6 @@ early_stop_patience = 4000
ckpt_save_dir = model/unet_clslsr
ckpt_prefix = unet_clslsr

-# start iter
-iter_start = 0
iter_max = 10000
iter_valid = 100
iter_save = [10000]
7 changes: 4 additions & 3 deletions seg_nll/JSRT/config/unet_cot.cfg
@@ -1,8 +1,9 @@
[dataset]
-# tensor type (float or double)
-tensor_type = float
+tensor_type = float
+task_type = seg
+supervise_type = noisy_label

-task_type = seg
root_dir = ../../PyMIC_data/JSRT
train_csv = config/data/jsrt_train_mix.csv
valid_csv = config/data/jsrt_valid.csv
@@ -68,7 +69,7 @@ iter_valid = 100
iter_save = [10000]

[noisy_label_learning]
-nll_method = CoTeaching
+method_name = CoTeaching
co_teaching_select_ratio = 0.8
rampup_start = 1000
rampup_end = 8000
7 changes: 4 additions & 3 deletions seg_nll/JSRT/config/unet_dast.cfg
@@ -1,8 +1,9 @@
[dataset]
-# tensor type (float or double)
-tensor_type = float
+tensor_type = float
+task_type = seg
+supervise_type = noisy_label

-task_type = seg
root_dir = ../../PyMIC_data/JSRT
train_csv = config/data/jsrt_train_clean.csv
train_csv_noise = config/data/jsrt_train_noise.csv
@@ -70,7 +71,7 @@ iter_valid = 100
iter_save = [10000]

[noisy_label_learning]
-nll_method = DAST
+method_name = DAST
dast_dbc_w = 0.1
dast_st_w = 0.1
dast_rank_length = 20
7 changes: 3 additions & 4 deletions seg_nll/JSRT/config/unet_gce.cfg
@@ -1,8 +1,9 @@
[dataset]
-# tensor type (float or double)
-tensor_type = float
+tensor_type = float
+task_type = seg
+supervise_type = fully_sup

-task_type = seg
root_dir = ../../PyMIC_data/JSRT
train_csv = config/data/jsrt_train_mix.csv
valid_csv = config/data/jsrt_valid.csv
@@ -64,8 +65,6 @@ ReduceLROnPlateau_patience = 2000
ckpt_save_dir = model/unet_gce
ckpt_prefix = unet_gce

-# start iter
-iter_start = 0
iter_max = 10000
iter_valid = 100
iter_save = [10000]
9 changes: 4 additions & 5 deletions seg_nll/JSRT/config/unet_trinet.cfg
@@ -1,8 +1,9 @@
[dataset]
-# tensor type (float or double)
-tensor_type = float
+tensor_type = float
+task_type = seg
+supervise_type = noisy_label

-task_type = seg
root_dir = ../../PyMIC_data/JSRT
train_csv = config/data/jsrt_train_mix.csv
valid_csv = config/data/jsrt_valid.csv
@@ -64,14 +65,12 @@ ReduceLROnPlateau_patience = 2000
ckpt_save_dir = model/unet_trinet
ckpt_prefix = trinet

-# start iter
-iter_start = 0
iter_max = 10000
iter_valid = 100
iter_save = [10000]

[noisy_label_learning]
-nll_method = TriNet
+method_name = TriNet
trinet_select_ratio = 0.9
rampup_start = 1000
rampup_end = 8000