From 9fb9ed834334e56bf86c22aa4c4a579e9fabf864 Mon Sep 17 00:00:00 2001 From: taigw Date: Thu, 23 Feb 2023 11:27:32 +0800 Subject: [PATCH 1/4] update configuration files add supervise_type --- README.md | 6 +- seg_nll/JSRT/README.md | 55 ++-- seg_nll/JSRT/config/unet_ce.cfg | 7 +- seg_nll/JSRT/config/unet_clslsr.cfg | 7 +- seg_nll/JSRT/config/unet_cot.cfg | 7 +- seg_nll/JSRT/config/unet_dast.cfg | 7 +- seg_nll/JSRT/config/unet_gce.cfg | 7 +- seg_nll/JSRT/config/unet_trinet.cfg | 9 +- seg_ssl/ACDC/README.md | 49 ++-- seg_ssl/ACDC/config/unet2d_baseline.cfg | 3 +- seg_ssl/ACDC/config/unet2d_cct.cfg | 5 +- seg_ssl/ACDC/config/unet2d_cps.cfg | 7 +- seg_ssl/ACDC/config/unet2d_em.cfg | 7 +- seg_ssl/ACDC/config/unet2d_mt.cfg | 10 +- seg_ssl/ACDC/config/unet2d_uamt.cfg | 7 +- seg_ssl/ACDC/config/unet2d_urpc.cfg | 7 +- seg_ssl/AtriaSeg/README.md | 235 ++++++++++++++++++ .../AtriaSeg/config/unet3d_r10_baseline.cfg | 7 +- seg_ssl/AtriaSeg/config/unet3d_r10_cps.cfg | 9 +- seg_ssl/AtriaSeg/config/unet3d_r10_em.cfg | 7 +- seg_ssl/AtriaSeg/config/unet3d_r10_uamt.cfg | 7 +- seg_ssl/AtriaSeg/config/unet3d_r10_urpc.cfg | 7 +- seg_wsl/ACDC/README.md | 63 +++-- seg_wsl/ACDC/config/unet2d_baseline.cfg | 5 +- seg_wsl/ACDC/config/unet2d_dmpls.cfg | 7 +- seg_wsl/ACDC/config/unet2d_em.cfg | 7 +- seg_wsl/ACDC/config/unet2d_gcrf.cfg | 7 +- seg_wsl/ACDC/config/unet2d_mumford.cfg | 7 +- seg_wsl/ACDC/config/unet2d_tv.cfg | 7 +- seg_wsl/ACDC/config/unet2d_ustm.cfg | 7 +- 30 files changed, 441 insertions(+), 141 deletions(-) create mode 100644 seg_ssl/AtriaSeg/README.md diff --git a/README.md b/README.md index 98f7b29..84a7f25 100644 --- a/README.md +++ b/README.md @@ -15,7 +15,7 @@ python setup.py install ``` ## Data -The datasets for the examples can be downloaded from [Google Drive][google_link] or [Baidu Disk][baidu_link] (extraction code: n07g). Extract the files to `PyMIC_data` after the download. 
+The datasets for the examples can be downloaded from [Google Drive][google_link] or [Baidu Disk][baidu_link] (extraction code: xlwg). Extract the files to `PyMIC_data` after downloading. ## List of Examples @@ -35,8 +35,8 @@ Currently we provide the following examples in this repository: |Noisy label learning|[seg_nll/JSRT][nll_jsrt_link]|Comparing different NLL methods for learning from noisy labels| [PyMIC_link]: https://github.com/HiLab-git/PyMIC -[google_link]:https://drive.google.com/file/d/1-LrMHsX7ZdBto2iC1WnbFFZ0tDeJQFHy/view?usp=sharing -[baidu_link]:https://pan.baidu.com/s/15mjc0QqH75xztmc23PPWQQ +[google_link]:https://drive.google.com/file/d/1eZakSEBr_zfIHFTAc96OFJix8cUBf-KR/view?usp=sharing +[baidu_link]:https://pan.baidu.com/s/1tN0inIrVYtSxTVRfErD9Bw [AntBee_link]:classification/AntBee [CHNCXR_link]:classification/CHNCXR [JSRT_link]:segmentation/JSRT diff --git a/seg_nll/JSRT/README.md b/seg_nll/JSRT/README.md index 9cf0c1b..1916625 100644 --- a/seg_nll/JSRT/README.md +++ b/seg_nll/JSRT/README.md @@ -39,7 +39,10 @@ The dataset setting is similar to that in the `segmentation/JSRT` demo. See `con ```bash ... 
-task_type = seg +tensor_type = float +task_type = seg +supervise_type = fully_sup + root_dir = ../../PyMIC_data/JSRT train_csv = config/data/jsrt_train_mix.csv valid_csv = config/data/jsrt_valid.csv @@ -51,8 +54,8 @@ loss_type = CrossEntropyLoss The following commands are used for training and inference with this method, respectively: ```bash -pymic_run train config/unet_ce.cfg -pymic_run test config/unet_ce.cfg +pymic_train config/unet_ce.cfg +pymic_test config/unet_ce.cfg ``` ### GCE Loss @@ -67,8 +70,8 @@ loss_type = GeneralizedCELoss The following commands are used for training and inference with this method, respectively: ```bash -pymic_run train config/unet_gce.cfg -pymic_run test config/unet_gce.cfg +pymic_train config/unet_gce.cfg +pymic_test config/unet_gce.cfg ``` ### CLSLSR @@ -81,17 +84,23 @@ python clslsr_get_condience config/unet_ce.cfg The weight maps will be saved in `$root_dir/slsr_conf`. Then train the new model and do inference by: ```bash -pymic_run train config/unet_clslsr.cfg -pymic_run test config/unet_clslsr.cfg +pymic_train config/unet_clslsr.cfg +pymic_test config/unet_clslsr.cfg ``` Note that the weight maps for training images are specified in the configuration file `train_csv = config/data/jsrt_train_mix_clslsr.csv`. ### Co-Teaching -The configuration file for Co-Teaching is `config/unet2d_cot.cfg`. The corresponding setting is: +The configuration file for Co-Teaching is `config/unet_cot.cfg`. Note that for the following methods, `supervise_type` should be set to `noisy_label`. ```bash -nll_method = CoTeaching +[dataset] +... +supervise_type = noisy_label +...
+ +[noisy_label_learning] +method_name = CoTeaching co_teaching_select_ratio = 0.8 rampup_start = 1000 rampup_end = 8000 @@ -99,15 +108,21 @@ rampup_end = 8000 The following commands are used for training and inference with this method, respectively: ```bash -pymic_nll train config/unet_cot.cfg -pymic_nll test config/unet_cot.cfg +pymic_train config/unet_cot.cfg +pymic_test config/unet_cot.cfg ``` ### TriNet The configuration file for TriNet is `config/unet_trinet.cfg`. The corresponding setting is: ```bash -nll_method = TriNet +[dataset] +... +supervise_type = noisy_label +... + +[noisy_label_learning] +method_name = TriNet trinet_select_ratio = 0.9 rampup_start = 1000 rampup_end = 8000 @@ -116,15 +131,21 @@ rampup_end = 8000 The following commands are used for training and inference with this method, respectively: ```bash -pymic_nll train config/unet_trinet.cfg -pymic_nll test config/unet_trinet.cfg +pymic_train config/unet_trinet.cfg +pymic_test config/unet_trinet.cfg ``` ### DAST The configuration file for DAST is `config/unet_dast.cfg`. The corresponding setting is: ```bash -nll_method = DAST +[dataset] +... +supervise_type = noisy_label +... 
+ +[noisy_label_learning] +method_name = DAST dast_dbc_w = 0.1 dast_st_w = 0.1 dast_rank_length = 20 @@ -136,8 +157,8 @@ rampup_end = 8000 The commands for training and inference are: ```bash -pymic_nll train config/unet_dast.cfg -pymic_run test config/unet_dast.cfg +pymic_train config/unet_dast.cfg +pymic_test config/unet_dast.cfg ``` ## Evaluation diff --git a/seg_nll/JSRT/config/unet_ce.cfg b/seg_nll/JSRT/config/unet_ce.cfg index 785fa25..b56fbe9 100644 --- a/seg_nll/JSRT/config/unet_ce.cfg +++ b/seg_nll/JSRT/config/unet_ce.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = fully_sup -task_type = seg root_dir = ../../PyMIC_data/JSRT train_csv = config/data/jsrt_train_mix.csv valid_csv = config/data/jsrt_valid.csv @@ -64,8 +65,6 @@ ReduceLROnPlateau_patience = 2000 ckpt_save_dir = model/unet_ce ckpt_prefix = unet_ce -# start iter -iter_start = 0 iter_max = 10000 iter_valid = 100 iter_save = [10000] diff --git a/seg_nll/JSRT/config/unet_clslsr.cfg b/seg_nll/JSRT/config/unet_clslsr.cfg index 74a6654..b46748c 100644 --- a/seg_nll/JSRT/config/unet_clslsr.cfg +++ b/seg_nll/JSRT/config/unet_clslsr.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = fully_sup -task_type = seg root_dir = ../../PyMIC_data/JSRT train_csv = config/data/jsrt_train_mix_clslsr.csv valid_csv = config/data/jsrt_valid.csv @@ -65,8 +66,6 @@ early_stop_patience = 4000 ckpt_save_dir = model/unet_clslsr ckpt_prefix = unet_clslsr -# start iter -iter_start = 0 iter_max = 10000 iter_valid = 100 iter_save = [10000] diff --git a/seg_nll/JSRT/config/unet_cot.cfg b/seg_nll/JSRT/config/unet_cot.cfg index ba9fb60..3e7a79b 100644 --- a/seg_nll/JSRT/config/unet_cot.cfg +++ b/seg_nll/JSRT/config/unet_cot.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = 
noisy_label -task_type = seg root_dir = ../../PyMIC_data/JSRT train_csv = config/data/jsrt_train_mix.csv valid_csv = config/data/jsrt_valid.csv @@ -68,7 +69,7 @@ iter_valid = 100 iter_save = [10000] [noisy_label_learning] -nll_method = CoTeaching +method_name = CoTeaching co_teaching_select_ratio = 0.8 rampup_start = 1000 rampup_end = 8000 diff --git a/seg_nll/JSRT/config/unet_dast.cfg b/seg_nll/JSRT/config/unet_dast.cfg index 606262e..f159a42 100644 --- a/seg_nll/JSRT/config/unet_dast.cfg +++ b/seg_nll/JSRT/config/unet_dast.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = noisy_label -task_type = seg root_dir = ../../PyMIC_data/JSRT train_csv = config/data/jsrt_train_clean.csv train_csv_noise = config/data/jsrt_train_noise.csv @@ -70,7 +71,7 @@ iter_valid = 100 iter_save = [10000] [noisy_label_learning] -nll_method = DAST +method_name = DAST dast_dbc_w = 0.1 dast_st_w = 0.1 dast_rank_length = 20 diff --git a/seg_nll/JSRT/config/unet_gce.cfg b/seg_nll/JSRT/config/unet_gce.cfg index 5a1145b..aef1c96 100644 --- a/seg_nll/JSRT/config/unet_gce.cfg +++ b/seg_nll/JSRT/config/unet_gce.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = fully_sup -task_type = seg root_dir = ../../PyMIC_data/JSRT train_csv = config/data/jsrt_train_mix.csv valid_csv = config/data/jsrt_valid.csv @@ -64,8 +65,6 @@ ReduceLROnPlateau_patience = 2000 ckpt_save_dir = model/unet_gce ckpt_prefix = unet_gce -# start iter -iter_start = 0 iter_max = 10000 iter_valid = 100 iter_save = [10000] diff --git a/seg_nll/JSRT/config/unet_trinet.cfg b/seg_nll/JSRT/config/unet_trinet.cfg index 7f0b9eb..9d31d9b 100644 --- a/seg_nll/JSRT/config/unet_trinet.cfg +++ b/seg_nll/JSRT/config/unet_trinet.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = 
noisy_label -task_type = seg root_dir = ../../PyMIC_data/JSRT train_csv = config/data/jsrt_train_mix.csv valid_csv = config/data/jsrt_valid.csv @@ -64,14 +65,12 @@ ReduceLROnPlateau_patience = 2000 ckpt_save_dir = model/unet_trinet ckpt_prefix = trinet -# start iter -iter_start = 0 iter_max = 10000 iter_valid = 100 iter_save = [10000] [noisy_label_learning] -nll_method = TriNet +method_name = TriNet trinet_select_ratio = 0.9 rampup_start = 1000 rampup_end = 8000 diff --git a/seg_ssl/ACDC/README.md b/seg_ssl/ACDC/README.md index cd0d7bf..6fbd1b7 100644 --- a/seg_ssl/ACDC/README.md +++ b/seg_ssl/ACDC/README.md @@ -33,8 +33,9 @@ In this demo, we experiment with five methods: EM, UAMT, URPC, CCT and CPS, and The baseline method uses the 14 annotated cases for training. The batch size is 4, and the patch size is 6x192x192. Therefore, there are effectively 24 2D slices in each batch. See `config/unet2d_baseline.cfg` for details about the configuration. The dataset configuration is: ```bash -tensor_type = float -task_type = seg +tensor_type = float +task_type = seg +supervise_type = fully_sup root_dir = ../../PyMIC_data/ACDC/preprocess/ train_csv = config/data/image_train_r10_lab.csv valid_csv = config/data/image_valid.csv @@ -129,14 +130,19 @@ sliding_window_stride = [6, 192, 192] The following commands are used for training and inference with this method, respectively: ```bash -pymic_run train config/unet2d_baseline.cfg -pymic_run test config/unet2d_baseline.cfg +pymic_train config/unet2d_baseline.cfg +pymic_test config/unet2d_baseline.cfg ``` ### Data configuration for semi-supervised learning For semi-supervised learning, we set the batch size as 8, where 4 are annotated images and the other 4 are unannotated images.
```bash +tensor_type = float +task_type = seg +supervise_type = semi_sup + +root_dir = ../../PyMIC_data/ACDC/preprocess/ train_csv = config/data/image_train_r10_lab.csv train_csv_unlab = config/data/image_train_r10_unlab.csv valid_csv = config/data/image_valid.csv @@ -150,7 +156,8 @@ train_batch_size_unlab = 4 The configuration file for Entropy Minimization is `config/unet2d_em.cfg`. The data configuration has been described above, and the settings for data augmentation, network, optimizer, learning rate scheduler and inference are the same as those in the baseline method. Specific setting for Entropy Minimization is: ```bash -ssl_method = EntropyMinimization +[semi_supervised_learning] +method_name = EntropyMinimization regularize_w = 0.1 rampup_start = 1000 rampup_end = 20000 @@ -160,15 +167,16 @@ where the weight of the regularization loss is 0.1, and rampup is used to gradua The following commands are used for training and inference with this method, respectively: ```bash -pymic_ssl train config/unet2d_em.cfg -pymic_run test config/unet2d_em.cfg +pymic_train config/unet2d_em.cfg +pymic_test config/unet2d_em.cfg ``` ### UAMT The configuration file for UAMT is `config/unet2d_uamt.cfg`.
The corresponding setting is: ```bash -ssl_method = UAMT +[semi_supervised_learning] +method_name = UAMT regularize_w = 0.1 ema_decay = 0.99 rampup_start = 1000 @@ -177,8 +185,8 @@ rampup_end = 20000 The following commands are used for training and inference with this method, respectively: ```bash -pymic_ssl train config/unet2d_uamt.cfg -pymic_run test config/unet2d_uamt.cfg +pymic_train config/unet2d_uamt.cfg +pymic_test config/unet2d_uamt.cfg ``` ### URPC @@ -200,7 +208,8 @@ deep_supervise = True The setting for URPC training is: ```bash -ssl_method = URPC +[semi_supervised_learning] +method_name = URPC regularize_w = 0.1 rampup_start = 1000 rampup_end = 20000 @@ -208,8 +217,8 @@ rampup_end = 20000 The following commands are used for training and inference with this method, respectively: ```bash -pymic_ssl train config/unet2d_urpc.cfg -pymic_run test config/unet2d_urpc.cfg +pymic_train config/unet2d_urpc.cfg +pymic_test config/unet2d_urpc.cfg ``` ### CCT @@ -233,7 +242,8 @@ Uniform_range = 0.3 The setting for CCT training is: ```bash -ssl_method = CCT +[semi_supervised_learning] +method_name = CCT regularize_w = 0.1 rampup_start = 1000 rampup_end = 20000 @@ -243,15 +253,16 @@ unsupervised_loss = MSE The following commands are used for training and inference with this method, respectively: ```bash -pymic_ssl train config/unet2d_cct.cfg -pymic_run test config/unet2d_cct.cfg +pymic_train config/unet2d_cct.cfg +pymic_test config/unet2d_cct.cfg ``` ### CPS The configuration file for CPS is `config/unet2d_cps.cfg`, and the corresponding setting is: ```bash -ssl_method = CPS +[semi_supervised_learning] +method_name = CPS regularize_w = 0.1 rampup_start = 1000 rampup_end = 20000 @@ -260,8 +271,8 @@ rampup_end = 20000 The training and inference commands are: ```bash -pymic_ssl train config/unet2d_cps.cfg -pymic_ssl test config/unet2d_cps.cfg +pymic_train config/unet2d_cps.cfg +pymic_test config/unet2d_cps.cfg ``` ## Evaluation diff --git
a/seg_ssl/ACDC/config/unet2d_baseline.cfg b/seg_ssl/ACDC/config/unet2d_baseline.cfg index 8ed0af5..9af7cc9 100644 --- a/seg_ssl/ACDC/config/unet2d_baseline.cfg +++ b/seg_ssl/ACDC/config/unet2d_baseline.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) tensor_type = float - task_type = seg +supervise_type = fully_sup + root_dir = ../../PyMIC_data/ACDC/preprocess/ train_csv = config/data/image_train_r10_lab.csv valid_csv = config/data/image_valid.csv diff --git a/seg_ssl/ACDC/config/unet2d_cct.cfg b/seg_ssl/ACDC/config/unet2d_cct.cfg index 6946f0b..6e2d69d 100644 --- a/seg_ssl/ACDC/config/unet2d_cct.cfg +++ b/seg_ssl/ACDC/config/unet2d_cct.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) tensor_type = float - task_type = seg +supervise_type = semi_sup + root_dir =../../PyMIC_data/ACDC/preprocess/ train_csv = config/data/image_train_r10_lab.csv train_csv_unlab = config/data/image_train_r10_unlab.csv @@ -92,7 +93,7 @@ iter_valid = 100 iter_save = [1000, 30000] [semi_supervised_learning] -ssl_method = CCT +method_name = CCT regularize_w = 0.1 rampup_start = 1000 rampup_end = 20000 diff --git a/seg_ssl/ACDC/config/unet2d_cps.cfg b/seg_ssl/ACDC/config/unet2d_cps.cfg index f528a77..def2146 100644 --- a/seg_ssl/ACDC/config/unet2d_cps.cfg +++ b/seg_ssl/ACDC/config/unet2d_cps.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = semi_sup -task_type = seg root_dir = ../../PyMIC_data/ACDC/preprocess/ train_csv = config/data/image_train_r10_lab.csv train_csv_unlab = config/data/image_train_r10_unlab.csv @@ -86,7 +87,7 @@ iter_valid = 100 iter_save = [1000, 30000] [semi_supervised_learning] -ssl_method = CPS +method_name = CPS regularize_w = 0.1 rampup_start = 1000 rampup_end = 20000 diff --git a/seg_ssl/ACDC/config/unet2d_em.cfg b/seg_ssl/ACDC/config/unet2d_em.cfg index 3ced5bb..dc742ef 100644 --- a/seg_ssl/ACDC/config/unet2d_em.cfg +++ 
b/seg_ssl/ACDC/config/unet2d_em.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = semi_sup -task_type = seg root_dir = ../../PyMIC_data/ACDC/preprocess/ train_csv = config/data/image_train_r10_lab.csv train_csv_unlab = config/data/image_train_r10_unlab.csv @@ -86,7 +87,7 @@ iter_valid = 100 iter_save = [1000, 30000] [semi_supervised_learning] -ssl_method = EntropyMinimization +method_name = EntropyMinimization regularize_w = 0.1 rampup_start = 1000 rampup_end = 20000 diff --git a/seg_ssl/ACDC/config/unet2d_mt.cfg b/seg_ssl/ACDC/config/unet2d_mt.cfg index 8a769f3..1b1903c 100644 --- a/seg_ssl/ACDC/config/unet2d_mt.cfg +++ b/seg_ssl/ACDC/config/unet2d_mt.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = semi_sup -task_type = seg root_dir = ../../PyMIC_data/ACDC/preprocess/ train_csv = config/data/image_train_r10_lab.csv train_csv_unlab = config/data/image_train_r10_unlab.csv @@ -82,15 +83,12 @@ early_stop_patience = 10000 ckpt_save_dir = model/unet2d_mt -# start iter -# start iter -iter_start = 0 iter_max = 30000 iter_valid = 100 iter_save = [1000, 30000] [semi_supervised_learning] -ssl_method = MeanTeacher +method_name = MeanTeacher regularize_w = 0.1 ema_decay = 0.99 rampup_start = 1000 diff --git a/seg_ssl/ACDC/config/unet2d_uamt.cfg b/seg_ssl/ACDC/config/unet2d_uamt.cfg index ae77f4f..b145675 100644 --- a/seg_ssl/ACDC/config/unet2d_uamt.cfg +++ b/seg_ssl/ACDC/config/unet2d_uamt.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = semi_sup -task_type = seg root_dir = ../../PyMIC_data/ACDC/preprocess/ train_csv = config/data/image_train_r10_lab.csv train_csv_unlab = config/data/image_train_r10_unlab.csv @@ -86,7 +87,7 @@ iter_valid = 100 iter_save = [1000,30000] [semi_supervised_learning] -ssl_method = 
UAMT +method_name = UAMT regularize_w = 0.1 ema_decay = 0.99 rampup_start = 1000 diff --git a/seg_ssl/ACDC/config/unet2d_urpc.cfg b/seg_ssl/ACDC/config/unet2d_urpc.cfg index 4a56b61..1f2a255 100644 --- a/seg_ssl/ACDC/config/unet2d_urpc.cfg +++ b/seg_ssl/ACDC/config/unet2d_urpc.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = semi_sup -task_type = seg root_dir = ../../PyMIC_data/ACDC/preprocess/ train_csv = config/data/image_train_r10_lab.csv train_csv_unlab = config/data/image_train_r10_unlab.csv @@ -88,7 +89,7 @@ iter_valid = 100 iter_save = [1000, 30000] [semi_supervised_learning] -ssl_method = URPC +method_name = URPC regularize_w = 0.1 rampup_start = 1000 rampup_end = 20000 diff --git a/seg_ssl/AtriaSeg/README.md b/seg_ssl/AtriaSeg/README.md new file mode 100644 index 0000000..2b9c710 --- /dev/null +++ b/seg_ssl/AtriaSeg/README.md @@ -0,0 +1,235 @@ +# Semi-Supervised Left Atrial Segmentation using 3D CNNs + +In this example, we show the following four semi-supervised learning methods for segmentation of the left atrium from 3D medical images. All these methods use UNet3D as the backbone network. +|PyMIC Method|Reference|Remarks| +|---|---|---| +|SSLEntropyMinimization|[Grandvalet et al.][em_paper], NeurIPS 2005| Originally proposed for classification| +|SSLUAMT| [Yu et al.][uamt_paper], MICCAI 2019| Uncertainty-aware mean teacher| +|SSLURPC| [Luo et al.][urpc_paper], MedIA 2022| Uncertainty rectified pyramid consistency| +|SSLCPS| [Chen et al.][cps_paper], CVPR 2021| Cross pseudo supervision| +For a full list of available semi-supervised methods, see the [documentation][all_ssl_link].
+ +[em_paper]:https://papers.nips.cc/paper/2004/file/96f2b50b5d3613adf9c27049b2a888c7-Paper.pdf +[mt_paper]:https://arxiv.org/abs/1703.01780 +[uamt_paper]:https://arxiv.org/abs/1907.07034 +[urpc_paper]:https://doi.org/10.1016/j.media.2022.102517 +[cct_paper]:https://arxiv.org/abs/2003.09005 +[cps_paper]:https://arxiv.org/abs/2106.01226 +[all_ssl_link]:https://pymic.readthedocs.io/en/latest/usage.ssl.html + + +## Data +The [Left Atrial][atrial_link] dataset is used in this demo, which is from the 2018 Atrial Segmentation Challenge. The original dataset contains 100 cases for training and 54 cases for testing. As the official testing data are not publicly available, we split the 100 training cases into 72 for training, 8 for validation and 20 for testing. The images are available in `PyMIC_data/AtriaSeg`. We have preprocessed the images by cropping the volume with an output size of 88 x 160 x 256. For semi-supervised learning, we set the annotation ratio to 10%, i.e., 7 images are annotated and the other 65 images are unannotated in the training set. + +[atrial_link]:http://atriaseg2018.cardiacatlas.org/ + +### Baseline Method +The baseline method uses the 7 annotated cases for training. The batch size is 2, and the patch size is 72x96x112. See `config/unet3d_r10_baseline.cfg` for details about the configuration. The dataset configuration is: + +```bash +tensor_type = float +task_type = seg +supervise_type = fully_sup + +root_dir = ../../PyMIC_data/AtriaSeg/TrainingSet_crop/ +train_csv = config/data/image_train_r10_lab.csv +valid_csv = config/data/image_valid.csv +test_csv = config/data/image_test.csv +train_batch_size = 2 +``` + +For data augmentation, we use random crop, random flip, gamma correction and Gaussian noise. The cropped images are also normalized with mean and std.
The details for data transforms are: + +```bash +train_transform = [RandomCrop, RandomFlip, NormalizeWithMeanStd, GammaCorrection, GaussianNoise, LabelToProbability] +valid_transform = [NormalizeWithMeanStd, LabelToProbability] +test_transform = [NormalizeWithMeanStd] + +RandomCrop_output_size = [72, 96, 112] +RandomCrop_foreground_focus = False +RandomCrop_foreground_ratio = None +Randomcrop_mask_label = None + +RandomFlip_flip_depth = True +RandomFlip_flip_height = True +RandomFlip_flip_width = True + +NormalizeWithMeanStd_channels = [0] + +GammaCorrection_channels = [0] +GammaCorrection_gamma_min = 0.7 +GammaCorrection_gamma_max = 1.5 + +GaussianNoise_channels = [0] +GaussianNoise_mean = 0 +GaussianNoise_std = 0.05 +GaussianNoise_probability = 0.5 +``` + +The configuration of 3D UNet is: + +```bash +net_type = UNet3D +class_num = 2 +in_chns = 1 +feature_chns = [32, 64, 128, 256] +dropout = [0.0, 0.0, 0.5, 0.5] +trilinear = True +multiscale_pred = False +``` + +For training, we use a combination of DiceLoss and CrossEntropyLoss, and train the network with the `Adam` optimizer. The maximal number of iterations is 20k, and training is stopped early if there is no performance improvement on the validation set for 5k iterations. The learning rate scheduler is `ReduceLROnPlateau`. The corresponding configuration is: +```bash +gpus = [0] +loss_type = [DiceLoss, CrossEntropyLoss] +loss_weight = [0.5, 0.5] + +optimizer = Adam +learning_rate = 1e-3 +momentum = 0.9 +weight_decay = 1e-5 + +lr_scheduler = ReduceLROnPlateau +lr_gamma = 0.5 +ReduceLROnPlateau_patience = 3000 +early_stop_patience = 5000 + +ckpt_save_dir = model/unet3d_baseline + +iter_max = 20000 +iter_valid = 100 +iter_save = 20000 +``` + +During inference, we send the entire input volume to the network, and do not use any post-processing.
The configuration is: +```bash +# checkpoint mode can be [0-latest, 1-best, 2-specified] +ckpt_mode = 1 +output_dir = result/unet3d_r10_baseline +post_process = None +sliding_window_enable = False +``` + +The following commands are used for training and inference with this method, respectively: + +```bash +pymic_train config/unet3d_r10_baseline.cfg +pymic_test config/unet3d_r10_baseline.cfg +``` + +### Data configuration for semi-supervised learning +For semi-supervised learning, we set the batch size to 4, where 2 are annotated images and the other 2 are unannotated images. + +```bash +tensor_type = float +task_type = seg +supervise_type = semi_sup + +root_dir = ../../PyMIC_data/AtriaSeg/TrainingSet_crop/ +train_csv = config/data/image_train_r10_lab.csv +train_csv_unlab = config/data/image_train_r10_unlab.csv +valid_csv = config/data/image_valid.csv +test_csv = config/data/image_test.csv + +train_batch_size = 2 +train_batch_size_unlab = 2 +``` + +### Entropy Minimization +The configuration file for Entropy Minimization is `config/unet3d_r10_em.cfg`. The data configuration has been described above, and the settings for data augmentation, network, optimizer, learning rate scheduler and inference are the same as those in the baseline method. Specific setting for Entropy Minimization is: + +```bash +[semi_supervised_learning] +method_name = EntropyMinimization +regularize_w = 0.1 +rampup_start = 1000 +rampup_end = 15000 +``` + +where the weight of the regularization loss is 0.1, and rampup is used to gradually increase it from 0 to 0.1. +The following commands are used for training and inference with this method, respectively: + +```bash +pymic_train config/unet3d_r10_em.cfg +pymic_test config/unet3d_r10_em.cfg +``` + +### UAMT +The configuration file for UAMT is `config/unet3d_r10_uamt.cfg`.
The corresponding setting is: + +```bash +[semi_supervised_learning] +method_name = UAMT +regularize_w = 0.1 +ema_decay = 0.99 +rampup_start = 1000 +rampup_end = 15000 +``` + +The following commands are used for training and inference with this method, respectively: +```bash +pymic_train config/unet3d_r10_uamt.cfg +pymic_test config/unet3d_r10_uamt.cfg +``` + +### URPC +The configuration file for URPC is `config/unet3d_r10_urpc.cfg`. This method requires deep supervision and multi-scale pyramid predictions from the network. The network setting is: + +```bash +[network] +net_type = UNet3D +class_num = 2 +in_chns = 1 +feature_chns = [32, 64, 128, 256] +dropout = [0.0, 0.0, 0.5, 0.5] +trilinear = True +multiscale_pred = True +[training] +deep_supervise = True +``` + +The setting for URPC training is: + +```bash +[semi_supervised_learning] +method_name = URPC +regularize_w = 0.1 +rampup_start = 1000 +rampup_end = 15000 +``` + +The following commands are used for training and inference with this method, respectively: +```bash +pymic_train config/unet3d_r10_urpc.cfg +pymic_test config/unet3d_r10_urpc.cfg +``` + +### CPS +The configuration file for CPS is `config/unet3d_r10_cps.cfg`, and the corresponding setting is: + +```bash +[semi_supervised_learning] +method_name = CPS +regularize_w = 0.1 +rampup_start = 1000 +rampup_end = 15000 +``` + +The training and inference commands are: + +```bash +pymic_train config/unet3d_r10_cps.cfg +pymic_test config/unet3d_r10_cps.cfg +``` + +## Evaluation +Use `pymic_eval_seg config/evaluation.cfg` for quantitative evaluation of the segmentation results.
You need to edit `config/evaluation.cfg` first, for example: + +```bash +metric = [dice, assd] +label_list = [1] +organ_name = atrial +ground_truth_folder_root = ../../PyMIC_data/AtriaSeg/TrainingSet_crop/ +segmentation_folder_root = result/unet3d_r10_baseline +evaluation_image_pair = config/data/image_test_gt_seg.csv +``` \ No newline at end of file diff --git a/seg_ssl/AtriaSeg/config/unet3d_r10_baseline.cfg b/seg_ssl/AtriaSeg/config/unet3d_r10_baseline.cfg index 87de27b..0a35e5f 100644 --- a/seg_ssl/AtriaSeg/config/unet3d_r10_baseline.cfg +++ b/seg_ssl/AtriaSeg/config/unet3d_r10_baseline.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = fully_sup -task_type = seg root_dir = ../../PyMIC_data/AtriaSeg/TrainingSet_crop/ train_csv = config/data/image_train_r10_lab.csv valid_csv = config/data/image_valid.csv @@ -82,6 +83,6 @@ gpus = [0] # checkpoint mode can be [0-latest, 1-best, 2-specified] ckpt_mode = 1 -output_dir = result/unet3d_baseline +output_dir = result/unet3d_r10_baseline post_process = None sliding_window_enable = False \ No newline at end of file diff --git a/seg_ssl/AtriaSeg/config/unet3d_r10_cps.cfg b/seg_ssl/AtriaSeg/config/unet3d_r10_cps.cfg index decc066..8c6fba4 100644 --- a/seg_ssl/AtriaSeg/config/unet3d_r10_cps.cfg +++ b/seg_ssl/AtriaSeg/config/unet3d_r10_cps.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = semi_sup -task_type = seg root_dir = ../../PyMIC_data/AtriaSeg/TrainingSet_crop/ train_csv = config/data/image_train_r10_lab.csv train_csv_unlab = config/data/image_train_r10_unlab.csv @@ -74,14 +75,12 @@ early_stop_patience = 5000 ckpt_save_dir = model/unet3d_r10_cps -# start iter -iter_start = 0 iter_max = 20000 iter_valid = 100 iter_save = [1000, 20000] [semi_supervised_learning] -ssl_method = CPS +method_name = CPS regularize_w = 0.1 rampup_start = 1000 rampup_end 
= 15000 diff --git a/seg_ssl/AtriaSeg/config/unet3d_r10_em.cfg b/seg_ssl/AtriaSeg/config/unet3d_r10_em.cfg index 3627431..0ff78c4 100644 --- a/seg_ssl/AtriaSeg/config/unet3d_r10_em.cfg +++ b/seg_ssl/AtriaSeg/config/unet3d_r10_em.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = semi_sup -task_type = seg root_dir = ../../PyMIC_data/AtriaSeg/TrainingSet_crop/ train_csv = config/data/image_train_r10_lab.csv train_csv_unlab = config/data/image_train_r10_unlab.csv @@ -75,7 +76,7 @@ iter_valid = 100 iter_save = [1000, 20000] [semi_supervised_learning] -ssl_method = EntropyMinimization +method_name = EntropyMinimization regularize_w = 0.1 rampup_start = 1000 rampup_end = 15000 diff --git a/seg_ssl/AtriaSeg/config/unet3d_r10_uamt.cfg b/seg_ssl/AtriaSeg/config/unet3d_r10_uamt.cfg index fe4f74b..a50d03f 100644 --- a/seg_ssl/AtriaSeg/config/unet3d_r10_uamt.cfg +++ b/seg_ssl/AtriaSeg/config/unet3d_r10_uamt.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = semi_sup -task_type = seg root_dir = ../../PyMIC_data/AtriaSeg/TrainingSet_crop/ train_csv = config/data/image_train_r10_lab.csv train_csv_unlab = config/data/image_train_r10_unlab.csv @@ -74,7 +75,7 @@ iter_valid = 100 iter_save = [1000,20000] [semi_supervised_learning] -ssl_method = UAMT +method_name = UAMT regularize_w = 0.1 ema_decay = 0.99 rampup_start = 1000 diff --git a/seg_ssl/AtriaSeg/config/unet3d_r10_urpc.cfg b/seg_ssl/AtriaSeg/config/unet3d_r10_urpc.cfg index 6d0d508..fec3f07 100644 --- a/seg_ssl/AtriaSeg/config/unet3d_r10_urpc.cfg +++ b/seg_ssl/AtriaSeg/config/unet3d_r10_urpc.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = semi_sup -task_type = seg root_dir = ../../PyMIC_data/AtriaSeg/TrainingSet_crop/ train_csv = config/data/image_train_r10_lab.csv 
train_csv_unlab = config/data/image_train_r10_unlab.csv @@ -75,7 +76,7 @@ iter_valid = 100 iter_save = [1000, 20000] [semi_supervised_learning] -ssl_method = URPC +method_name = URPC regularize_w = 0.1 rampup_start = 1000 rampup_end = 15000 diff --git a/seg_wsl/ACDC/README.md b/seg_wsl/ACDC/README.md index 9ef40d7..0a1d74d 100644 --- a/seg_wsl/ACDC/README.md +++ b/seg_wsl/ACDC/README.md @@ -32,8 +32,9 @@ In this demo, we experiment with five methods: EM, TV, GatedCRF, USTM and DMPLS, The dataset setting is similar to that in the `seg_ssl/ACDC` demo. Here we use a slightly different setting of data transform: ```bash -tensor_type = float -task_type = seg +tensor_type = float +task_type = seg +supervise_type = fully_sup root_dir = ../../PyMIC_data/ACDC/preprocess train_csv = config/data/image_train.csv valid_csv = config/data/image_valid.csv @@ -42,8 +43,8 @@ train_batch_size = 4 # data transforms train_transform = [Pad, RandomCrop, RandomFlip, NormalizeWithMeanStd, PartialLabelToProbability] -valid_transform = [NormalizeWithMeanStd, Pad, LabelToProbability] -test_transform = [NormalizeWithMeanStd, Pad] +valid_transform = [NormalizeWithMeanStd, Pad, LabelToProbability] +test_transform = [NormalizeWithMeanStd, Pad] Pad_output_size = [4, 224, 224] Pad_ceil_mode = False @@ -116,15 +117,31 @@ sliding_window_stride = [3, 224, 224] The following commands are used for training and inference with this method, respectively: ```bash -pymic_run train config/unet2d_baseline.cfg -pymic_run test config/unet2d_baseline.cfg +pymic_train config/unet2d_baseline.cfg +pymic_test config/unet2d_baseline.cfg +``` + +### Data configuration for other weakly supervised learning +For other weakly supervised learning methods, please set `supervise_type = weak_sup` in the configuration. 
+ +```bash +tensor_type = float +task_type = seg +supervise_type = weak_sup + +root_dir = ../../PyMIC_data/ACDC/preprocess +train_csv = config/data/image_train.csv +valid_csv = config/data/image_valid.csv +test_csv = config/data/image_test.csv +... ``` ### Entropy Minimization The configuration file for Entropy Minimization is `config/unet2d_em.cfg`. The data configuration has been described above, and the settings for data augmentation, network, optimizer, learning rate scheduler and inference are the same as those in the baseline method. Specific setting for Entropy Minimization is: ```bash -wsl_method = EntropyMinimization +[weakly_supervised_learning] +method_name = EntropyMinimization regularize_w = 0.1 rampup_start = 2000 rampup_end = 15000 @@ -135,15 +152,16 @@ where we set the weight of the regularization loss as 0.1, rampup is used to gradua The following commands are used for training and inference with this method, respectively: ```bash -pymic_wsl train config/unet2d_em.cfg -pymic_run test config/unet2d_em.cfg +pymic_train config/unet2d_em.cfg +pymic_test config/unet2d_em.cfg ``` ### TV The configuration file for TV is `config/unet2d_tv.cfg`. The corresponding setting is: ```bash -wsl_method = TotalVariation +[weakly_supervised_learning] +method_name = TotalVariation regularize_w = 0.1 rampup_start = 2000 rampup_end = 15000 @@ -151,15 +169,16 @@ The following commands are used for training and inference with this method, respectively: ```bash -pymic_wsl train config/unet2d_tv.cfg -pymic_run test config/unet2d_tv.cfg +pymic_train config/unet2d_tv.cfg +pymic_test config/unet2d_tv.cfg ``` ### Gated CRF The configuration file for Gated CRF is `config/unet2d_gcrf.cfg`.
The corresponding setting is: ```bash -wsl_method = GatedCRF +[weakly_supervised_learning] +method_name = GatedCRF regularize_w = 0.1 rampup_start = 2000 rampup_end = 15000 @@ -174,15 +193,16 @@ GatedCRFLoss_Radius = 5 The following commands are used for training and inference with this method, respectively: ```bash -pymic_wsl train config/unet2d_gcrf.cfg -pymic_run test config/unet2d_gcrf.cfg +pymic_train config/unet2d_gcrf.cfg +pymic_test config/unet2d_gcrf.cfg ``` ### USTM The configuration file for USTM is `config/unet2d_ustm.cfg`. The corresponding setting is: ```bash -wsl_method = USTM +[weakly_supervised_learning] +method_name = USTM regularize_w = 0.1 rampup_start = 2000 rampup_end = 15000 @@ -191,15 +211,16 @@ rampup_end = 15000 The commands for training and inference are: ```bash -pymic_wsl train config/unet2d_ustm.cfg -pymic_run test config/unet2d_ustm.cfg +pymic_train config/unet2d_ustm.cfg +pymic_test config/unet2d_ustm.cfg ``` ### DMPLS The configuration file for DMPLS is `config/unet2d_dmpls.cfg`, and the corresponding setting is: ```bash -wsl_method = DMPLS +[weakly_supervised_learning] +method_name = DMPLS regularize_w = 0.1 rampup_start = 2000 rampup_end = 15000 @@ -208,8 +229,8 @@ rampup_end = 15000 The training and inference commands are: ```bash -pymic_ssl train config/unet2d_dmpls.cfg -pymic_run test config/unet2d_dmpls.cfg +pymic_train config/unet2d_dmpls.cfg +pymic_test config/unet2d_dmpls.cfg ``` ## Evaluation diff --git a/seg_wsl/ACDC/config/unet2d_baseline.cfg b/seg_wsl/ACDC/config/unet2d_baseline.cfg index ce141ab..c1b3176 100644 --- a/seg_wsl/ACDC/config/unet2d_baseline.cfg +++ b/seg_wsl/ACDC/config/unet2d_baseline.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = fully_sup -task_type = seg root_dir = ../../PyMIC_data/ACDC/preprocess train_csv = config/data/image_train.csv valid_csv = config/data/image_valid.csv diff --git 
a/seg_wsl/ACDC/config/unet2d_dmpls.cfg b/seg_wsl/ACDC/config/unet2d_dmpls.cfg index bc849fa..83e9353 100644 --- a/seg_wsl/ACDC/config/unet2d_dmpls.cfg +++ b/seg_wsl/ACDC/config/unet2d_dmpls.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = weak_sup -task_type = seg root_dir = ../../PyMIC_data/ACDC/preprocess train_csv = config/data/image_train.csv valid_csv = config/data/image_valid.csv @@ -70,7 +71,7 @@ iter_valid = 100 iter_save = [2000, 20000] [weakly_supervised_learning] -wsl_method = DMPLS +method_name = DMPLS regularize_w = 0.1 rampup_start = 2000 rampup_end = 15000 diff --git a/seg_wsl/ACDC/config/unet2d_em.cfg b/seg_wsl/ACDC/config/unet2d_em.cfg index cec4df4..c916e6e 100644 --- a/seg_wsl/ACDC/config/unet2d_em.cfg +++ b/seg_wsl/ACDC/config/unet2d_em.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = weak_sup -task_type = seg root_dir = ../../PyMIC_data/ACDC/preprocess train_csv = config/data/image_train.csv valid_csv = config/data/image_valid.csv @@ -72,7 +73,7 @@ iter_valid = 100 iter_save = [2000, 20000] [weakly_supervised_learning] -wsl_method = EntropyMinimization +method_name = EntropyMinimization regularize_w = 0.1 rampup_start = 2000 rampup_end = 15000 diff --git a/seg_wsl/ACDC/config/unet2d_gcrf.cfg b/seg_wsl/ACDC/config/unet2d_gcrf.cfg index c3e84a9..94f5baa 100644 --- a/seg_wsl/ACDC/config/unet2d_gcrf.cfg +++ b/seg_wsl/ACDC/config/unet2d_gcrf.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = weak_sup -task_type = seg root_dir = ../../PyMIC_data/ACDC/preprocess train_csv = config/data/image_train.csv valid_csv = config/data/image_valid.csv @@ -71,7 +72,7 @@ iter_valid = 100 iter_save = [2000, 20000] [weakly_supervised_learning] -wsl_method = GatedCRF +method_name = GatedCRF regularize_w 
= 0.1 rampup_start = 2000 rampup_end = 15000 diff --git a/seg_wsl/ACDC/config/unet2d_mumford.cfg b/seg_wsl/ACDC/config/unet2d_mumford.cfg index e0b4dfa..c087eb2 100644 --- a/seg_wsl/ACDC/config/unet2d_mumford.cfg +++ b/seg_wsl/ACDC/config/unet2d_mumford.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = weak_sup -task_type = seg root_dir = ../../PyMIC_data/ACDC/preprocess train_csv = config/data/image_train.csv valid_csv = config/data/image_valid.csv @@ -71,7 +72,7 @@ iter_valid = 100 iter_save = [2000, 20000] [weakly_supervised_learning] -wsl_method = MumfordShah +method_name = MumfordShah regularize_w = 0.05 rampup_start = 2000 rampup_end = 15000 diff --git a/seg_wsl/ACDC/config/unet2d_tv.cfg b/seg_wsl/ACDC/config/unet2d_tv.cfg index f1c4ecd..23327c1 100644 --- a/seg_wsl/ACDC/config/unet2d_tv.cfg +++ b/seg_wsl/ACDC/config/unet2d_tv.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = weak_sup -task_type = seg root_dir = ../../PyMIC_data/ACDC/preprocess train_csv = config/data/image_train.csv valid_csv = config/data/image_valid.csv @@ -71,7 +72,7 @@ iter_valid = 100 iter_save = [2000, 20000] [weakly_supervised_learning] -wsl_method = TotalVariation +method_name = TotalVariation regularize_w = 0.1 rampup_start = 2000 rampup_end = 15000 diff --git a/seg_wsl/ACDC/config/unet2d_ustm.cfg b/seg_wsl/ACDC/config/unet2d_ustm.cfg index aa6479d..059c464 100644 --- a/seg_wsl/ACDC/config/unet2d_ustm.cfg +++ b/seg_wsl/ACDC/config/unet2d_ustm.cfg @@ -1,8 +1,9 @@ [dataset] # tensor type (float or double) -tensor_type = float +tensor_type = float +task_type = seg +supervise_type = weak_sup -task_type = seg root_dir = ../../PyMIC_data/ACDC/preprocess train_csv = config/data/image_train.csv valid_csv = config/data/image_valid.csv @@ -71,7 +72,7 @@ iter_valid = 100 iter_save = [2000, 30000] 
[weakly_supervised_learning] -wsl_method = USTM +method_name = USTM regularize_w = 0.1 rampup_start = 2000 rampup_end = 15000 From cdf6f86172f6ce3aa17f6cb75e2a472b5d587b0e Mon Sep 17 00:00:00 2001 From: taigw Date: Thu, 23 Feb 2023 13:35:18 +0800 Subject: [PATCH 2/4] update readme replace pymic_run with pymic_train or pymic_test --- classification/AntBee/README.md | 4 ++-- classification/AntBee/config/train_test_ce1.cfg | 3 ++- classification/AntBee/config/train_test_ce2.cfg | 3 ++- classification/CHNCXR/README.md | 4 ++-- classification/CHNCXR/config/net_resnet18.cfg | 3 ++- classification/CHNCXR/config/net_vgg16.cfg | 3 ++- segmentation/JSRT/README.md | 4 ++-- segmentation/fetal_hc/README.md | 4 ++-- segmentation/prostate/README.md | 4 ++-- 9 files changed, 18 insertions(+), 14 deletions(-) diff --git a/classification/AntBee/README.md b/classification/AntBee/README.md index 9f4c58e..6ce2467 100755 --- a/classification/AntBee/README.md +++ b/classification/AntBee/README.md @@ -26,7 +26,7 @@ update_mode = all Then start to train by running: ```bash -pymic_run train config/train_test_ce1.cfg +pymic_train config/train_test_ce1.cfg ``` 2. During training or after training, run `tensorboard --logdir model/resnet18_ce1` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in the browser and you can observe the average loss and accuracy during the training stage, such as shown in the following images, where blue and red curves are for training set and validation set respectively. The iteration number obtained the highest accuracy on the validation set was 400, and may be different based on the hardware environment. After training, you can find the trained models in `./model/resnet18_ce1`. @@ -39,7 +39,7 @@ pymic_run train config/train_test_ce1.cfg ```bash mkdir result -pymic_run test config/train_test_ce1.cfg +pymic_test config/train_test_ce1.cfg ``` 2. 
Then run the following command to obtain quantitative evaluation results in terms of accuracy. diff --git a/classification/AntBee/config/train_test_ce1.cfg b/classification/AntBee/config/train_test_ce1.cfg index c8cf7b6..92b3790 100755 --- a/classification/AntBee/config/train_test_ce1.cfg +++ b/classification/AntBee/config/train_test_ce1.cfg @@ -77,5 +77,6 @@ gpus = [0] # checkpoint mode can be [0-latest, 1-best, 2-specified] ckpt_mode = 1 -output_csv = result/resnet18_ce1.csv +output_dir = result +output_csv = resnet18_ce1.csv save_probability = True diff --git a/classification/AntBee/config/train_test_ce2.cfg b/classification/AntBee/config/train_test_ce2.cfg index 6bf0827..4444916 100755 --- a/classification/AntBee/config/train_test_ce2.cfg +++ b/classification/AntBee/config/train_test_ce2.cfg @@ -77,5 +77,6 @@ gpus = [0] # checkpoint mode can be [0-latest, 1-best, 2-specified] ckpt_mode = 1 -output_csv = result/resnet18_ce2.csv +output_dir = result +output_csv = restnet18_ce2.csv save_probability = True diff --git a/classification/CHNCXR/README.md b/classification/CHNCXR/README.md index 6f690a5..7612bac 100755 --- a/classification/CHNCXR/README.md +++ b/classification/CHNCXR/README.md @@ -26,7 +26,7 @@ update_mode = all Start to train by running: ```bash -pymic_run train config/net_resnet18.cfg +pymic_train config/net_resnet18.cfg ``` 2. During training or after training, run `tensorboard --logdir model/resnet18` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in the browser and you can observe the average loss and accuracy during the training stage, such as shown in the following images, where blue and red curves are for training set and validation set respectively. The iteration number that obtained the highest accuracy on the validation set was 1800, which may differ depending on the hardware environment. After training, you can find the trained models in `./model/resnet18`.
@@ -39,7 +39,7 @@ pymic_run train config/net_resnet18.cfg ```bash mkdir result -pymic_run test config/net_resnet18.cfg +pymic_test config/net_resnet18.cfg ``` 2. Then run the following command to obtain quantitative evaluation results in terms of accuracy. diff --git a/classification/CHNCXR/config/net_resnet18.cfg b/classification/CHNCXR/config/net_resnet18.cfg index d0551d9..b9554f8 100755 --- a/classification/CHNCXR/config/net_resnet18.cfg +++ b/classification/CHNCXR/config/net_resnet18.cfg @@ -75,5 +75,6 @@ gpus = [0] # checkpoint mode can be [0-latest, 1-best, 2-specified] ckpt_mode = 1 -output_csv = result/resnet18.csv +output_dir = result +output_csv = resnet18.csv save_probability = True diff --git a/classification/CHNCXR/config/net_vgg16.cfg b/classification/CHNCXR/config/net_vgg16.cfg index 9735a56..e435ebf 100755 --- a/classification/CHNCXR/config/net_vgg16.cfg +++ b/classification/CHNCXR/config/net_vgg16.cfg @@ -75,5 +75,6 @@ gpus = [0] # checkpoint mode can be [0-latest, 1-best, 2-specified] ckpt_mode = 1 -output_csv = result/vgg16.csv +output_dir = result +output_csv = vgg16.csv save_probability = True diff --git a/segmentation/JSRT/README.md b/segmentation/JSRT/README.md index de3fb41..c7b7ff9 100755 --- a/segmentation/JSRT/README.md +++ b/segmentation/JSRT/README.md @@ -16,7 +16,7 @@ In this example, we use 2D U-Net to segment the lung from X-Ray images. First we 1. Start to train by running: ```bash -pymic_run train config/unet.cfg +pymic_train config/unet.cfg ``` 2. During training or after training, run `tensorboard --logdir model/unet` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in the browser and you can observe the average Dice score and loss during the training stage, such as shown in the following images, where red and blue curves are for training set and validation set respectively. We can observe some over-fitting on the training set. @@ -28,7 +28,7 @@ pymic_run train config/unet.cfg 1. 
Run the following command to obtain segmentation results of testing images. By default we use the latest checkpoint. You can set `ckpt_mode` to 1 in `config/unet.cfg` to use the best performing checkpoint based on the validation set. ```bash -pymic_run test config/unet.cfg +pymic_test config/unet.cfg ``` 2. Then edit `config/evaluation.cfg` by setting `ground_truth_folder_root` as your `JSRT_root`, and run the following command to obtain quantitative evaluation results in terms of dice. diff --git a/segmentation/fetal_hc/README.md b/segmentation/fetal_hc/README.md index e68fe76..124b702 100755 --- a/segmentation/fetal_hc/README.md +++ b/segmentation/fetal_hc/README.md @@ -15,7 +15,7 @@ In this example, we use 2D U-Net to segment the fetal brain from ultrasound imag 1. Start to train by running: ```bash -pymic_run train config/unet.cfg +pymic_train config/unet.cfg ``` 2. During training or after training, run `tensorboard --logdir model/unet` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in the browser and you can observe the average Dice score and loss during the training stage, such as shown in the following images, where red and blue curves are for training set and validation set respectively. @@ -27,7 +27,7 @@ pymic_run train config/unet.cfg 1. Run the following command to obtain segmentation results of testing images based on the best-performing checkpoint on the validation set. By default we use sliding window inference to get better results. You can also edit the `testing` section of `config/unet.cfg` to use other inference strategies. ```bash -pymic_run test config/unet.cfg +pymic_test config/unet.cfg ``` 2. Use the following command to obtain quantitative evaluation results in terms of Dice. 
diff --git a/segmentation/prostate/README.md index 3aff3a9..1befb3f 100755 --- a/segmentation/prostate/README.md +++ b/segmentation/prostate/README.md @@ -13,7 +13,7 @@ In this example, we use 3D U-Net with deep supervision to segment the prostate f 1. Start to train by running: ```bash -pymic_run train config/unet3d.cfg +pymic_train config/unet3d.cfg ``` Note that we set `multiscale_pred = True`, `deep_supervise = True` and `loss_type = [DiceLoss, CrossEntropyLoss]` in the configuration file. We also use Mixup for data augmentation by setting `mixup_probability=0.5`. 1. Run the following command to obtain segmentation results of testing images. By default we set `ckpt_mode` to 1, which means using the best performing checkpoint based on the validation set. ```bash -pymic_run test config/unet3d.cfg +pymic_test config/unet3d.cfg ``` 2. Run the following command to obtain quantitative evaluation results in terms of Dice. From 4ce68726783db3ca207d7ac2e911cc2007d77b7a Mon Sep 17 00:00:00 2001 From: taigw Date: Thu, 23 Feb 2023 21:44:50 +0800 Subject: [PATCH 3/4] fix typo fix typo --- classification/AntBee/config/train_test_ce2.cfg | 2 +- segmentation/JSRT/config/evaluation.cfg | 2 +- segmentation/JSRT2/net_run_jsrt.py | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/classification/AntBee/config/train_test_ce2.cfg b/classification/AntBee/config/train_test_ce2.cfg index 4444916..6d47c36 100755 --- a/classification/AntBee/config/train_test_ce2.cfg +++ b/classification/AntBee/config/train_test_ce2.cfg @@ -78,5 +78,5 @@ gpus = [0] # checkpoint mode can be [0-latest, 1-best, 2-specified] ckpt_mode = 1 output_dir = result -output_csv = restnet18_ce2.csv +output_csv = resnet18_ce2.csv save_probability = True diff --git a/segmentation/JSRT/config/evaluation.cfg b/segmentation/JSRT/config/evaluation.cfg index dae8519..d4f0030 100755 ---
b/segmentation/JSRT/config/evaluation.cfg @@ -4,7 +4,7 @@ label_list = [255] organ_name = lung ground_truth_folder_root = ../../PyMIC_data/JSRT -segmentation_folder_root = result +segmentation_folder_root = result/unet evaluation_image_pair = ./config/jsrt_test_gt_seg.csv diff --git a/segmentation/JSRT2/net_run_jsrt.py b/segmentation/JSRT2/net_run_jsrt.py index 93e690b..6ebecad 100644 --- a/segmentation/JSRT2/net_run_jsrt.py +++ b/segmentation/JSRT2/net_run_jsrt.py @@ -23,7 +23,7 @@ def main(): config = synchronize_config(config) log_dir = config['training']['ckpt_save_dir'] if(not os.path.exists(log_dir)): - os.mkdir(log_dir) + os.makedirs(log_dir) if sys.version.startswith("3.9"): logging.basicConfig(filename=log_dir+"/log_{0:}.txt".format(stage), level=logging.INFO, format='%(message)s', force=True) # for python 3.9 From 211421dfd62772bf75eaf14e40b7189255ba0118 Mon Sep 17 00:00:00 2001 From: taigw Date: Sun, 26 Feb 2023 16:22:19 +0800 Subject: [PATCH 4/4] Update README.md update version --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 84a7f25..59e7f33 100644 --- a/README.md +++ b/README.md @@ -2,10 +2,10 @@ [PyMIC][PyMIC_link] is a PyTorch-based toolkit for medical image computing with annotation-efficient deep learning. Here we provide a set of examples to show how it can be used for image classification and segmentation tasks. For annotation-efficient learning, we show examples of Semi-Supervised Learning (SSL), Weakly Supervised Learning (WSL) and Noisy Label Learning (NLL), respectively. For beginners, you can follow the examples by just editing the configuration files for model training, testing and evaluation. For advanced users, you can easily develop your own modules, such as customized networks and loss functions.
## Install PyMIC -The latest released version of PyMIC can be installed by: +The released version of PyMIC (v0.4.0) is required for these examples, and it can be installed by: ```bash -pip install PYMIC==0.3.1.1 +pip install PYMIC==0.4.0 ``` To use the latest development version, you can download the source code [here][PyMIC_link], and install it by: