
How to verify the performance of this model #7

Open
kingjames1155 opened this issue May 5, 2022 · 8 comments


@kingjames1155

No description provided.

@kingjames1155
Author

How can I test the performance of the model, for example by computing the NCC score mentioned in the paper?

@kingjames1155
Author

Sorry to bother you again.

How can I visualize the final segmentation result?

@fhaghighi
Owner

> How can I test the performance of the model, for example by computing the NCC score mentioned in the paper?

If you want to evaluate the pre-trained Semantic Genesis on NCC (or any other target task), you first need to load the pre-trained model and then fine-tune it on the target task.
Instructions for fine-tuning Semantic Genesis on any target task can be found under the "Fine-tune Semantic Genesis on your own target task" section of the Pytorch and Keras directories.
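As a minimal sketch of that workflow in Keras (assuming the pre-trained weights are saved as a Keras `.h5` model; the path, optimizer, loss, and epoch count below are illustrative placeholders, not the repository's exact settings):

```python
# Minimal fine-tuning sketch in Keras. The optimizer, loss, and
# hyperparameters are placeholder assumptions for a binary segmentation
# target task, not the repository's exact configuration.
import tensorflow as tf

def finetune_on_target_task(pretrained_path, x_train, y_train,
                            lr=1e-4, epochs=10, batch_size=8):
    # Load the pre-trained model; compile=False lets us attach a
    # target-task loss and optimizer ourselves.
    model = tf.keras.models.load_model(pretrained_path, compile=False)
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    # Fine-tune all layers on the target-task data.
    model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size)
    return model
```

Depending on the target task you may instead freeze the encoder and train only the decoder/head; that is a per-task design choice.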

@fhaghighi
Owner

> How can I visualize the final segmentation result?

After you have fine-tuned the pre-trained model on a segmentation target task, you can feed the images to the target model, obtain the segmentation predictions, and then visualize those predictions.
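For instance, a simple way to inspect a 3D prediction slice by slice (the 0.5 threshold, the red overlay, and the `model.predict` call in the usage comment are illustrative assumptions, not the repository's own utilities):

```python
# Sketch: threshold a sigmoid probability map from the fine-tuned model
# and overlay the resulting mask on a CT slice for visual inspection.
import numpy as np

def binarize(prob_map, threshold=0.5):
    """Binary mask from a probability map (0.5 is an assumed cutoff)."""
    return (prob_map > threshold).astype(np.uint8)

def overlay(ct_slice, mask_slice):
    """RGB image with the predicted mask painted into the red channel.
    `ct_slice` is assumed to be normalized to [0, 1]."""
    rgb = np.stack([ct_slice, ct_slice, ct_slice], axis=-1)
    rgb[mask_slice > 0, 0] = 1.0  # red where the model predicts foreground
    return rgb

# Typical usage (model loading omitted):
#   pred = model.predict(vol[None, ..., None])[0, ..., 0]
#   mask = binarize(pred)
#   import matplotlib.pyplot as plt
#   plt.imshow(overlay(vol[:, :, z], mask[:, :, z])); plt.axis("off"); plt.show()
```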

@kingjames1155
Author


Thank you very much for your answer, and thank you for open-sourcing the code.

@kingjames1155
Author

When I run train_autoencoder.py on LUNA16, the loss plateaus and does not decrease, as shown below.
Have you encountered this problem? Did you preprocess the dataset?

Epoch 00035: val_loss did not improve from 0.00045
Epoch 36/10000
108/108 [==============================] - 99s 912ms/step - loss: 638525.9527 - MAE: 400.1001 - MSE: 638525.8750 - val_loss: 1640486.8750 - val_MAE: 452.1590 - val_MSE: 946725.3750

Epoch 00036: val_loss did not improve from 0.00045
Epoch 37/10000
108/108 [==============================] - 97s 895ms/step - loss: 638529.2542 - MAE: 400.1557 - MSE: 638529.2500 - val_loss: 0.8887 - val_MAE: 452.6182 - val_MSE: 946749.8125

Epoch 00037: val_loss did not improve from 0.00045
Epoch 38/10000
108/108 [==============================] - 99s 916ms/step - loss: 638527.1660 - MAE: 400.1256 - MSE: 638527.2500 - val_loss: 2146421.5000 - val_MAE: 452.1830 - val_MSE: 946729.4375

Epoch 00038: val_loss did not improve from 0.00045
Epoch 39/10000
108/108 [==============================] - 98s 911ms/step - loss: 638526.7634 - MAE: 400.1210 - MSE: 638526.6875 - val_loss: 1051522.8750 - val_MAE: 452.1828 - val_MSE: 946729.1250

Epoch 00039: val_loss did not improve from 0.00045
Epoch 40/10000
108/108 [==============================] - 97s 900ms/step - loss: 638526.7060 - MAE: 400.1107 - MSE: 638526.6875 - val_loss: 0.0037 - val_MAE: 452.1874 - val_MSE: 946730.3125

Epoch 00040: val_loss did not improve from 0.00045
Epoch 41/10000
108/108 [==============================] - 98s 907ms/step - loss: 638526.3897 - MAE: 400.1137 - MSE: 638526.4375 - val_loss: 599951.1250 - val_MAE: 452.1618 - val_MSE: 946725.7500

Epoch 00041: val_loss did not improve from 0.00045
Epoch 42/10000
108/108 [==============================] - 97s 898ms/step - loss: 638526.3547 - MAE: 400.1018 - MSE: 638526.3750 - val_loss: 4.4956e-04 - val_MAE: 452.1776 - val_MSE: 946728.0000

Epoch 00042: val_loss improved from 0.00045 to 0.00045, saving model to Checkpoints/Autoencoder/Unet_autoencoder.h5
Epoch 43/10000
82/108 [=====================>........] - ETA: 22s - loss: 633764.8769 - MAE: 395.5724 - MSE: 633764.8750

@kingjames1155
Author

Whether I convert the LUNA16 data to .npy or use it directly, the loss does not decrease with either train_autoencoder.py or feature_extractor.py.

@fhaghighi
Copy link
Owner

No. The loss should continuously decrease.

For preprocessing, we resample the data to [1.0, 1.0, 1.0] mm spacing, clip it at the 1000 (max) and -1000 (min) thresholds, and normalize it as (x - min(x)) / (max(x) - min(x)).
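Those three steps can be sketched as follows (using `scipy.ndimage.zoom` for resampling; the repository's own preprocessing script may differ in details such as interpolation order):

```python
# Sketch of the described preprocessing: resample to 1 mm isotropic
# spacing, clip HU values to [-1000, 1000], then min-max normalize.
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume, spacing, target=(1.0, 1.0, 1.0),
                  hu_min=-1000.0, hu_max=1000.0):
    # Resample: a voxel of size `spacing` becomes size `target`.
    factors = [s / t for s, t in zip(spacing, target)]
    resampled = zoom(volume, factors, order=1)  # linear interpolation
    # Clip at the -1000 / 1000 HU thresholds.
    clipped = np.clip(resampled, hu_min, hu_max)
    # Min-max normalize to [0, 1]; after clipping, the HU window bounds
    # serve as min(x) and max(x).
    return (clipped - hu_min) / (hu_max - hu_min)
```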
