Training Performance Does Not Improve #8
Comments
Hi AuliaRizky, You may want to try overfitting the model on a single image first. If that works, you know the model is complex enough for your task; if it does not, you may need to make the model more powerful. With kind regards,
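A minimal sketch of this single-image overfit check, assuming a compiled Keras model; `model`, `image`, and `mask` are placeholders for your own objects and are not taken from the repository:

```python
import numpy as np

def overfit_single_sample(model, image, mask, epochs=200):
    """Sanity check: repeatedly fit on one (image, mask) pair.
    If the training loss does not approach zero, the model or the
    data pipeline has a problem before full training is attempted."""
    x = np.expand_dims(image, axis=0)  # add batch dimension -> (1, H, W, C)
    y = np.expand_dims(mask, axis=0)
    history = model.fit(x, y, batch_size=1, epochs=epochs, verbose=0)
    final_loss = history.history["loss"][-1]
    print(f"final training loss on one sample: {final_loss:.4f}")
    return history
```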
Hi @Cheng-Lin-Li, thanks for your help.
Update: Now I understand, I'll try it.
Update 5 February: Even when I fed the ground truth image as the training set, the result showed the same performance, and adding more layers does not seem to change the results. Do you have any recommendation on which part I should check? Also, can you explain what the ConvCapsLayer does? Thank you
Hi AuliaRizky,
Hi @Cheng-Lin-Li, I think this is the problem when training on the whole dataset: the model does not output values with a significant difference between pixels that should be 1 and pixels that should be 0, so the model does not learn well. Do you have any suggestion to make the output show a clear distinction between the background (0 region) and the ROI (1 region)?
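One way to quantify how well the predictions separate background from ROI, sketched with NumPy; `pred` is an assumed probability map from the model and `gt` the binary ground truth, both hypothetical names not from the repository. If the two means below are close together, a class-balanced loss (e.g. weighted cross-entropy or soft Dice) is a common next thing to try:

```python
import numpy as np

def inspect_separation(pred, gt, threshold=0.5):
    """Compare predicted probabilities inside vs. outside the ground-truth ROI."""
    pred = pred.astype(np.float32)
    gt = gt.astype(bool)
    fg_mean = pred[gt].mean() if gt.any() else float("nan")      # mean prob on ROI pixels
    bg_mean = pred[~gt].mean() if (~gt).any() else float("nan")  # mean prob on background
    print(f"mean prediction on ROI pixels:        {fg_mean:.3f}")
    print(f"mean prediction on background pixels: {bg_mean:.3f}")
    print(f"fraction of pixels predicted > {threshold}: {(pred > threshold).mean():.3f}")
    print(f"fraction of ROI pixels in ground truth:  {gt.mean():.3f}")
```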
Is the training result I got reasonable, and should I proceed to the end of the epochs?
It looks like dice_hard does not improve and the optimizer has reached a local minimum.
I use the MRI dataset from ISLES 2017 and have adjusted the data loading process without using K-Fold.
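For reference, a hedged sketch of how a hard Dice score (the `dice_hard` metric mentioned above) is typically computed, thresholding predictions at 0.5; this is an illustration of the general metric, not the repository's exact implementation:

```python
import numpy as np

def dice_hard(pred_prob, gt_mask, threshold=0.5, eps=1e-7):
    """Hard Dice: binarize predictions, then compute 2*|P & G| / (|P| + |G|)."""
    pred = (pred_prob > threshold).astype(np.float32)
    gt = (gt_mask > 0.5).astype(np.float32)
    intersection = (pred * gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)
```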