Testing the problem of loss oscillation #3

Closed
rongyua opened this issue Apr 24, 2024 · 2 comments

Comments

rongyua commented Apr 24, 2024

Hello, thank you very much for your contribution to the paper. However, when I reproduced it, I found that the training loss keeps decreasing, while the test loss drops to its lowest point after a few epochs and then begins to rise and fall again. Also, best_test_c is sometimes not obtained at the epoch where the test loss is lowest. Is this a normal phenomenon? I would be grateful if you could answer.

rongyua (Author) commented Apr 24, 2024

Sorry, there was a problem with the translation of my first question; please ignore it.
Hello, thank you very much for your contribution to this article. However, when I reproduced the code, I found that the training loss keeps going down, while the test loss bottoms out after a few epochs and then starts to rise and fall again. Also, the best_test_c value is sometimes not obtained at the epoch where test_loss is lowest. Is this normal? I would be grateful if you could answer.

FT-ZHOU-ZZZ (Owner) commented
Sorry for the late reply.
The case you mentioned is very common: there is no strictly positive correlation between the overall loss and the c-index.
Please check the implementation of the nll_surv loss and the calculation of the c-index.
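For context, below is a minimal sketch of the two quantities the owner mentions, assuming the common discrete-time formulation of the NLL survival loss and scikit-survival for the c-index. The function name nll_surv follows the comment above, but the argument names (hazards, y_bin, censorship, event_times) and the risk-score choice are illustrative assumptions, not necessarily this repository's exact code.

```python
import torch
from sksurv.metrics import concordance_index_censored


def nll_surv(hazards, y_bin, censorship, eps=1e-7):
    """Discrete-time negative log-likelihood survival loss (common formulation).

    hazards:    (B, T) per-bin conditional hazard probabilities in (0, 1)
    y_bin:      (B,)   index of the time bin containing the event/censoring time
    censorship: (B,)   1 if the sample is censored, 0 if the event was observed
    """
    B = hazards.size(0)
    y = y_bin.view(B, 1).long()
    c = censorship.view(B, 1).float()

    # Survival function S(t) = prod_{k <= t} (1 - h_k), padded so S(-1) = 1.
    S = torch.cumprod(1.0 - hazards, dim=1)
    S_padded = torch.cat([torch.ones_like(c), S], dim=1)

    # Observed event in bin y: maximize S(y - 1) * h(y).
    uncensored = -(1.0 - c) * (
        torch.log(torch.gather(S_padded, 1, y).clamp(min=eps))
        + torch.log(torch.gather(hazards, 1, y).clamp(min=eps))
    )
    # Censored at bin y: still event-free, so maximize S(y).
    censored = -c * torch.log(torch.gather(S_padded, 1, y + 1).clamp(min=eps))
    return (uncensored + censored).mean()


def c_index(hazards, event_times, censorship):
    """Concordance index from the same hazards: purely rank-based."""
    S = torch.cumprod(1.0 - hazards, dim=1)
    # Higher risk score should mean shorter expected survival.
    risk = -S.sum(dim=1).detach().cpu().numpy()
    events = (1 - censorship).bool().cpu().numpy()  # True where the event occurred
    return concordance_index_censored(events, event_times.cpu().numpy(), risk)[0]
```

This also illustrates why the two curves can diverge: the loss rewards well-calibrated per-bin probabilities, whereas the c-index depends only on the relative ordering of risk scores. So the epoch with the lowest test loss need not coincide with the epoch with the best c-index, and tracking the best checkpoint by c-index (or monitoring both) is a reasonable practice.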
