for a good result #5
If you run the code directly and correctly, the result will be slightly worse than mine (2D landmark NME is about 3.30±0.03), since the number of parameters is smaller than in PRN's paper.
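For reference, the 2D NME figures in this thread appear both as a percentage (3.30) and as a fraction (0.033). A minimal sketch of how this metric is typically computed, assuming the bounding-box-size normalization used in the PRN paper rather than this repo's exact evaluation code:

```python
import numpy as np

def nme_2d(pred, gt):
    """Mean 2D landmark error normalized by ground-truth bounding-box size.

    pred, gt: (N, 2) arrays of predicted and ground-truth landmark positions.
    Normalizing by sqrt(w * h) of the landmark bounding box is the common
    convention for AFLW2000-3D-style evaluation; other benchmarks normalize
    by inter-ocular distance instead.
    """
    errors = np.linalg.norm(pred - gt, axis=1)      # per-landmark L2 error
    w, h = np.max(gt, axis=0) - np.min(gt, axis=0)  # landmark bbox size
    return float(errors.mean() / np.sqrt(w * h))
```

Multiply by 100 to get the percentage form (0.033 → 3.3).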
I trained yours for 30 epochs, but only got a 2D NME of 3.8.
How about the NME on the training data?
I didn't test on the training data.
Sorry, I meant the printed 'metrics0' for the training and evaluation datasets.
I'm sorry, I didn't record it.
I use datasets built with the official generation method, not yours. Could this affect the results?
I reloaded the model and got this result: [epoch:0, iter:111/7653, time:51] Loss: 0.1049 Metrics0: 0.0379
I didn't try it. There are some differences between our generation codes, but I don't think they will affect the performance. The metrics0 should reach 0.03 in fewer than 10 epochs. Try using my generation code, and try changing line 96 in torchmodel.py as below, and remember to record metrics0:
OK, I will try it. Thanks a lot.
I did it following all of your code, but the result is still not good. This is the output: [epoch:29, iter:7654/7653, time:1802] Loss: 0.0329 Metrics0: 0.0130 nme2d 0.04015569452557179. Looking forward to your reply.
The result on the training set is good, better than mine, but the evaluation result is bad.
I've updated it. Sorry for the trouble.
Thanks.
I'm sorry to bother you again. I used your augmentation code and trained for about 45 epochs, but only got nme2d 0.03363224604973234.
I trained it myself again and got nme3d 0.0445 in 30 epochs with self.scheduler = optim.lr_scheduler.StepLR(self.optimizer, step_size=5, gamma=0.5) and the learning rate set to 2.5e-5. I used this scheduler a long time ago; it takes more epochs.
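For context, here is how that scheduler line would sit in a typical trainer. Only the StepLR line and the 2.5e-5 starting rate come from the comment above; the Adam optimizer and the class layout are assumptions for illustration:

```python
import torch.optim as optim

class Trainer:
    def __init__(self, model):
        # Starting learning rate quoted above: 2.5e-5.
        # Adam is an assumption; the repo's optimizer choice is not shown here.
        self.optimizer = optim.Adam(model.parameters(), lr=2.5e-5)
        # Quoted scheduler: halve the learning rate every 5 epochs.
        self.scheduler = optim.lr_scheduler.StepLR(
            self.optimizer, step_size=5, gamma=0.5)

    def end_epoch(self):
        # Step once per epoch so step_size=5 means "every 5 epochs".
        self.scheduler.step()
```

With gamma=0.5 and step_size=5, the rate after 30 epochs is 2.5e-5 × 0.5^6 ≈ 3.9e-7, which is consistent with the remark that this schedule needs more epochs.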
To get nme2d=0.031, how many epochs did you train?
I suggest you adjust the learning rate by increasing or decreasing it tenfold (as in the sketch below) before changing the scheduler, to see whether the result improves.
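A minimal sketch of that tenfold sweep, taking the 2.5e-5 rate from the earlier comment as the base; build_trainer, train_one_epoch, and eval_nme2d are hypothetical hooks, not functions from this repo:

```python
BASE_LR = 2.5e-5  # starting rate quoted earlier in the thread

def sweep(build_trainer, epochs=10):
    """Try the base rate and its tenfold neighbors; keep the best one."""
    results = {}
    for lr in (BASE_LR / 10, BASE_LR, BASE_LR * 10):
        trainer = build_trainer(lr)         # fresh model + optimizer per run
        for _ in range(epochs):
            trainer.train_one_epoch()       # hypothetical training step
        results[lr] = trainer.eval_nme2d()  # hypothetical evaluation hook
    return min(results, key=results.get)    # lr with the lowest NME
```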
I don't remember, but 45 epochs is enough.
I first trained 30 epochs with lr=2e-4 and got nme2d 0.345, then decreased the lr to 2e-5, retrained for 45 epochs, and got nme2d 0.336.
It's strange... Could you use an even smaller learning rate (lr=8e-6) to train it from the beginning? My intuition is that it will help.
OK, I will try it.
Excuse me again: if I use RandomColor in your augmentation code, the NME stays around 0.04 and won't drop to 0.03. Is this normal?
And if I use a smaller learning rate (lr=8e-6) from the beginning, the NME drops more slowly than before (lr=1e-4).
I don't use the RandomColor function in practice; forget it.
I didn't get a good result with a smaller learning rate or with optim.lr_scheduler.StepLR. The best result is nme2d 0.336.
Hi, would you mind telling me how your model was trained? I couldn't reproduce your model's performance with this code.