
How to evaluate my own trained model? #41

Open
playerkk opened this issue Nov 7, 2017 · 1 comment

Comments


playerkk commented Nov 7, 2017

Hi,

I've trained a MultiNet2 (segmentation and detection) model. How can I evaluate it on the validation set?

There is no evaluate.py in the root folder. I tried to run the evaluate.py in the submodule folder but didn't succeed. Perhaps I missed something. I'd appreciate it if you could give me some instructions. Thanks.
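(Editor's note: until the repo's own evaluation entry point is clarified, one offline workaround is to dump the segmentation branch's predictions for the validation images and score them yourself. The sketch below is hypothetical, not MultiNet's actual API: the `val.txt` list format with one `<prediction.png> <groundtruth.png>` pair per line, the file paths, and the two-class setup are all assumptions.)

```python
# Hypothetical offline scorer: mean IoU of saved prediction masks against
# ground-truth masks. Assumes both are PNGs of per-pixel class ids.
import numpy as np
from PIL import Image

def iou(pred, gt, cls):
    """Intersection-over-union for a single class id."""
    p = pred == cls
    g = gt == cls
    union = np.logical_or(p, g).sum()
    inter = np.logical_and(p, g).sum()
    return inter / union if union else float("nan")

def evaluate(val_list, num_classes=2):
    """val_list: text file, one '<prediction.png> <groundtruth.png>' per line."""
    per_class = [[] for _ in range(num_classes)]
    with open(val_list) as f:
        for line in f:
            pred_path, gt_path = line.split()
            pred = np.array(Image.open(pred_path))
            gt = np.array(Image.open(gt_path))
            for c in range(num_classes):
                per_class[c].append(iou(pred, gt, c))
    for c, scores in enumerate(per_class):
        print("class %d mean IoU: %.4f" % (c, np.nanmean(scores)))

if __name__ == "__main__":
    evaluate("val.txt")  # assumed list file, see note above
```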


playerkk commented Nov 7, 2017

Actually, I got evaluation results in the log file when training the model. But there are "raw" and "smooth" results. Which one shall I use? Thanks.
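(Editor's note: in TensorVision-style training loops, "smooth" figures are typically a running average of the "raw" per-evaluation scores, which damps checkpoint-to-checkpoint noise; whether this repo does exactly that is an assumption, so check its logging code. A minimal sketch of the idea, with the decay value also assumed:)

```python
# Hypothetical sketch of how "smooth" metrics are commonly derived from
# "raw" ones: an exponential moving average over evaluation steps.
def smooth(raw_values, decay=0.9):
    """Return the exponentially smoothed series of raw eval scores."""
    smoothed, avg = [], None
    for v in raw_values:
        avg = v if avg is None else decay * avg + (1 - decay) * v
        smoothed.append(avg)
    return smoothed

# Example: noisy raw IoU across evaluations vs. its smoothed trace.
raw = [0.70, 0.74, 0.69, 0.78, 0.75]
print(smooth(raw))  # [0.70, 0.704, 0.7026, 0.71034, ...]
```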
