Question about accuracy evaluation? #23
Comments
It's the prediction accuracy between real and generated images. We use the pre-trained AlexNet model to obtain annotations for the real images and class predictions for the generated images. If the model predicts the real and synthesized image pair to be of the same class, it's a hit; otherwise it's a miss.
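A minimal sketch of that hit/miss evaluation, assuming a torchvision AlexNet pre-trained on ImageNet; the repository's actual classifier, class labels, and preprocessing may differ:

```python
# Sketch only: assumes torchvision >= 0.13 and ImageNet preprocessing.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_class(path):
    """Top-1 class index for a single image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).argmax(dim=1).item()

def is_hit(real_path, fake_path):
    """Hit if the real image and its synthesized counterpart get the same label."""
    return predict_class(real_path) == predict_class(fake_path)
```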
All the evaluation metrics are quite slow. Do you have any ideas on how to speed them up?
Sadly, they are slow. A possible way to make them faster is to compute features in batches; currently the code works on a single image at a time.
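A hedged sketch of that batching idea, reusing the (assumed) `model` and `preprocess` objects from the snippet above, with predictions computed chunk by chunk instead of per image:

```python
import torch
from PIL import Image

def predict_classes_batched(paths, model, preprocess, batch_size=64):
    """Top-1 predictions for a list of image paths, computed in batches."""
    preds = []
    for i in range(0, len(paths), batch_size):
        batch = torch.stack([
            preprocess(Image.open(p).convert("RGB"))
            for p in paths[i:i + batch_size]
        ])
        with torch.no_grad():
            preds.extend(model(batch).argmax(dim=1).tolist())
    return preds
```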
I also noticed that you did not use GPUs. Is there any possibility of using a GPU during evaluation?
Not sure; that part of the code was borrowed from other people, and I didn't spend time making it quicker.
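An untested sketch, assuming the evaluation were ported to PyTorch: using a GPU is then mostly a matter of moving the model and each batch to the device. Names here mirror the hypothetical helpers above, not the repository's code:

```python
import torch
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)  # `model` from the earlier sketch

def predict_classes_batched_gpu(paths, model, preprocess, batch_size=64):
    """Same batched loop as above, with tensors placed on the GPU."""
    preds = []
    for i in range(0, len(paths), batch_size):
        batch = torch.stack([
            preprocess(Image.open(p).convert("RGB"))
            for p in paths[i:i + batch_size]
        ]).to(device)
        with torch.no_grad():
            preds.extend(model(batch).argmax(dim=1).cpu().tolist())
    return preds
```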
@kregmi regarding the result on the images, am I correct?
For compute_accuracies.py, I just want to make sure: is the
final result = total matches found for synthesized images / total images considered?
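If that reading is right, the final number reduces to the fraction of (real, synthesized) pairs whose predicted classes agree. A small illustrative helper, assuming equal-length lists of class indices from the earlier sketches:

```python
def pair_accuracy(real_preds, fake_preds):
    """Fraction of real/synthesized pairs assigned the same class."""
    hits = sum(r == f for r, f in zip(real_preds, fake_preds))
    return hits / len(real_preds)
```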