
No speedup and memory saving on CIFAR10 #13

Open

guangzhili opened this issue Jan 31, 2018 · 8 comments


@guangzhili

I have played around with CIFAR10 and also done a bit of benchmarking. It seems BinOp has no noticeable effect on model size or inference speed compared to the NIN model without BinOp. I have tested on both CPU and GPU. I thought the saved model nin.pth.tar would shrink and inference would speed up significantly. Am I missing something? Does anyone else have this issue? Thanks.
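For reference, a minimal sketch of the kind of comparison I mean (the checkpoint path and the single-image timing loop below are placeholders, not taken from this repo's scripts):

```python
# Rough sketch of the comparison (checkpoint path and input shape are
# assumptions for illustration, not this repo's actual scripts).
import os
import time
import torch

def checkpoint_size_mb(path):
    # On-disk size of the saved checkpoint, in MB.
    return os.path.getsize(path) / (1024 ** 2)

def avg_inference_time(model, runs=100):
    # Average forward-pass latency on a single CIFAR10-sized input.
    model.eval()
    x = torch.randn(1, 3, 32, 32)
    with torch.no_grad():
        start = time.time()
        for _ in range(runs):
            model(x)
    return (time.time() - start) / runs
```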

fenollp commented Jan 31, 2018

It’s because the BinOp weights are still stored as 32- or 16-bit floats. There is no bit-packing optimization here. Sad.
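A minimal sketch of what that means in practice (assuming the weights are binarized with torch.sign, as in XNOR-style methods): the ±1 values still occupy float32 storage, so the footprint does not shrink.

```python
# Illustrative only: binarizing with sign() keeps float32 storage, so the
# tensor's memory footprint does not change.
import torch

w = torch.randn(192, 160, 3, 3)           # a full-precision conv weight
w_bin = torch.sign(w)                     # values are now +1/-1 ...
print(w.element_size(), w_bin.element_size())   # ... but both use 4 bytes/element
print(w_bin.numel() * w_bin.element_size())     # same size in bytes as w
```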

@guangzhili

@fenollp Thanks for your reply. What would you suggest if I wanted to achieve the binary optimization? Modify the PyTorch core?

@zhuyinheng

@guangzhili There is a useful discussion here.
A GPU kernel based on TensorFlow can be found here.
Unfortunately, the acceleration is NOT significant. As for compression, I think it's easy to implement.
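For the compression part, a rough sketch of the idea (the helper names are made up, not code from this repo): pack the ±1 weights into bits before saving and unpack them on load, which is roughly a 32x reduction over float32.

```python
# Hedged sketch of the compression idea: pack +/-1 weights into bits when
# saving, unpack when loading (helper names are hypothetical).
import numpy as np
import torch

def pack_binary(w):
    bits = (torch.sign(w).flatten() > 0).numpy().astype(np.uint8)
    return np.packbits(bits), w.shape            # ~32x smaller than float32

def unpack_binary(packed, shape):
    n = int(np.prod(shape))
    bits = np.unpackbits(packed)[:n]
    w = torch.from_numpy(bits.astype(np.float32)) * 2 - 1   # back to +/-1
    return w.reshape(shape)
```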

jiecaoyu commented Feb 2, 2018

@guangzhili An XNOR operation kernel is required to get any acceleration.

I am implementing the kernels as part of my research project. I will release the code after I get the paper published somewhere.
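For anyone curious what such a kernel computes, a toy illustration (not the kernel mentioned above, just the arithmetic idea): a dot product between ±1 vectors packed into bit masks becomes an XNOR followed by a popcount.

```python
# Toy illustration of the XNOR/popcount trick for a binary dot product
# (NOT the kernel discussed above, just the arithmetic identity).
def binary_dot(a_bits, b_bits, n):
    # a_bits, b_bits: n values in {-1, +1} packed into a Python int,
    # where a set bit means the corresponding value is +1.
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)   # 1 where the signs agree
    matches = bin(xnor).count("1")               # popcount
    return 2 * matches - n                       # agreements minus disagreements

# a = [+1, -1, +1], b = [+1, +1, -1]  ->  dot = 1 - 1 - 1 = -1
print(binary_dot(0b101, 0b110, 3))               # prints -1
```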

@guangzhili

Thank you @cow8, that's very helpful!

@guangzhili

@jiecaoyu Sounds great. Good luck with the paper.


mjczyt commented Feb 28, 2018

@guangzhili Looking forward to seeing your paper. Good luck!


Paul0629 commented Jun 28, 2018

My NIN model without BinOp is 946.7 KB, while the model with BinOp is 3.9 MB. That's weird. @jiecaoyu
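One way to see where the extra megabytes come from is to list what each checkpoint file actually stores (the path below is a placeholder):

```python
# Placeholder path; lists every tensor stored in a checkpoint and its size,
# which makes it easy to compare the two saved models.
import torch

ckpt = torch.load("nin.pth.tar", map_location="cpu")
state = ckpt.get("state_dict", ckpt)     # some checkpoints wrap the state_dict
for name, t in state.items():
    if torch.is_tensor(t):
        print(name, tuple(t.shape), t.numel() * t.element_size(), "bytes")
```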
