No speedup and memory saving on CIFAR10 #13
Comments
It’s because BinOp still stores the weights as 32 or 16 bits. There is no bit-packing optimization here. Sad.
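For anyone wondering what "packing" means here, a rough NumPy sketch (not code from this repo; the tensor shape is just an example) of storing only the sign bit of each weight, which is where the memory saving would have to come from:

```python
# Hypothetical sketch, not part of XNOR-Net-PyTorch: pack the signs of a
# float32 weight tensor into bytes (8 weights per byte). Without a step
# like this, the "binarized" weights are still stored as float32, so the
# checkpoint stays the same size.
import numpy as np

def pack_signs(w: np.ndarray) -> np.ndarray:
    """Pack the sign bits of a float array, 1 bit per weight."""
    bits = (w.ravel() >= 0).astype(np.uint8)   # 1 where the weight is +1, 0 where it is -1
    return np.packbits(bits)                   # 8 weights per byte

w = np.random.randn(192, 160, 3, 3).astype(np.float32)   # an example conv-sized weight tensor
packed = pack_signs(w)
print(w.nbytes / 1024, "KB as float32")        # 1080.0 KB
print(packed.nbytes / 1024, "KB packed")       # 33.75 KB
```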
@fenollp Thanks for your reply. What would you suggest if I wanted to achieve the binary optimization? Modify the PyTorch core?
@guangzhili Useful discussion here.
@guangzhili An XNOR operation kernel is required to get an actual speedup. I am implementing such kernels as part of my research project. I will release the code after I get the paper published somewhere.
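To make the point above concrete, here is a toy Python sketch of the XNOR + popcount identity such a kernel relies on (purely illustrative; this is not the kernel being described): a ±1 dot product over bit-packed words reduces to one XNOR and one popcount.

```python
# Toy illustration (not the actual kernel): compute a +/-1 dot product
# with XNOR + popcount on bit-packed operands.
import numpy as np

N = 64                                    # weights per 64-bit word
a = np.random.choice([-1, 1], N)          # binarized activations
w = np.random.choice([-1, 1], N)          # binarized weights

# Encode +1 -> bit 1, -1 -> bit 0 and pack each vector into one integer.
pack = lambda v: int("".join("1" if x > 0 else "0" for x in v), 2)
A, W = pack(a), pack(w)

# Matching bits contribute +1, mismatching bits -1, so
# dot = 2 * popcount(XNOR(A, W)) - N.
xnor = ~(A ^ W) & ((1 << N) - 1)
dot_fast = 2 * bin(xnor).count("1") - N
assert dot_fast == int(np.dot(a, w))      # agrees with the ordinary dot product
```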
Thank you, @cow8. That's very helpful!
@jiecaoyu Sounds great. Good luck with the paper.
@guangzhili Looking forward to seeing your paper. Good luck!
@jiecaoyu My NIN model without BinOp is 946.7k, while the model with BinOp is 3.9M. That's weird.
I have played around with CIFAR10 and also done a bit of benchmarking. BinOp does not seem to have any noticeable effect on model size or inference speed compared to the NIN model without BinOp; I have tested on both CPU and GPU. I thought the saved model nin.pth.tar would shrink and that inference would speed up significantly. Am I missing something? Has anyone else run into this? Thanks.
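A quick way to confirm the explanation given above: load the checkpoint and print the dtype of every tensor. The path and the `state_dict` key below are assumptions based on this repo's CIFAR-10 example, so adjust them to wherever your `nin.pth.tar` lives. Every tensor is still float32, which is why the file does not shrink.

```python
# Sketch for inspecting a saved checkpoint; the path and key layout are
# assumptions and may need adjusting.
import torch

ckpt = torch.load("nin.pth.tar", map_location="cpu")
state = ckpt.get("state_dict", ckpt)          # handle both wrapped and bare state dicts

total_bytes = 0
for name, t in state.items():
    if torch.is_tensor(t):
        total_bytes += t.numel() * t.element_size()
        print(f"{name:45s} {str(t.dtype):12s} {tuple(t.shape)}")
print(f"total: {total_bytes / 1e6:.1f} MB")   # all float32 -> no memory saving on disk
```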