Hi, thank you for your PyTorch version of BinaryNet.
I am wondering whether there is any reduction in memory usage. I call the function Quantize() in the file binary_modules so that each parameter can be compacted to 8 bits. However, the CPU still allocates 32 bits to each float number, so there is no memory reduction. Do you have any ideas?
Looking forward to your reply.
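For context, the behavior described above is expected: quantizing the *values* of a tensor does not change its storage dtype, so a float32 tensor of ±1 values still costs 4 bytes per element. Memory only shrinks when the container dtype is actually narrowed (or the bits are packed). A minimal sketch of the distinction, using NumPy for illustration (this is not the repo's Quantize() code; the array shapes are arbitrary):

```python
import numpy as np

# Binarized weights kept in a float32 array: values are only +1/-1,
# but each element still occupies 4 bytes, so quantizing the values
# alone gives no memory reduction.
w_float = np.sign(np.random.randn(1024, 1024)).astype(np.float32)
print(w_float.nbytes)  # 4 bytes per element -> 4 MiB total

# Explicitly casting to int8 narrows the storage dtype (4x smaller).
w_int8 = w_float.astype(np.int8)
print(w_int8.nbytes)   # 1 byte per element -> 1 MiB total

# True 1-bit storage additionally requires packing 8 signs per byte,
# e.g. with np.packbits (8x smaller again).
w_packed = np.packbits(w_float > 0)
print(w_packed.nbytes)  # 1 bit per element -> 128 KiB total
```

The same principle applies in PyTorch: a tensor's footprint is `numel() * element_size()` regardless of which values it holds, so a cast (or bit-packing) is needed to realize the savings.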