
Quantize function tensor.clamp_() #26

Open
HexuanLiu opened this issue Jul 28, 2020 · 0 comments
In the Quantize function (binarized_modules.py, line 57), I don't quite understand why the range for tensor.clamp_() is -128 to 128 when quantizing with numBits=8. Since all outputs from the previous layers pass through a Hardtanh function, shouldn't they already be in the range [-1, 1]? Also, how are values in the range [-128, 128] actually represented in 8 bits? For example, if an input value is 127.125 and numBits=8, then tensor.mul(2**(numBits-1)).round().div(2**(numBits-1)) gives back 127.1250. How can that be stored in 8 bits?
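To make the arithmetic in the question concrete, here is a minimal plain-Python sketch of the round-to-grid step being described (a hypothetical reimplementation for illustration, not the repository's exact code; the function name `quantize` and the clamp bounds are taken from the question):

```python
def quantize(x, num_bits=8):
    # Hypothetical sketch of the step discussed in the issue:
    # clamp to [-128, 128], then round to a grid of step 1 / 2**(num_bits-1).
    scale = 2 ** (num_bits - 1)          # 128 when num_bits = 8
    x = max(-128.0, min(128.0, x))       # mirrors tensor.clamp_(-128, 128)
    return round(x * scale) / scale      # mirrors mul(...).round().div(...)

# 127.125 * 128 = 16272.0 is already an integer, so rounding is a no-op
# and the value 127.125 comes back unchanged -- representing the integer
# 16272 needs 15 bits, not 8, which is exactly the asker's point.
print(quantize(127.125))   # 127.125
```

This illustrates why the question arises: if inputs were confined to [-1, 1] (as a preceding Hardtanh would guarantee), the rounded integers `x * 128` would fit in a signed 8-bit range, but with a clamp at ±128 they generally do not.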
