
Activations in this model are ternary {-1,0,1}, not binary {-1,1} #23

Open
mrcslws opened this issue Nov 3, 2019 · 3 comments

mrcslws commented Nov 3, 2019

This code uses tensor.sign() to binarize the activations and weights.

return tensor.sign()

The desired behavior is to always return -1 or 1, but sign() returns 0 for inputs that are exactly 0.

Batch normalization makes 0 less probable, but it can still happen. The code should probably force every activation to be either -1 or 1.
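The gap is easy to reproduce. A minimal sketch, using NumPy since `np.sign` behaves like PyTorch's `tensor.sign()` on this point, shows the zero leaking through:

```python
import numpy as np

# np.sign, like PyTorch's tensor.sign(), maps 0 to 0 rather than to -1 or 1,
# so a "binarized" tensor that contains exact zeros ends up ternary.
x = np.array([-2.0, 0.0, 3.0])
binarized = np.sign(x)
print(binarized)  # [-1.  0.  1.] -- the 0 survives binarization
```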

@Ronalmoo

Agree.

donghn commented Dec 24, 2020

Actually, no activations or weights are equal to 0; I already checked.

m-mb commented Mar 3, 2022

I faced the same issue, and in my case weights and activations with a value of exactly 0 appear quite often.
Replacing tensor.sign() with torch.where(tensor >= 0, 1., -1.) does the trick.
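A quick check of this replacement, sketched with NumPy's `np.where` (which behaves like `torch.where` for this pattern), confirms that exact zeros now map to +1 and the output is strictly binary:

```python
import numpy as np

# Values >= 0 (including exact zeros) become 1.0, everything else -1.0,
# so the result contains only -1 and 1, unlike np.sign / tensor.sign().
x = np.array([-2.0, 0.0, 3.0])
binarized = np.where(x >= 0, 1.0, -1.0)
print(binarized)  # [-1.  1.  1.] -- zeros are forced to +1
```

Mapping 0 to +1 is an arbitrary but consistent tie-break; the key property is that no 0 survives.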
