Hi,
Is there any way to define a layer that computes its forward-pass output by projecting the weight values into a different value range, but uses the real weight values for back-propagation? For example, in the mnist.py file, the hidden layer 1 forward pass could be
hidden1 = tf.nn.relu(tf.matmul(images, projection_func(weights)) + biases)
but during back-propagation the weights would be updated without projection_func().
This would be useful for implementing BNNs, to apply functions such as the ones mentioned by @Jony101K here.
Thanks.
Alexandre
You can do this with Defun and a custom Python gradient function:

import tensorflow as tf
from tensorflow.python.framework.function import Defun

def _projection_func_grad(op, grad):
    # compute the (fake) gradient; in this case just pass it through
    return grad

@Defun(tf.float32, python_grad_func=_projection_func_grad)
def projection_func(weights):
    # do something
    return weights
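For a concrete BNN-style projection, the forward pass can binarize the weights with tf.sign while the gradient function above passes the incoming gradient straight through to the real-valued variable (a straight-through estimator). The following is only a rough sketch against the TF 1.x graph API; the sign projection, the layer sizes, and names such as binarize are illustrative and not something given in this thread:

import tensorflow as tf
from tensorflow.python.framework.function import Defun

def _binarize_grad(op, grad):
    # straight-through estimator: hand the incoming gradient to the
    # real-valued weights unchanged
    return grad

@Defun(tf.float32, python_grad_func=_binarize_grad)
def binarize(weights):
    # forward pass sees sign(weights); back-propagation ignores this projection
    return tf.sign(weights)

images = tf.placeholder(tf.float32, [None, 784])
weights = tf.Variable(tf.truncated_normal([784, 128], stddev=0.1))
biases = tf.Variable(tf.zeros([128]))

w_bin = binarize(weights)
w_bin.set_shape(weights.get_shape())  # Defun outputs can lose static shape info

# forward pass uses the projected (binary) weights ...
hidden1 = tf.nn.relu(tf.matmul(images, w_bin) + biases)

# ... while the gradient w.r.t. the real-valued variable is the ordinary
# matmul/relu gradient, with the binarization treated as identity
grad_wrt_weights = tf.gradients(hidden1, weights)[0]

Any optimizer applied to a loss built on hidden1 would then update the underlying real-valued weights, which is the behaviour asked for above.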
This question is better asked on StackOverflow since it is not a bug or feature request. There is also a larger community that reads questions there. Thanks!