Add Guided Backpropagation tutorial
1202kbs committed Jan 28, 2018
1 parent c58105b commit 7c5b665
Showing 6 changed files with 416 additions and 14 deletions.
388 changes: 388 additions & 0 deletions 2.5 Guided Backpropagation.ipynb

Large diffs are not rendered by default.

19 changes: 16 additions & 3 deletions README.md
@@ -21,6 +21,8 @@ It seems that Github is unable to render some of the equations in the notebooks.

[2.4 Backpropagation](http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.4%20Backpropagation.ipynb)

+[2.5 Guided Backpropagation](http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.5%20Guided%20Backpropagation.ipynb)

[2.6 Class Activation Map](http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.6%20CAM.ipynb)

[3.1 Layer-wise Relevance Propagation Part 1](http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.1%20Layer-wise%20Relevance%20Propagation%20%281%29.ipynb)
@@ -94,6 +96,13 @@ In this section, we ask for a given data point x, what makes it representative o
![alt tag](https://github.com/1202kbs/Understanding-NN/blob/master/assets/2_4_BP/saliency2.png)


+### 2.5 Guided Backpropagation

+![alt tag](https://github.com/1202kbs/Understanding-NN/blob/master/assets/2_5_GBP/DNN_1.png)

+![alt tag](https://github.com/1202kbs/Understanding-NN/blob/master/assets/2_5_GBP/DNN_2.png)


### 2.6 Class Activation Map

![alt tag](https://github.com/1202kbs/Understanding-NN/blob/master/assets/2_6_CAM/cam_1.png)
@@ -171,12 +180,16 @@ This code requires [Tensorflow](https://www.tensorflow.org/), [NumPy](http://www

#### Section 2.5

-[6] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929, 2016.
+[6] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.

+#### Section 2.6

+[7] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929, 2016.

#### Section 3.1

-[7] Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., Samek, W., 07 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE 10 (7), 1-46.
+[8] Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., Samek, W., 07 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE 10 (7), 1-46.

#### Section 3.2

-[8] Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.R., 2017. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition 65, 211-222.
+[9] Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.R., 2017. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition 65, 211-222.
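Since the diff of the new notebook itself is not rendered above, here is a minimal, illustrative sketch of the guided backpropagation idea from Springenberg et al. [6] in TensorFlow 1.x, not the notebook's actual code: the ReLU gradient is overridden so that it propagates only where both the forward activation and the incoming gradient are positive. The small two-layer network, placeholder shape, and tensor names below are assumptions made for the example.

```python
import tensorflow as tf

# Guided backprop gate: keep the gradient only where the forward ReLU
# output was positive (ordinary backprop) AND the incoming gradient is
# positive (the extra deconvnet-style condition of guided backprop).
@tf.RegisterGradient("GuidedRelu")
def _guided_relu_grad(op, grad):
    gate_forward = tf.cast(op.outputs[0] > 0.0, grad.dtype)
    gate_backward = tf.cast(grad > 0.0, grad.dtype)
    return gate_forward * gate_backward * grad

graph = tf.get_default_graph()
with graph.gradient_override_map({"Relu": "GuidedRelu"}):
    # Every tf.nn.relu created inside this block uses the overridden gradient.
    X = tf.placeholder(tf.float32, [None, 784])
    hidden = tf.layers.dense(X, 128, activation=tf.nn.relu)
    logits = tf.layers.dense(hidden, 10)

# Guided-backprop saliency: gradient of the top logit w.r.t. the input pixels.
top_logit = tf.reduce_max(logits, axis=1)
guided_saliency = tf.gradients(top_logit, X)[0]
```

Dropping the `gate_backward` term recovers the plain backpropagation saliency of Section 2.4; everything else stays the same.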
Binary file added assets/2_5_GBP/DNN_1.png
Binary file added assets/2_5_GBP/DNN_2.png
Binary file added assets/2_5_GBP/fig1.png
23 changes: 12 additions & 11 deletions models/models_2_5.py
@@ -14,30 +14,31 @@ def __call__(self, X, reuse=False):
                scope.reuse_variables()

            with tf.variable_scope('layer0'):
-                X_img = tf.reshape(X, [-1, 40, 40, 1])
+                X_img = tf.reshape(X, [-1, 28, 28, 1])

            # Convolutional Layer #1 and Pooling Layer #1
            with tf.variable_scope('layer1'):
-                conv1 = tf.layers.conv2d(inputs=X_img, filters=32, kernel_size=[3, 3], padding="SAME", activation=tf.nn.relu, use_bias=True)
-                pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], padding="SAME", strides=2)
+                conv1 = tf.layers.conv2d(inputs=X_img, filters=32, kernel_size=[3, 3], strides=1, padding="SAME", activation=tf.nn.relu, use_bias=True)
+                pool1 = tf.layers.conv2d(inputs=conv1, filters=32, kernel_size=[2, 2], strides=2, padding="SAME", activation=tf.nn.relu, use_bias=True)

            # Convolutional Layer #2 and Pooling Layer #2
            with tf.variable_scope('layer2'):
-                conv2 = tf.layers.conv2d(inputs=pool1, filters=64, kernel_size=[3, 3], padding="SAME", activation=tf.nn.relu, use_bias=True)
-                pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], padding="SAME", strides=2)
+                conv2 = tf.layers.conv2d(inputs=pool1, filters=64, kernel_size=[3, 3], strides=1, padding="SAME", activation=tf.nn.relu, use_bias=True)
+                pool2 = tf.layers.conv2d(inputs=conv2, filters=64, kernel_size=[2, 2], strides=2, padding="SAME", activation=tf.nn.relu, use_bias=True)

            # Convolutional Layer #3 and Pooling Layer #3
            with tf.variable_scope('layer3'):
-                conv3 = tf.layers.conv2d(inputs=pool2, filters=128, kernel_size=[3, 3], padding="SAME", activation=tf.nn.relu, use_bias=True)
-                pool3 = tf.layers.average_pooling2d(inputs=conv3, pool_size=[10, 10], strides=1)
-                flat = tf.reshape(pool3, [-1, 128])
+                conv3 = tf.layers.conv2d(inputs=pool2, filters=128, kernel_size=[3, 3], strides=1, padding="SAME", activation=tf.nn.relu, use_bias=True)

-            # Logits (no activation) Layer: L5 Final FC 625 inputs -> 10 outputs
+            # Logits (no activation) Layer
            with tf.variable_scope('layer4'):
-                logits = tf.layers.dense(inputs=flat, units=10, use_bias=False)
+                dense = tf.layers.conv2d(inputs=conv3, filters=128, kernel_size=[1, 1], strides=1, padding="SAME", activation=tf.nn.relu, use_bias=True)
+                pool4 = tf.layers.conv2d(inputs=dense, filters=10, kernel_size=[1, 1], strides=1, padding="SAME", activation=tf.nn.relu, use_bias=True)
+                global_avg = tf.layers.average_pooling2d(inputs=pool4, pool_size=[7, 7], strides=1, padding="VALID")
+                logits = tf.reshape(global_avg, [-1, 10])
                prediction = tf.nn.softmax(logits)

-            return [X_img, conv1, pool1, conv2, pool2, conv3, pool3, flat, prediction], logits
+            return [X_img, conv1, pool1, conv2, pool2, conv3, dense, pool4, prediction], logits

    @property
    def vars(self):
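For readers tracing the diff above: the model is rebuilt as an all-convolutional network in the spirit of Springenberg et al. [6], with stride-2 convolutions in place of max-pooling and 1x1 convolutions plus global average pooling in place of the dense logits layer. The standalone function below is an illustrative re-assembly of the new forward pass, assuming the flattened 28x28 MNIST-style input implied by the new reshape; it annotates the resulting tensor shapes but is not the repository's DNN class itself.

```python
import tensorflow as tf

def all_conv_forward(X):
    """Sketch of the revised forward pass; X is assumed to have shape [None, 784]."""
    X_img = tf.reshape(X, [-1, 28, 28, 1])                           # (N, 28, 28, 1)
    conv1 = tf.layers.conv2d(X_img, 32, [3, 3], strides=1,
                             padding="SAME", activation=tf.nn.relu)  # (N, 28, 28, 32)
    # Stride-2 convolution stands in for max-pooling.
    pool1 = tf.layers.conv2d(conv1, 32, [2, 2], strides=2,
                             padding="SAME", activation=tf.nn.relu)  # (N, 14, 14, 32)
    conv2 = tf.layers.conv2d(pool1, 64, [3, 3], strides=1,
                             padding="SAME", activation=tf.nn.relu)  # (N, 14, 14, 64)
    pool2 = tf.layers.conv2d(conv2, 64, [2, 2], strides=2,
                             padding="SAME", activation=tf.nn.relu)  # (N, 7, 7, 64)
    conv3 = tf.layers.conv2d(pool2, 128, [3, 3], strides=1,
                             padding="SAME", activation=tf.nn.relu)  # (N, 7, 7, 128)
    # 1x1 convolutions replace the fully connected layers.
    dense = tf.layers.conv2d(conv3, 128, [1, 1], strides=1,
                             padding="SAME", activation=tf.nn.relu)  # (N, 7, 7, 128)
    pool4 = tf.layers.conv2d(dense, 10, [1, 1], strides=1,
                             padding="SAME", activation=tf.nn.relu)  # (N, 7, 7, 10)
    # Global average pooling over each 7x7 map gives one value per class.
    global_avg = tf.layers.average_pooling2d(pool4, pool_size=[7, 7],
                                             strides=1, padding="VALID")  # (N, 1, 1, 10)
    logits = tf.reshape(global_avg, [-1, 10])                        # (N, 10)
    return logits
```

Averaging ten 1x1-convolution feature maps down to the logits is also the structure that the class activation maps of Section 2.6 [7] build on.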
