Hi. Without remembering all the details exactly, I think this is why:

I usually add a small constant eps when going to the log domain, i.e. x_log = log(x + eps), which means that when going back from log to linear you would do x = exp(x_log) - eps.
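As a minimal numpy sketch of that round trip (the eps value here is hypothetical, not necessarily the one used in the repository):

```python
import numpy as np

eps = 1.0 / 255.0  # hypothetical value; the actual eps in the code may differ

x = np.array([0.0, 0.01, 0.5, 1.0])  # linear-domain values, zeros allowed
x_log = np.log(x + eps)              # to log domain: eps avoids log(0)
x_back = np.exp(x_log) - eps         # back to linear: subtract the same eps

assert np.allclose(x, x_back)        # the transform is exactly invertible
```

Subtracting the same eps on the way back is what makes the pair of transforms an exact inverse, which is why the -eps shows up inside tf.exp(y) - eps below.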
The notation with '_' at the end denotes the ground truth, so y_ and y_lum_lin_ correspond to the ground-truth image and luminance, respectively. y and y_lum_lin, on the other hand, refer to the reconstructed image and luminance predicted by the network.
Hi.
I'm reading the paper and can't understand part of the loss function, so I have some questions about the loss function code.
(1) Why didn't you add eps?
```python
y_lum_lin = tf.nn.conv2d(tf.exp(y)-eps, lum_kernel, [1, 1, 1, 1], padding='SAME')
```
(2) Why did you apply tf.log twice to y_lum?
```python
y_lum_ = tf.log(y_lum_lin_ + eps)
y_lum = tf.log(y_lum_lin + eps)
x_lum = tf.log(x_lum_lin + eps)
```
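To illustrate why log appears more than once, here is a numpy sketch of the luminance path for the prediction y: exp undoes the network's log-domain output, a per-pixel weighted channel sum stands in for the 1x1 conv with lum_kernel, and the final log moves the luminance back to the log domain for the loss. The luminance weights here are assumed Rec. 709 coefficients, and the eps value is hypothetical; the repository's actual kernel and eps may differ.

```python
import numpy as np

eps = 1.0 / 255.0                        # hypothetical eps
lum_w = np.array([0.213, 0.715, 0.072])  # assumed Rec. 709 luminance weights

# y: predicted image in the log domain, shape (H, W, 3)
rng = np.random.default_rng(0)
y = np.log(rng.random((4, 4, 3)) + eps)

# Equivalent of the 1x1 conv with lum_kernel: weighted sum over channels,
# applied to the linear-domain image exp(y) - eps
y_lum_lin = np.tensordot(np.exp(y) - eps, lum_w, axes=([-1], [0]))

# Back to the log domain so the luminance loss is computed there
y_lum = np.log(y_lum_lin + eps)
```

So it is not the same tensor being logged twice: y_lum_ and y_lum are the ground-truth and predicted log-luminances, each produced by one log call on its own linear luminance.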