Hello,
I built a model to colorize grayscale images. During the training phase I feed the network 100 RGB images of a forest, convert them to the LAB color space, and split the training set into the L channel (the input) and the AB channels (the target).
Based on the trained AB data, the model should predict those two channels for a grayscale input image during the testing phase.
Here is my problem: I trained a model with a different architecture on 10 images, the loss decreased to 0.0035, and it worked well. Encouraged by that, I increased the size of the dataset hoping for a better result, but instead the loss and the accuracy stay constant and the model output is a mess.
My code is below. I hope someone can point out what I am doing wrong. Is it the optimizer? The loss function? The batch size? Or something else I am not aware of?
Thank you in advance.
Load images
import os
import numpy as np
from keras.preprocessing.image import img_to_array, load_img
from skimage.color import rgb2lab

MODEL_NAME = 'forest'
X = []
Y = []
for filename in os.listdir('forest/'):
    if filename != '.DS_Store':
        image = img_to_array(load_img("forest/" + filename))
        image = np.array(image, dtype=float)
        lab = rgb2lab(1.0 / 255 * image)
        imL = lab[:, :, 0]          # lightness channel -> network input
        X.append(imL)
        imAB = lab[:, :, 1:] / 128  # a/b channels scaled to roughly [-1, 1] -> target
        Y.append(imAB)
X = np.array(X)
Y = np.array(Y)
# Pack every pixel of every image into a single sample of height 256
X = X.reshape(1, 256, np.size(X) // 256, 1)
Y = Y.reshape(1, 256, np.size(Y) // 256 // 2, 2)
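For reference, here is a quick check of what that reshape actually produces, using dummy stand-ins for the images (shapes assumed to be 256×256, as the reshape implies). NumPy reshapes in C order, so the result is one long buffer re-cut into 256 rows, not the images placed side by side:

```python
import numpy as np

# Dummy stand-ins for three 256x256 grayscale images: image i is filled with i
X = np.stack([np.full((256, 256), i, dtype=float) for i in range(3)])
assert X.shape == (3, 256, 256)

# The reshape from the question: one sample of height 256 holding all pixels
X_packed = X.reshape(1, 256, np.size(X) // 256, 1)
assert X_packed.shape == (1, 256, 768, 1)

# Row 0 of the packed sample is simply the first 768 pixels of image 0
# (its rows 0-2), so each packed row mixes several rows of one original image
print(np.unique(X_packed[0, 0]))  # [0.]
```

The same reasoning applies to Y, with a trailing dimension of 2 for the AB channels.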
Building the neural network
model = Sequential()
model.add(InputLayer(input_shape=(256, np.size(X) // 256, 1)))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=1))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', strides=1))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', strides=1))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=1))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', strides=1))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(2, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(2, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2, 2)))
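As a sanity check on the architecture, the spatial dimensions can be tracked by hand: with 'same' padding, stride-1 convolutions preserve size, each stride-2 convolution halves it, and each UpSampling2D((2, 2)) doubles it (height shown below; the width behaves the same way):

```python
# Strides of the nine Conv2D layers before the first upsampling layer
h = 256
for stride in [2, 1, 2, 1, 2, 1, 1, 1, 1]:
    h //= stride
h_bottleneck = h
print(h_bottleneck)  # 32 after the three stride-2 convolutions

# The three UpSampling2D((2, 2)) layers; the stride-1 convolutions
# between them leave the size unchanged
for _ in range(3):
    h *= 2
print(h)  # back to 256, matching the height of the target Y
```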
Finish model
model.compile(optimizer='rmsprop', loss='mse', metrics=['acc'])
Train the neural network
model.fit(x=X, y=Y, batch_size=100, epochs=1000)
Output
Epoch 1/1000
1/1 [==============================] - 7s 7s/step - loss: 0.0214 - acc: 0.4987
Epoch 2/1000
1/1 [==============================] - 7s 7s/step - loss: 0.0214 - acc: 0.4987
Epoch 3/1000
1/1 [==============================] - 9s 9s/step - loss: 0.0214 - acc: 0.4987
Epoch 4/1000
1/1 [==============================] - 8s 8s/step - loss: 0.0214 - acc: 0.4987
...
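For completeness, the testing phase described at the top (predict AB for a grayscale input and recombine it with the L channel) could be sketched as follows. The arrays here are dummy placeholders standing in for a real L channel and for model.predict output, and skimage.color.lab2rgb would perform the final conversion back to RGB:

```python
import numpy as np

# Dummy stand-ins: L channel of one 256x256 test image, and the network's
# predicted AB channels (scaled to [-1, 1] like the training targets)
L_channel = np.random.uniform(0, 100, size=(256, 256, 1))
predicted_ab = np.random.uniform(-1, 1, size=(256, 256, 2))

# Undo the /128 scaling applied to the training targets, then stack L with AB
lab_image = np.concatenate([L_channel, predicted_ab * 128], axis=-1)
print(lab_image.shape)  # (256, 256, 3)

# skimage.color.lab2rgb(lab_image) would convert this LAB array to RGB
```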