Loss and accuracy don't change during the training phase #246

Open
mostafa-khaldi opened this issue Apr 12, 2018 · 0 comments

mostafa-khaldi commented Apr 12, 2018

Hello,
I built a model to colorize grayscale images. During the training phase I feed the network 100 RGB images of a forest, then convert the images to the LAB color space and split the training set into the L and AB channels.
Based on the trained AB data, the model predicts these two channels for the grayscale input image during the testing phase.
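
For reference, the testing phase I have in mind recombines the predicted AB channels with the input L channel roughly like this (a sketch, not my exact code; testL stands for the L channel of the grayscale test image):

import numpy as np
from skimage.color import lab2rgb

# testL: L channel of the test image, shape (256, width), values in [0, 100]
# model: the trained network defined below
AB = model.predict(testL.reshape(1, 256, testL.shape[1], 1))
AB = AB * 128                       # undo the /128 scaling applied to the training targets

lab = np.zeros((256, testL.shape[1], 3))
lab[:, :, 0] = testL                # keep the input lightness
lab[:, :, 1:] = AB[0]               # predicted color channels
rgb = lab2rgb(lab)                  # back to RGB for display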
Now to my problem: I previously trained a model with a different architecture on 10 images, the loss decreased to 0.0035, and it worked well. So I increased the size of the dataset to get a better result, but instead the loss and accuracy stay constant and the model output is a mess.
My code is below. I would appreciate any pointers on what I am doing wrong: is it the optimizer? The loss function? The batch size? Or something else I am not aware of?
Thank you in advance.

Load images

import os
import numpy as np
from keras.models import Sequential
from keras.layers import InputLayer, Conv2D, UpSampling2D
from keras.preprocessing.image import img_to_array, load_img
from skimage.color import rgb2lab

MODEL_NAME = 'forest'
X = []
Y = []
for filename in os.listdir('forest/'):
    if filename != '.DS_Store':
        image = img_to_array(load_img("forest/" + filename))
        image = np.array(image, dtype=float)
        imL = rgb2lab(1.0 / 255 * image)[:, :, 0]    # L channel (lightness) as network input
        X.append(imL)
        imAB = rgb2lab(1.0 / 255 * image)[:, :, 1:]  # AB channels as prediction target
        imAB = imAB / 128                            # scale AB to roughly [-1, 1]
        Y.append(imAB)

X = np.array(X)
Y = np.array(Y)

X = X.reshape(1, 256, np.size(X) // 256, 1)       # entire dataset becomes ONE sample: (1, 256, N*256, 1)
Y = Y.reshape(1, 256, np.size(Y) // 256 // 2, 2)  # likewise (1, 256, N*256, 2)

Building the neural network

model = Sequential()
model.add(InputLayer(input_shape=(256, X.shape[2], 1)))
# encoder: three stride-2 convolutions downsample the input by a factor of 8
model.add(Conv2D(8, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=1))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', strides=1))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', strides=1))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=1))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', strides=1))
# decoder: three UpSampling2D layers restore the original resolution,
# ending with 2 output channels for the predicted AB values
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(2, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(2, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2, 2)))

Finish model

model.compile(optimizer='rmsprop', loss='mse', metrics=['acc'])

Train the neural network

model.fit(x=X, y=Y, batch_size=100, epochs=1000)

Output

Epoch 1/1000
1/1 [==============================] - 7s 7s/step - loss: 0.0214 - acc: 0.4987
Epoch 2/1000
1/1 [==============================] - 7s 7s/step - loss: 0.0214 - acc: 0.4987
Epoch 3/1000
1/1 [==============================] - 9s 9s/step - loss: 0.0214 - acc: 0.4987
Epoch 4/1000
1/1 [==============================] - 8s 8s/step - loss: 0.0214 - acc: 0.4987
.
.
.
.
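
Update: re-reading the log, I realize that after my reshape X has shape (1, 256, N*256, 1), so the whole dataset becomes a single sample and batch_size=100 has no effect (hence the 1/1 steps above). Would a layout like the following be more appropriate, keeping each image as its own sample? (a sketch, assuming all images are 256x256)

# each of the N images stays a separate sample instead of one giant sample
X = np.array(X).reshape(-1, 256, 256, 1)   # (N, 256, 256, 1)
Y = np.array(Y).reshape(-1, 256, 256, 2)   # (N, 256, 256, 2)

model = Sequential()
model.add(InputLayer(input_shape=(256, 256, 1)))
# ... same conv / upsampling stack as above ...
model.compile(optimizer='rmsprop', loss='mse', metrics=['acc'])
model.fit(x=X, y=Y, batch_size=10, epochs=1000)   # batches now actually split the data

Also, since the AB targets are scaled to [-1, 1] but the last layers use relu, which can only output non-negative values, I wonder if the final activation should be tanh or linear instead.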
