
Will the code for inference be released? #4

Open
hermosayhl opened this issue Sep 4, 2020 · 3 comments

Comments

@hermosayhl

Your work is great, and I'm very interested in it! However, I'm having trouble figuring out how to run the code on a single image or on a directory of images. The Chainer code is too hard for me to follow in full, and demo.py is not useful to me because it does not save the produced images directly. I need the produced images for further experiments.
I'm looking forward to the inference code.

@satoshi-kosugi
Owner

I am not planning to release the inference code.
If you want to save the result of the demo code, please replace line 249 in chainer_spiral/environments/env.py

                edit_demo(self.original_original_images[i] * 255, clipped_action[i])

with

                # Requires cv2, os, and numpy (np) to be available in env.py.
                edited_image = self.photo_editor(self.original_original_images[i].copy(), clipped_action[i])
                cv2.imwrite(os.path.basename(self.file_names[i]), (edited_image * 255).astype(np.uint8))

Then, please run demo.py.
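
If you would rather keep the saved images out of the current working directory, a variant like the following should also work as the replacement for line 249. This is only a sketch: the output_dir name is an arbitrary example, and it assumes cv2, os, and numpy (as np) are importable in env.py, together with the same self.photo_editor, self.original_original_images, self.file_names, and clipped_action used above.

                # Sketch: same replacement as above, but writing into a separate folder.
                output_dir = "demo_outputs"  # example name, not part of the repository
                os.makedirs(output_dir, exist_ok=True)
                edited_image = self.photo_editor(self.original_original_images[i].copy(), clipped_action[i])
                out_path = os.path.join(output_dir, os.path.basename(self.file_names[i]))
                # The editor output is a float image; scale to 8-bit as in the snippet above.
                cv2.imwrite(out_path, (edited_image * 255).astype(np.uint8))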

@hermosayhl
Author


First of all, thanks for your help!

I managed to generate the enhanced results and save them.
However, as mentioned in "Global and Local Enhancement Networks for Paired and Unpaired Image Enhancement", the enhanced images are somewhat underexposed. The statement "For instance, FRL fails to increase brightness sufficiently" is true; see page 12 of 16 of that paper.

Listed below are some results:

[Screenshot: Snipaste_2020-09-05_23-31-13]

Compared with the satisfying results produced by your code during training:

[Image: obs_update_2100]

However, the low-resolution images produced during training do look pleasing.

Compared to the results of "DeepLPF: Deep Local Parametric Filters for Image Enhancement":

[Screenshot: Snipaste_2020-09-05_23-40-25]

@satoshi-kosugi
Copy link
Owner

satoshi-kosugi commented Sep 7, 2020

Reason for underexposed results

Thank you for the information about "Global and Local Enhancement Networks for Paired and Unpaired Image Enhancement".
As mentioned in that paper, our enhanced images are somewhat underexposed. I think this is a limitation of our method, not an implementation error.

The low-resolution images generated during training may differ from the results produced by demo.py.
This is because our method generates slightly different images each time, even when the same training images are used as input.

Comparison with DeepLPF

DeepLPF is a method for paired photo enhancement, while our method is for unpaired photo enhancement.
Naturally, DeepLPF will give better results.
