From 623acbbc37fc5745064821b294d3da5ccf5c931c Mon Sep 17 00:00:00 2001
From: ShenZheng2000 <69662345+ShenZheng2000@users.noreply.github.com>
Date: Tue, 5 Oct 2021 23:47:21 +0800
Subject: [PATCH] Update README.md

---
 README.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index dfa44e5d..d06a41e6 100644
--- a/README.md
+++ b/README.md
@@ -1,9 +1,11 @@
 # Semantic-Guided-Low-Light-Image-Enhancement
 This is the official Pytorch implementation for our paper "**Semantic-Guided Zero-Shot Learning for Low-Light Image/Video Enhancement.**"

-A sample low-light video and its enhanced result is available at: [[Low-Light](https://www.youtube.com/watch?v=4Avy_xsczdU)] [[Enhanced](https://www.youtube.com/watch?v=rN6Tf1E-kE8)]
+# Updates
+- 2021.10.5: A sample low-light video and its enhanced result are available at: [[Low-Light](https://www.youtube.com/watch?v=4Avy_xsczdU)] [[Enhanced](https://www.youtube.com/watch?v=rN6Tf1E-kE8)]
+- 2021.10.5: My new [repository](https://github.com/ShenZheng2000/Awesome-Low-Light-Enhancement-with-Deep-Learning) contains a collection of low-light enhancement methods. I hope you find it helpful.
+- 2021.10.4: The arXiv link is available at http://arxiv.org/abs/2110.00970

-The arxiv link is available at http://arxiv.org/abs/2110.00970

 # Abstract
 Low-light images challenge both human perceptions and computer vision algorithms. It is crucial to make algorithms robust to enlighten low-light images for computational photography and computer vision applications such as real-time detection and segmentation tasks. This paper proposes a semantic-guided zero-shot low-light enhancement network which is trained in the absence of paired images, unpaired datasets, and segmentation annotation. Firstly, we design an efficient **enhancement factor extraction** network using depthwise separable convolution. Secondly, we propose a **recurrent image enhancement** network for progressively enhancing the low-light image. Finally, we introduce an **unsupervised semantic segmentation** network for preserving the semantic information. Extensive experiments on various benchmark datasets and a low-light video demonstrate that our model outperforms the previous state-of-the-art qualitatively and quantitatively. We further discuss the benefits of the proposed method for low-light detection and segmentation.
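
The abstract above names depthwise separable convolution as the building block of the **enhancement factor extraction** network. As a rough illustration of that building block only (not the paper's actual architecture; the channel counts, kernel size, and activation below are assumptions), a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel 3x3 (depthwise) conv
    followed by a 1x1 (pointwise) conv that mixes channels.
    Layer sizes here are illustrative, not the paper's configuration."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_channels)
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)
        # Pointwise: 1x1 conv recombines channels into out_channels
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

if __name__ == "__main__":
    x = torch.rand(1, 3, 256, 256)        # dummy low-light RGB image in [0, 1]
    block = DepthwiseSeparableConv(3, 32)
    print(block(x).shape)                 # torch.Size([1, 32, 256, 256])
```

Splitting the 3x3 spatial filtering from the 1x1 channel mixing uses far fewer parameters and multiply-adds than a standard 3x3 convolution with the same channel counts, which is what makes this kind of extraction network lightweight.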