Low-Light Image and Video Enhancement Using Deep Learning: A Survey
Chongyi Li, Chunle Guo, Linghao Han, Jun Jiang, Ming-Ming Cheng, Jinwei Gu, Chen Change Loy
IEEE Transactions on Pattern Analysis and Machine Intelligence
Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination. Recent advances in this area are dominated by deep learning-based solutions, in which diverse learning strategies, network structures, loss functions, and training data have been employed. In this paper, we provide a comprehensive survey covering aspects ranging from algorithm taxonomy to unsolved open issues. To examine the generalization of…
Towards Robust Low Light Image Enhancement
This work proposes a low light image enhancement solution to produce visually pleasing normal light images and demonstrates the generalization power of the approach using zero-shot cross-dataset transfer, i.e., it evaluates on datasets that were never seen during training.
LEDNet: Joint Low-light Enhancement and Deblurring in the Dark
This work introduces a novel data synthesis pipeline that models realistic low-light blurring degradations, presents the first large-scale dataset for this task, LOL-Blur, and proposes an effective network, named LEDNet, that performs joint low-light enhancement and deblurring.
Interactive and Fast Low-Light Image Enhancement Algorithm and Application
A novel interactive algorithm based on a well-designed gamma curve is proposed to enrich the enhancement techniques, and a multi-platform low-illumination enhancement software is developed to make the method accessible to the public.
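The summary above describes gamma-curve-based brightening. A minimal sketch of the core idea (plain gamma correction; the paper's interactive curve design and parameters are not reproduced here):

```python
import numpy as np

def gamma_enhance(img, gamma=0.5):
    """Brighten a low-light image with a gamma curve.

    img: float array with values in [0, 1].
    gamma < 1 brightens dark regions; gamma > 1 darkens.
    """
    return np.clip(img, 0.0, 1.0) ** gamma
```

For example, a dark pixel value of 0.04 maps to 0.2 with gamma = 0.5, while 1.0 stays at 1.0, so highlights are preserved as shadows are lifted.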
Exposure Correction Model to Enhance Image Quality
It is shown that after applying exposure correction with the proposed model, the portrait matting quality increases significantly and the state-of-the-art result on a large-scale exposure dataset is achieved.
Low-light Image and Video Enhancement via Selective Manipulation of Chromaticity
This work introduces "Adaptive Chromaticity", which refers to an adaptive computation of image chromaticity that allows us to avoid the costly step of low-light image decomposition into illumination and reflectance, employed by many existing techniques.
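The chromaticity the entry above manipulates is the intensity-normalized color of each pixel. A minimal sketch of the standard chromaticity computation (the paper's adaptive variant is not reproduced here):

```python
import numpy as np

def chromaticity(img, eps=1e-6):
    """Per-pixel chromaticity: each channel divided by the channel sum.

    img: H x W x 3 float array. Dividing out the channel sum removes
    intensity and keeps only relative color, so it can be computed
    without decomposing the image into illumination and reflectance.
    """
    s = img.sum(axis=-1, keepdims=True)
    return img / np.maximum(s, eps)
```

By construction the three chromaticity channels of every pixel sum to 1.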
Interactive Attention AI to translate low light photos to captions for night scene understanding in women safety
A deep learning model is developed that translates night scenes into sentences, introducing an interactive vision-language capability for nighttime scene perception and opening new possibilities for AI applications in the safety of visually impaired women.
Decoupled Low-Light Image Enhancement
The decoupled model facilitates enhancement in two aspects and achieves state-of-the-art performance in both qualitative and quantitative comparisons against other low-light image enhancement models.
Invertible Network for Unpaired Low-light Image Enhancement
This work proposes to leverage the invertible network to enhance low-light image in forward process and degrade the normal-light one inversely with unpaired learning, and designs various loss functions to ensure the stability of training and preserve more image details.
TBEFN: A Two-Branch Exposure-Fusion Network for Low-Light Image Enhancement
A novel generation-and-fusion strategy is introduced, where the enhancements for slightly and heavily distorted images are carried out respectively in the two enhancing branches, followed by a self-adaptive attention unit to perform the final fusion.
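The final fusion step of the two-branch strategy above amounts to a per-pixel weighted blend of the two enhanced outputs. A minimal sketch (in TBEFN the weight map is predicted by a self-adaptive attention unit; here it is supplied directly for illustration):

```python
import numpy as np

def fuse_branches(branch1, branch2, weight):
    """Blend two enhancement branches with a per-pixel weight map.

    branch1, branch2: enhanced images for slightly / heavily distorted
    inputs respectively; weight: array in [0, 1], same shape.
    """
    weight = np.clip(weight, 0.0, 1.0)
    return weight * branch1 + (1.0 - weight) * branch2
```

A weight of 1 selects the first branch, 0 selects the second, and intermediate values interpolate between them.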
UG2 Track 2: A Collective Benchmark Effort for Evaluating and Advancing Image Understanding in Poor Visibility Environments
The UG2+ challenge at IEEE CVPR 2019 aims to evoke a comprehensive discussion and exploration of how low-level vision techniques can benefit high-level automatic visual recognition in various scenarios, and introduces three benchmark sets collected in real-world hazy, rainy, and low-light conditions.
Learning photographic global tonal adjustment with a database of input/output image pairs
This work creates a high-quality reference dataset by collecting 5,000 photos, manually annotating them, and hiring 5 trained photographers to retouch each picture, and introduces difference learning, a method that models and predicts the differences between users.
Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement
A novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network and shows that it generalizes well to diverse lighting conditions.
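Zero-DCE's image-specific curve is a quadratic mapping applied iteratively. A minimal sketch of the curve itself (in the actual method the per-pixel parameter maps are predicted by a lightweight CNN; here they are supplied directly):

```python
import numpy as np

def zero_dce_curve(x, alphas):
    """Apply Zero-DCE's iterative quadratic curve LE(x) = x + a*x*(1-x).

    x: image with values in [0, 1]; alphas: one curve-parameter map per
    iteration, each with values in [-1, 1]. The curve is monotonic on
    [0, 1] and keeps outputs in range, brightening when a > 0.
    """
    for a in alphas:
        x = x + a * x * (1.0 - x)
    return x
```

For example, two iterations with a = 1 take a pixel value of 0.5 to 0.75 and then to 0.9375, progressively lifting mid-tones while leaving 0 and 1 fixed.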
Learning Temporal Consistency for Low Light Video Enhancement from Single Images
A novel method is proposed to enforce temporal stability in low-light video enhancement using only static images, by learning to infer a motion field from a single image and synthesizing short-range video sequences.
RetinexDIP: A Unified Deep Framework for Low-Light Image Enhancement
This paper proposes a novel “generative” strategy for Retinex decomposition, by which the decomposition is cast as a generative problem, and a unified deep framework is proposed to estimate the latent components and perform low-light image enhancement.
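The Retinex model behind the entry above factors an image into reflectance and illumination, I = R * L. A minimal sketch using a common closed-form initialization (the max over color channels as illumination); RetinexDIP itself estimates both components generatively with networks, which is not reproduced here:

```python
import numpy as np

def retinex_decompose(img, eps=1e-6):
    """Crude Retinex-style split I = R * L.

    img: H x W x 3 float array in [0, 1]. Illumination L is taken as
    the per-pixel channel maximum, a common initialization in
    Retinex-based enhancement; reflectance is R = I / L. Enhancement
    then brightens L and recomposes R * L_enhanced.
    """
    L = img.max(axis=-1, keepdims=True)
    R = img / np.maximum(L, eps)
    return R, np.broadcast_to(L, img.shape)
```

The split is exactly invertible: multiplying R and L reconstructs the input.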
Low-Light Image Enhancement via Progressive-Recursive Network
This study proposes a progressive-recursive image enhancement network (PRIEN) to enhance low-light images and demonstrates the advantages of the method over competing approaches from both qualitative and quantitative perspectives.
Successive Graph Convolutional Network for Image De-raining
This paper proposes a graph convolutional network (GCN)-based model, introduces simple yet effective recurrent operations that perform de-raining in a successive manner, and achieves state-of-the-art results on both synthetic and real-world datasets.
Band Representation-Based Semi-Supervised Low-Light Image Enhancement: Bridging the Gap Between Signal Fidelity and Perceptual Quality
A deep recursive band network is proposed to recover a linear band representation of an enhanced normal-light image under the guidance of paired low/normal-light images, bridging the gap between the restoration knowledge learned from paired data and the perceptual preference for high-quality images.