Toward Fast, Flexible, and Robust Low-Light Image Enhancement

@article{Ma2022TowardFF,
  title={Toward Fast, Flexible, and Robust Low-Light Image Enhancement},
  author={Long Ma and Tengyu Ma and Risheng Liu and Xin Fan and Zhongxuan Luo},
  journal={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022},
  pages={5627-5636}
}
  • Long Ma, Tengyu Ma, Risheng Liu, Xin Fan, Zhongxuan Luo
  • Published 21 April 2022
  • 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Most existing low-light image enhancement techniques struggle to balance visual quality with computational efficiency, and they commonly fail in unknown, complex scenarios. In this paper, we develop a new Self-Calibrated Illumination (SCI) learning framework for fast, flexible, and robust brightening of images in real-world low-light scenarios. To be specific, we establish a cascaded illumination learning process with weight sharing to handle this task. Considering the… 
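
To make the cascaded, weight-sharing idea concrete, below is a minimal sketch of stage-wise illumination estimation with a single shared estimator and Retinex-style division. The module architecture, stage count, and the simplified self-calibration step are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch (not the authors' released code): cascaded illumination
# estimation with one shared-weight module and Retinex-style division.
import torch
import torch.nn as nn


class IlluminationEstimator(nn.Module):
    """Tiny CNN predicting a residual illumination map (assumed architecture)."""

    def __init__(self, channels: int = 3, width: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


class CascadedIllumination(nn.Module):
    """Unrolls several stages that all reuse the same estimator (weight sharing)."""

    def __init__(self, stages: int = 3):
        super().__init__()
        self.estimator = IlluminationEstimator()  # one module shared by every stage
        self.stages = stages

    def forward(self, low):
        x = low
        outputs = []
        for _ in range(self.stages):
            illum = torch.clamp(x + self.estimator(x), 1e-3, 1.0)  # residual illumination
            enhanced = torch.clamp(low / illum, 0.0, 1.0)          # Retinex-style division
            outputs.append(enhanced)
            # Placeholder self-calibration: nudge the next stage's input toward the
            # current estimate (the real module is learned; this is only illustrative).
            x = low + 0.5 * (enhanced - low).detach()
        return outputs


if __name__ == "__main__":
    model = CascadedIllumination(stages=3)
    dummy = torch.rand(1, 3, 64, 64)  # fake low-light image in [0, 1]
    print([out.shape for out in model(dummy)])
```

At test time, a scheme like this would only need the first stage's output, which is where the speed advantage of a cascaded-training, single-stage-inference design comes from.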

LRT: An Efficient Low-Light Restoration Transformer for Dark Light Field Images

Light field (LF) images, which contain information from multiple views, have numerous applications but can be severely affected by low-light imaging. Recent learning-based methods for low-light…

Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising

In RLED-Net, a plug-and-play differentiable Latent Subspace Reconstruction Block embeds real-world images into low-rank subspaces to suppress noise and rectify errors, so that the impact of noise during enhancement is effectively reduced.
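
As a rough illustration of why a low-rank subspace helps with noise, the snippet below projects each channel onto its top-k singular components via truncated SVD; it is a generic stand-in for, not a reproduction of, RLED-Net's learned Latent Subspace Reconstruction Block.

```python
# Generic low-rank projection (truncated SVD) as an illustration of noise
# suppression in a low-rank subspace; not RLED-Net's learned block.
import numpy as np


def low_rank_project(img: np.ndarray, rank: int = 8) -> np.ndarray:
    """img: float array of shape (H, W, C); returns a rank-`rank` approximation per channel."""
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        u, s, vt = np.linalg.svd(img[..., c], full_matrices=False)
        s[rank:] = 0.0                 # drop small singular values, which carry mostly noise
        out[..., c] = (u * s) @ vt     # reconstruct from the retained components
    return out


if __name__ == "__main__":
    clean = np.tile(np.linspace(0.1, 0.3, 64), (64, 1))[..., None].repeat(3, axis=2)
    noisy = np.clip(clean + np.random.normal(0.0, 0.05, clean.shape), 0.0, 1.0)
    denoised = low_rank_project(noisy, rank=4)
    print(np.abs(denoised - clean).mean(), np.abs(noisy - clean).mean())
```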

Low-light image enhancement via multistage feature fusion network

This work proposes an LLIE method built on a multistage feature fusion network that provides real and effective supervision at each stage, controls the transmission of a small amount of critical feature information between stages, and adds illumination guidance for image segmentation at the beginning of each stage of the network.

Rawgment: Noise-Accounted RAW Augmentation Enables Recognition in a Wide Variety of Environments

The noise-accounted RAW augmentation method doubles image recognition accuracy in challenging environments using only simple training data, and introduces a noise-amount alignment method that calibrates the domain gap in noise properties caused by the augmentation.
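
The general idea can be sketched as follows: darken an image, then inject signal-dependent shot noise and signal-independent read noise so that the augmented sample's noise level resembles a real dark capture. The noise-model parameters below are assumptions for illustration, not Rawgment's calibrated sensor values.

```python
# Illustrative noise-accounted darkening: the noise parameters are assumptions,
# not a calibrated sensor model.
import numpy as np


def darken_with_noise(raw, gain=0.1, shot_scale=0.01, read_std=0.002, rng=None):
    """raw: float array in [0, 1]; returns a darkened copy with matching synthetic noise."""
    rng = rng or np.random.default_rng()
    dark = raw * gain                                            # simulate lower exposure
    shot_sigma = np.sqrt(np.clip(dark, 0.0, None) * shot_scale)  # signal-dependent (shot) noise level
    shot = rng.normal(0.0, shot_sigma)                           # Gaussian approximation of shot noise
    read = rng.normal(0.0, read_std, size=dark.shape)            # signal-independent read noise
    return np.clip(dark + shot + read, 0.0, 1.0)


if __name__ == "__main__":
    bright = np.random.rand(64, 64, 3)
    augmented = darken_with_noise(bright, gain=0.05)
    print(bright.mean(), augmented.mean())
```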

Fractal pyramid low-light image enhancement network with illumination information

A two-stage low-light image enhancement network, the fractal pyramid network with illumination information (FPN-IL), is proposed; it makes full use of contextual information through abundant interactions of features at different scales.

NoiSER: Noise is All You Need for Enhancing Low-Light Images Without Task-Related Data

This paper proposes a new, effective, and efficient method, termed Noise SElf-Regression (NoiSER), which learns a gray-world mapping from a Gaussian distribution for low-light image enhancement (LLIE) and is highly competitive with current LLIE models trained on task-related data in terms of quantitative and visual results.

Low-Light Image and Video Enhancement: A Comprehensive Survey and Beyond

This paper introduces Night Wenzhou, a large-scale, high-resolution video dataset, to address the lack of low-light video datasets; it also constructs a hierarchical taxonomy, conducts extensive analysis of key techniques, and performs experimental comparisons of representative LLIE approaches using the proposed dataset and current benchmark datasets.

Unified three-pathway framework for naturalness preservation image enhancement

A unified three-pathway framework is proposed to address these deficiencies in low-light enhancement (LLE), and experiments show that it outperforms state-of-the-art methods.

NoiSER: Noise is All You Need for Low-Light Image Enhancement

Compared to existing SOTA LLIE methods with access to different task-related data, NoiSER is surprisingly highly competitive in enhancement quality, yet with a much smaller model size, and much lower training and inference cost.

SufrinNet: Toward Sufficient Cross-View Interaction for Stereo Image Enhancement in The Dark

This work presents a decoupled interaction module (DIM) aimed at sufficient dual-view information interaction, together with a spatial-channel information mining block (SIMB) for intra-view feature extraction; the benefits are twofold.

References

Showing 1-10 of 39 references

LIME: Low-Light Image Enhancement via Illumination Map Estimation

Experiments on a number of challenging low-light images are presented to reveal the efficacy of the proposed LIME and to show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
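
The core recipe behind illumination-map-based enhancement of this kind fits in a few lines: initialize the illumination as the per-pixel maximum over the RGB channels, refine it, gamma-adjust it, and divide the input by it. In the hedged sketch below, a Gaussian blur stands in for LIME's structure-aware refinement, and the gamma value is an assumed default.

```python
# Illumination-map enhancement sketch in the spirit of LIME; Gaussian smoothing
# stands in for structure-aware refinement, and gamma=0.8 is an assumed default.
import numpy as np
from scipy.ndimage import gaussian_filter


def enhance_with_illumination_map(img, gamma=0.8, eps=1e-3, sigma=3.0):
    """img: float array in [0, 1] with shape (H, W, 3)."""
    illum = img.max(axis=2)                      # initial illumination: per-pixel max over RGB
    illum = gaussian_filter(illum, sigma=sigma)  # crude surrogate for structure-aware refinement
    illum = np.clip(illum, eps, 1.0) ** gamma    # gamma-adjust the illumination map
    return np.clip(img / illum[..., None], 0.0, 1.0)  # Retinex-style division


if __name__ == "__main__":
    dark = np.random.rand(64, 64, 3) * 0.2       # synthetic under-exposed image
    bright = enhance_with_illumination_map(dark)
    print(dark.mean(), bright.mean())
```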

Beyond Brightening Low-light Images

This work builds a simple yet effective network which, inspired by Retinex theory, decomposes images into two components following a divide-and-conquer principle, and is trained with paired images shot under different exposure conditions.
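
A minimal sketch of the decomposition objective such paired-exposure Retinex training typically uses is given below: each image should be reconstructed as reflectance times illumination, and the reflectance maps of the low/normal pair should agree. Tensor names and the loss weight are illustrative assumptions, not the paper's exact objective.

```python
# Sketch of a paired-exposure Retinex decomposition loss; names and the weight
# are illustrative assumptions.
import torch


def decomposition_loss(r_low, l_low, img_low, r_high, l_high, img_high, w_reflect=0.1):
    """Reconstruct each image as reflectance * illumination; tie the two reflectances together."""
    recon = (r_low * l_low - img_low).abs().mean() + (r_high * l_high - img_high).abs().mean()
    reflect_consistency = (r_low - r_high).abs().mean()  # same scene -> same reflectance
    return recon + w_reflect * reflect_consistency


if __name__ == "__main__":
    tensors = [torch.rand(1, 3, 32, 32) for _ in range(6)]
    print(decomposition_loss(*tensors).item())
```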

Underexposed Photo Enhancement Using Deep Illumination Estimation

A new neural network for enhancing underexposed photos is presented; it introduces intermediate illumination into the network to associate the input with the expected enhancement result, augmenting the network's capability to learn complex photographic adjustments from expert-retouched input/output image pairs.

Low-Light Image Enhancement With Semi-Decoupled Decomposition

Experimental results on several public datasets demonstrate that the proposed Retinex-based low-light image enhancement method produces images with both higher visibility and better visual quality, outperforming state-of-the-art low-light enhancement methods on several objective and subjective evaluation metrics.

Advancing Image Understanding in Poor Visibility Environments: A Collective Benchmark Study

The UG2+ challenge Track 2 competition in IEEE CVPR 2019 is launched, aiming to evoke a comprehensive discussion and exploration about whether and how low-level vision techniques can benefit the high-level automatic visual recognition in various scenarios.

From Fidelity to Perceptual Quality: A Semi-Supervised Approach for Low-Light Image Enhancement

A deep recursive band network (DRBN) is proposed to recover a linear band representation of an enhanced normal-light image with paired low/normal-light images, and then obtain an improved one by recomposing the given bands via another learnable linear transformation based on a perceptual quality-driven adversarial learning with unpaired data.

Getting to Know Low-light Images with The Exclusively Dark Dataset

Bridging the Gap between Low-Light Scenes: Bilevel Learning for Fast Adaptation

This work constructs a Retinex-induced encoder-decoder with an adaptive denoising mechanism, aiming at covering more practical cases, and provides a new hyperparameter optimization perspective to formulate a bilevel learning scheme towards general low-light scenarios.

EnlightenGAN: Deep Light Enhancement Without Paired Supervision

This paper proposes a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, that can be trained without low/normal-light image pairs, yet proves to generalize very well on various real-world test images.

Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement

A new context-sensitive decomposition network (CSDNet) architecture is developed to exploit scene-level contextual dependencies across spatial scales, and a lightweight variant is obtained by reducing the number of channels.