Learning to See in the Dark
Chen Chen, Qifeng Chen, Jia Xu, Vladlen Koltun
  • Published 4 May 2018
  • Computer Science
  • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Imaging in low light is challenging due to low photon count and low SNR. […] Using the presented dataset, we develop a pipeline for processing low-light images, based on end-to-end training of a fully-convolutional network. The network operates directly on raw sensor data and replaces much of the traditional image-processing pipeline, which tends to perform poorly on such data. We report promising results on the new dataset, analyze factors that affect performance, and highlight opportunities for…
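The raw preprocessing this pipeline is built on (packing the Bayer mosaic into four half-resolution channels and amplifying the dark signal before the network) can be sketched as follows. This is a rough illustration, not the paper's exact code; the black level, white level, and amplification ratio are example values, and an RGGB pattern is assumed:

```python
import numpy as np

def pack_bayer(raw, black_level=512, white_level=16383, ratio=100):
    """Pack an RGGB Bayer mosaic (H, W) into 4 half-resolution channels
    and amplify the low-light signal before feeding the network.
    Sensor constants here are illustrative, not universal."""
    # Subtract the black level and normalize to [0, 1].
    im = np.maximum(raw.astype(np.float32) - black_level, 0.0)
    im = im / (white_level - black_level)
    # Pack the 2x2 Bayer pattern into channels (assumed RGGB layout).
    packed = np.stack([im[0::2, 0::2],   # R
                       im[0::2, 1::2],   # G
                       im[1::2, 0::2],   # G
                       im[1::2, 1::2]],  # B
                      axis=-1)
    # Scale by the exposure amplification ratio, clipping to [0, 1].
    return np.minimum(packed * ratio, 1.0)
```

The packed tensor is what a fully-convolutional network (e.g. a U-Net) would consume directly, in place of the conventional demosaic/denoise/tone-map stages.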
Fast Imaging in the Dark by using Convolutional Network
A lightweight convolutional network with fewer parameters and lower computational cost than a regular-size network is developed, which is expected to enable deployment on more resource-constrained edge devices and their applications.
Seeing Motion in the Dark
By carefully designing a learning-based pipeline and introducing a new loss function to encourage temporal stability, a Siamese network is trained on static raw videos, for which ground truth is available, so that the network generalizes to videos of dynamic scenes at test time.
Learning to See in Extremely Low-Light Environments with Small Data
This work proposes a method named NL2LL that collects underexposed images and the corresponding normal-exposure images by adjusting camera settings under "normal" daytime lighting, and describes a regularized denoising autoencoder that is effective for restoring low-light images.
Learning to Restore Low-Light Images via Decomposition-and-Enhancement
A frequency-based decomposition-and-enhancement model first learns to recover image objects in the low-frequency layer and then enhances high-frequency details based on the recovered objects; it outperforms state-of-the-art approaches in enhancing practical noisy low-light images.
Improving Extreme Low-Light Image Denoising via Residual Learning
This paper proposes a new residual learning based deep neural network for end-to-end extreme low-light image denoising that can not only significantly reduce the computational cost but also improve the quality over existing methods in both objective and subjective metrics.
Deep Bilateral Retinex for Low-Light Image Enhancement
A neural network is trained to generate a set of pixel-wise operators that simultaneously predict the noise and the illumination layer; the operators are defined in bilateral space so that the reflectance layer can be predicted accurately in the presence of significant spatially varying measurement noise.
CEL-Net: Continuous Exposure for Extreme Low-Light Imaging
A model for extreme low-light imaging is presented that can continuously tune the input or output exposure level of an image to an unseen level; its properties are investigated and its performance validated, showing promising results.
Deep Multi-path Low-Light Image Enhancement
A novel multi-path convolutional neural network architecture is proposed for tackling noise and color shifts in low-light conditions by using different well-designed custom networks and loss functions so that luminance and chroma can be well restored.
Progressive Joint Low-Light Enhancement and Noise Removal for Raw Images
This paper proposes a low-light imaging framework that performs joint illumination adjustment, color enhancement, and denoising to tackle the problem of low image quality, and significantly reduces the effort required to fine-tune the approach for practical use.
Deep Convolutional Denoising of Low-Light Images
This paper demonstrates how by training the same network with images having a specific peak value, the denoiser outperforms previous state-of-the-art by a large margin both visually and quantitatively.
LLNet: A deep autoencoder approach to natural low-light image enhancement
Deblurring Low-Light Images with Light Streaks
This work introduces a non-linear blur model that explicitly models light streaks and their underlying light sources, and poses them as constraints for estimating the blur kernel in an optimization framework, and automatically detects useful light streaks in the input image.
LIME: Low-Light Image Enhancement via Illumination Map Estimation
Experiments on a number of challenging low-light images are presented to reveal the efficacy of the proposed LIME and show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
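The core LIME idea can be sketched in a few lines, assuming the Retinex model I = R ∘ T: estimate the illumination map T as the per-pixel maximum over color channels, then divide it out. This simplified version omits the structure-aware refinement of the full method, and the gamma value is an illustrative choice:

```python
import numpy as np

def lime_enhance(img, gamma=0.8, eps=1e-3):
    """Simplified LIME-style enhancement for an RGB image in [0, 1]:
    estimate the illumination map as the per-pixel channel maximum,
    apply a gamma adjustment, and divide it out of the input."""
    t = np.max(img, axis=-1, keepdims=True)   # initial illumination map
    t = np.clip(t, eps, 1.0) ** gamma         # avoid division by ~0
    return np.clip(img / t, 0.0, 1.0)
```

Because T ≤ 1, dividing by the gamma-adjusted map brightens dark regions while leaving already well-lit pixels largely unchanged.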
RENOIR - A dataset for real low-light image noise reduction
Seeing Mt. Rainier: Lucky imaging for multi-image denoising, sharpening, and haze removal
A novel local weighted averaging method based on ideas from "lucky imaging" minimizes blur, resampling, and alignment errors, as well as the effects of sensor dust, to maintain the sharpness of the original pixel grid and produce a sharp, clean image.
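The local weighted averaging idea can be illustrated with a toy merge that weights each aligned frame per pixel by its local gradient energy, so locally sharper frames dominate. This is only a stand-in for the paper's actual sharpness measure and weighting scheme:

```python
import numpy as np

def lucky_average(frames, eps=1e-6):
    """Toy lucky-imaging merge of pre-aligned grayscale frames:
    per-pixel weights proportional to local gradient energy, a crude
    sharpness proxy, so sharper regions contribute more."""
    num, den = 0.0, 0.0
    for f in frames:
        f = f.astype(np.float32)
        gy, gx = np.gradient(f)          # finite-difference gradients
        w = gx ** 2 + gy ** 2 + eps      # sharpness proxy per pixel
        num = num + w * f
        den = den + w
    return num / den
```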
Deep joint demosaicking and denoising
A new data-driven approach for demosaicking and denoising is introduced: a deep neural network is trained on a large corpus of images instead of using hand-tuned filters, and this network and training procedure outperform the state of the art on both noisy and noise-free data.
Adaptive enhancement and noise reduction in very low light-level video
The present work has been inspired by research on vision in nocturnal animals, particularly the spatial and temporal visual summation that allows these animals to see in dim light.
Burst photography for high dynamic range and low-light imaging on mobile cameras
A computational photography pipeline that captures, aligns, and merges a burst of frames to reduce noise and increase dynamic range, built atop Android's Camera2 API and written in the Halide domain-specific language (DSL).
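A much-simplified stand-in for the merge step of such a burst pipeline (the real system aligns tiles and merges in the frequency domain): average aligned frames with per-pixel weights that down-weight pixels deviating from a reference frame, which rejects residual misalignment and motion. The sigma parameter here is an illustrative noise-scale assumption:

```python
import numpy as np

def merge_burst(frames, sigma=0.05):
    """Weighted merge of pre-aligned burst frames: pixels that deviate
    from the reference frame (e.g. due to misalignment or motion) get
    exponentially smaller weight, so they are effectively rejected."""
    ref = frames[0].astype(np.float32)
    num = np.zeros_like(ref)
    den = np.zeros_like(ref)
    for f in frames:
        f = f.astype(np.float32)
        w = np.exp(-((f - ref) ** 2) / (2.0 * sigma ** 2))
        num += w * f
        den += w                          # reference always has weight 1
    return num / den
```

For N well-aligned frames with independent noise, this averaging improves SNR roughly as √N, which is what makes burst capture attractive in low light.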
Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
This paper presents the first convolutional neural network capable of real-time SR of 1080p videos on a single K2 GPU and introduces an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output.
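The periodic-shuffling step of the sub-pixel convolution layer rearranges low-resolution feature maps of depth C·r² into an r×-upscaled output of depth C. A NumPy sketch of the rearrangement (the channel ordering below is one common convention, not necessarily the paper's exact layout):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Periodic shuffling: rearrange an (H, W, C*r*r) feature tensor
    into an (H*r, W*r, C) image, as in the sub-pixel convolution layer."""
    h, w, crr = x.shape
    c = crr // (r * r)
    x = x.reshape(h, w, r, r, c)
    x = x.transpose(0, 2, 1, 3, 4)    # interleave row and column phases
    return x.reshape(h * r, w * r, c)
```

The convolutions themselves run at low resolution; only this cheap reshuffle produces the high-resolution output, which is what makes real-time 1080p super-resolution feasible.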