Seeing Motion in the Dark

@article{Chen2019SeeingMI,
  title={Seeing Motion in the Dark},
  author={Chen Chen and Qifeng Chen and Minh N. Do and Vladlen Koltun},
  journal={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={3184-3193}
}
Deep learning has recently been applied with impressive results to extreme low-light imaging. Despite the success of single-image processing, extreme low-light video processing is still intractable due to the difficulty of collecting raw video data with corresponding ground truth. Collecting long-exposure ground truth, as was done for single-image processing, is not feasible for dynamic scenes. In this paper, we present deep processing of very dark raw videos: on the order of one lux of… 
Citations

Learning to See in the Dark with Events
TLDR
A novel unsupervised domain adaptation network is proposed that explicitly separates domain-invariant features from domain-specific ones to ease representation learning, and achieves superior performance to various state-of-the-art architectures.
Learning Temporal Consistency for Low Light Video Enhancement from Single Images
TLDR
A novel method is proposed to enforce temporal stability in low-light video enhancement using only static images, by learning to infer a motion field from a single image and synthesizing short-range video sequences.
Optical Flow in the Dark
TLDR
This work develops a method to synthesize large-scale low-light optical flow datasets by simulating the noise model on dark raw images, and collects a new optical flow dataset in raw format spanning a large range of exposures to serve as a benchmark.
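The noise-simulation step can be pictured with the standard shot-plus-read (heteroscedastic Gaussian) raw noise model; the sketch below is a minimal stand-in, not the paper's exact calibration, and the function name and the lambda_shot/lambda_read values are illustrative, camera-dependent assumptions.

```python
import numpy as np

def add_raw_noise(clean, lambda_shot=0.01, lambda_read=0.0005, rng=None):
    """Add signal-dependent Gaussian noise to a normalized raw image:
    per-pixel variance = lambda_shot * signal + lambda_read (illustrative)."""
    rng = rng or np.random.default_rng()
    variance = lambda_shot * np.clip(clean, 0.0, 1.0) + lambda_read
    noisy = clean + rng.normal(0.0, np.sqrt(variance))
    return np.clip(noisy, 0.0, 1.0)
```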
Lighting the Darkness in the Deep Learning Era
TLDR
A comprehensive survey is provided, covering aspects ranging from algorithm taxonomy to unsolved open issues, together with a unified online platform that covers many popular LLIE methods, whose results can be produced through a user-friendly web interface.
Seeing Dynamic Scene in the Dark: A High-Quality Video Dataset with Mechatronic Alignment
TLDR
A new dataset is compiled that contains high-quality, spatially aligned video pairs of dynamic scenes in low- and normal-light conditions, captured using a mechatronic system that precisely controls the dynamics during video capture, and an end-to-end framework is proposed.
Low-Light Image and Video Enhancement Using Deep Learning: A Survey.
TLDR
This paper proposes a low-light image and video dataset, in which the images and videos are captured by different mobile phone cameras under diverse illumination conditions, and provides a unified online platform that covers many popular LLIE methods.
Abandoning the Bayer-Filter to See in the Dark
TLDR
This work presents a De-Bayer-Filter simulator based on deep neural networks that generates a monochrome raw image from the colored raw image, and a fully convolutional network is proposed to achieve low-light image enhancement by fusing the colored raw data with the synthesized monochrome data.
Low-light Image and Video Enhancement via Selective Manipulation of Chromaticity
TLDR
This work introduces "Adaptive Chromaticity", an adaptive computation of image chromaticity that avoids the costly decomposition of a low-light image into illumination and reflectance employed by many existing techniques.
Matching in the Dark: A Dataset for Matching Image Pairs of Low-light Scenes
TLDR
This paper considers matching images of low-light scenes, aiming to widen the frontier of SfM and visual SLAM applications, and experimentally evaluates combinations of eight image-enhancing methods and eleven image-matching methods consisting of classical/neural local descriptors and classical/neural initial point-matching methods.
LLISP: Low-Light Image Signal Processing Net via Two-Stage Network
TLDR
Experimental results demonstrate that the proposed method can reconstruct high-quality images from low-light raw data and replace the traditional ISP.
...

References

SHOWING 1-10 OF 47 REFERENCES
Learning to See in the Dark
TLDR
A pipeline for processing low-light images is developed, based on end-to-end training of a fully-convolutional network that operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data.
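As a concrete picture of that pipeline, the raw pre-processing can be sketched as: normalize the Bayer mosaic by black/white levels, pack it into four half-resolution channels, and scale by an exposure-amplification ratio before it enters the network. The constants below (black level 512, 14-bit white level, ratio 100) and the function name are illustrative assumptions; the fully convolutional network itself is elided.

```python
import numpy as np

def pack_and_amplify(bayer, black_level=512.0, white_level=16383.0, ratio=100.0):
    """Pack an RGGB Bayer mosaic (H, W) into 4 half-resolution channels
    and amplify toward the target exposure (all constants illustrative)."""
    norm = np.clip((bayer.astype(np.float32) - black_level)
                   / (white_level - black_level), 0.0, 1.0)
    packed = np.stack([norm[0::2, 0::2],   # R
                       norm[0::2, 1::2],   # G1
                       norm[1::2, 0::2],   # G2
                       norm[1::2, 1::2]],  # B
                      axis=-1)
    return np.clip(packed * ratio, 0.0, 1.0)  # network input
```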
Deep Burst Denoising
TLDR
This paper builds a novel multiframe architecture as a simple addition to any single-frame denoising model, designed to handle an arbitrary number of noisy input frames, and demonstrates that the DNN architecture generalizes well to image super-resolution.
DeepISP: Toward Learning an End-to-End Image Processing Pipeline
TLDR
The proposed solution achieves state-of-the-art performance in objective evaluation of peak signal-to-noise ratio on the subtask of joint denoising and demosaicking, and achieves better visual quality than the manufacturer's ISP.
LIME: Low-Light Image Enhancement via Illumination Map Estimation
TLDR
Experiments on a number of challenging low-light images are presented to reveal the efficacy of the proposed LIME and show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
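The core of LIME can be sketched in a few lines: take the per-pixel maximum over the RGB channels as the initial illumination map and divide it out. LIME's structure-aware refinement of the map is elided here; the gamma below is a crude, illustrative stand-in for it, and the function name is hypothetical.

```python
import numpy as np

def lime_enhance(img, gamma=0.8, eps=1e-3):
    """img: float RGB in [0, 1], shape (H, W, 3)."""
    t = np.clip(img.max(axis=-1, keepdims=True), eps, 1.0)  # initial map T
    t = t ** gamma                     # stand-in for LIME's refinement
    return np.clip(img / t, 0.0, 1.0)  # enhanced output R = I / T
```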
Adaptive enhancement and noise reduction in very low light-level video
TLDR
The present work has been inspired by research on vision in nocturnal animals, particularly the spatial and temporal visual summation that allows these animals to see in dim light.
MBLLEN: Low-Light Image/Video Enhancement Using CNNs
TLDR
The proposed multi-branch low-light enhancement network (MBLLEN) is found to outperform state-of-the-art techniques by a large margin and can be directly extended to handle low-light videos.
Unprocessing Images for Learned Raw Denoising
TLDR
This work presents a technique to “unprocess” images by inverting each step of an image processing pipeline, thereby allowing us to synthesize realistic raw sensor measurements from commonly available Internet photos.
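A minimal sketch of the unprocessing idea, inverting only three steps (sRGB gamma, white-balance gains, remosaicking to an RGGB pattern); the paper also inverts tone mapping and color correction, and the gains and function name below are illustrative assumptions.

```python
import numpy as np

def unprocess(srgb, gains=(2.0, 1.0, 1.7)):
    """srgb: float RGB in [0, 1], shape (H, W, 3) with even H and W."""
    linear = np.where(srgb <= 0.04045, srgb / 12.92,
                      ((srgb + 0.055) / 1.055) ** 2.4)  # inverse sRGB gamma
    raw_rgb = linear / np.asarray(gains)                # undo white balance
    bayer = np.empty(srgb.shape[:2], dtype=np.float32)  # remosaic to RGGB
    bayer[0::2, 0::2] = raw_rgb[0::2, 0::2, 0]  # R
    bayer[0::2, 1::2] = raw_rgb[0::2, 1::2, 1]  # G
    bayer[1::2, 0::2] = raw_rgb[1::2, 0::2, 1]  # G
    bayer[1::2, 1::2] = raw_rgb[1::2, 1::2, 2]  # B
    return bayer  # synthetic raw measurement
```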
Learning Blind Video Temporal Consistency
TLDR
An efficient approach based on a deep recurrent network is proposed for enforcing temporal consistency in a video; it can handle multiple and unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation, and intrinsic image decomposition.
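The objective behind such methods can be pictured as a flow-based temporal-consistency loss: penalize the distance between the current output and the flow-warped previous output wherever the warp is reliable. This is a schematic of the idea, not the paper's exact loss; the warped frame and visibility mask are assumed to come from an off-the-shelf optical-flow estimator, and the function name is hypothetical.

```python
import numpy as np

def temporal_consistency_loss(out_t, warped_out_prev, visibility):
    """All arrays share shape (H, W, C); visibility is 1 where the flow
    warp is valid (non-occluded) and 0 elsewhere."""
    masked_l1 = visibility * np.abs(out_t - warped_out_prev)
    return float(masked_l1.sum() / (visibility.sum() + 1e-8))
```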
Deep joint demosaicking and denoising
TLDR
A new data-driven approach for demosaicking and denoising is introduced: a deep neural network is trained on a large corpus of images instead of using hand-tuned filters, and this network and training procedure outperform the state of the art on both noisy and noise-free data.
RENOIR - A dataset for real low-light image noise reduction
...