Modelling the Scene Dependent Imaging in Cameras with a Deep Neural Network

  • Seonghyeon Nam, Seon Joo Kim
  • Published 26 July 2017
  • Computer Science
  • 2017 IEEE International Conference on Computer Vision (ICCV)
We present a novel deep learning framework that models the scene-dependent image processing inside cameras. Often called radiometric calibration, the process of recovering RAW images from processed images (JPEG format in the sRGB color space) is essential for many computer vision tasks that rely on physically accurate radiance values. All previous works rely on a deterministic imaging model in which the color transformation stays the same regardless of the scene, and thus they can only be… 
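The deterministic model referenced above treats the in-camera pipeline as a fixed color transform followed by a global tone curve, so unprocessing is a closed-form inversion. A minimal sketch of that baseline, assuming the standard sRGB tone curve and a made-up 3×3 color-correction matrix (a real one would come from calibrating a specific camera):

```python
import numpy as np

# Hypothetical 3x3 color-correction matrix (RAW -> linearized sRGB).
# Illustrative values only; real matrices come from radiometric calibration.
CCM = np.array([[ 1.8, -0.5, -0.3],
                [-0.4,  1.6, -0.2],
                [-0.1, -0.6,  1.7]])

def srgb_to_linear(s):
    """Invert the standard sRGB tone curve (the 'response function')."""
    s = np.asarray(s, dtype=np.float64)
    return np.where(s <= 0.04045, s / 12.92, ((s + 0.055) / 1.055) ** 2.4)

def unprocess(srgb):
    """Deterministic inverse pipeline: undo the tone curve, then the color
    transform. Scene-dependent processing (this paper's focus) is NOT modelled."""
    lin = srgb_to_linear(srgb)          # undo the nonlinear response
    raw = lin @ np.linalg.inv(CCM).T    # undo the 3x3 color transform
    return np.clip(raw, 0.0, 1.0)

pixels = np.array([[0.5, 0.4, 0.3]])
print(unprocess(pixels).shape)  # (1, 3)
```

Because this inversion is scene-independent, it cannot capture the adaptive, scene-dependent rendering the paper sets out to model with a neural network.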

Figures and Tables from this paper

Citations
Spatially Aware Metadata for Raw Reconstruction
This work advocates a spatially aware metadata-based raw reconstruction method that is robust to local tone mapping, and yields significantly higher raw reconstruction accuracy (6 dB average PSNR improvement) compared to existing raw reconstruction methods.
CURL: Neural Curve Layers for Global Image Enhancement
A novel approach to adjust global image properties such as colour, saturation, and luminance using human-interpretable image enhancement curves, inspired by the Photoshop curves tool, which produces state-of-the-art image quality versus recently proposed deep learning approaches in both objective and perceptual metrics.
Learning sRGB-to-Raw-RGB De-rendering with Content-Aware Metadata
The experiments show that the learned sampling can adapt to the image content to produce better raw reconstructions than existing methods, and an online fine-tuning strategy for the reconstruction network is described to improve results further.
Mimicking the In-Camera Color Pipeline for Camera-Aware Object Compositing
A dual-learning approach in which the reverse color transformation (from the photo to the scene) is jointly estimated; learning the reverse transformation facilitates learning of the forward mapping by enforcing cycle consistency between the two processes.
Color Temperature Tuning: Allowing Accurate Post-Capture White-Balance Editing
This work proposes an imaging framework that renders a small number of “tiny versions” of the original image, each with a different WB color temperature, and can significantly outperform existing solutions targeting post-capture WB editing.
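In linear camera space, post-capture WB editing of this kind reduces to per-channel gain scaling (a von Kries-style correction). A minimal illustrative sketch; the illuminant estimate here is invented for demonstration:

```python
import numpy as np

def apply_white_balance(linear_rgb, illuminant_rgb):
    """Von Kries-style correction: divide each channel by the estimated
    illuminant so that the illuminant itself maps to neutral gray."""
    gains = 1.0 / np.asarray(illuminant_rgb, dtype=np.float64)
    gains /= gains[1]  # normalize so the green channel is unchanged
    return np.asarray(linear_rgb, dtype=np.float64) * gains

# A warm (tungsten-like) illuminant estimate; values are illustrative only.
balanced = apply_white_balance([[0.8, 0.5, 0.3]], [0.8, 0.5, 0.3])
print(balanced)  # the illuminant pixel becomes neutral: [[0.5 0.5 0.5]]
```

Rendering several small proxies at different color temperatures, as the paper above does, amounts to precomputing this correction for a few candidate illuminants.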
Learning Raw Image Reconstruction-Aware Deep Image Compressors
This paper examines the ability of deep image compressors to be “aware” of the additional objective of raw reconstruction and describes a general framework that enables deep networks targeting image compression to jointly consider both image fidelity errors and raw reconstruction errors.
Deep Metric Color Embeddings for Splicing Localization in Severely Degraded Images
This work explores an alternative approach to splicing detection that is potentially better suited for images in the wild, subject to strong compression and downsampling, and proposes a deep metric space that is sensitive to illumination color and camera white-point estimation but insensitive to variations in object color.
Learning Image-Adaptive 3D Lookup Tables for High Performance Photo Enhancement in Real-Time
This paper proposes to learn 3D LUTs from annotated data using pairwise or unpaired learning, and learns an image-adaptive 3D LUT for flexible photo enhancement, outperforming state-of-the-art photo enhancement methods by a large margin.
CIE XYZ Net: Unprocessing Images for Low-Level Computer Vision Tasks
A deep learning framework that can unprocess a nonlinear image back to the canonical CIE XYZ image, which can then be processed by any low-level computer vision operator.
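For context, the canonical space such a network targets is defined by a fixed standard: the linear-sRGB to CIE XYZ step is the IEC 61966-2-1 (D65) matrix. A short sketch of that standards-defined forward transform:

```python
import numpy as np

# Standard sRGB (D65) linear-RGB -> CIE XYZ matrix, per IEC 61966-2-1.
M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(srgb):
    """Linearize sRGB with the standard tone curve, then apply the
    fixed primaries matrix to reach CIE XYZ."""
    s = np.asarray(srgb, dtype=np.float64)
    lin = np.where(s <= 0.04045, s / 12.92, ((s + 0.055) / 1.055) ** 2.4)
    return lin @ M_SRGB_TO_XYZ.T

# White (1,1,1) maps to the D65 white point (~0.9505, 1.0000, 1.0890).
print(srgb_to_xyz([1.0, 1.0, 1.0]))
```

Everything before this fixed transform (demosaicing, white balance, rendering) is camera-specific, which is what the learned unprocessing network has to absorb.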
Neural Cameras: Learning Camera Characteristics for Coherent Mixed Reality Rendering
This work introduces Neural Cameras, the first approach that jointly simulates all major components of an arbitrary modern camera using neural networks, and allows for adding new cameras to the framework by learning the visual properties from a database of images that has been captured using the physical camera.

References

An Empirical Camera Model for Internet Color Vision
This paper analyzes the factors that contribute to the color output of a typical camera and explores the use of parametric models for relating these output colors to meaningful scene properties.
Deep Joint Image Filtering
This paper proposes a learning-based approach to construct a joint filter based on Convolutional Neural Networks that can selectively transfer salient structures that are consistent in both guidance and target images and validate the effectiveness of the proposed joint filter through extensive comparisons with state-of-the-art methods.
A New In-Camera Imaging Model for Color Computer Vision and Its Application
A major limitation of the imaging model employed in conventional radiometric calibration methods is identified, and a new in-camera imaging model is proposed that fits well with today's cameras and is significantly more accurate than existing methods.
Robust Radiometric Calibration and Vignetting Correction
  • S. Kim, M. Pollefeys
  • Mathematics, Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2008
A full radiometric calibration algorithm that includes robust estimation of the radiometric response function, exposures, and vignetting is proposed and verified with both synthetic and real data, which shows significant improvement compared to existing methods.
Intrinsic images in the wild
This paper introduces Intrinsic Images in the Wild, a large-scale, public dataset for evaluating intrinsic image decompositions of indoor scenes, and develops a dense CRF-based intrinsic image algorithm for images in the wild that outperforms a range of state-of-the-art intrinsic image algorithms.
Learning a Deep Convolutional Network for Image Super-Resolution
This work proposes a deep learning method for single image super-resolution (SR) that directly learns an end-to-end mapping between the low/high-resolution images and shows that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network.
Approaching the computational color constancy as a classification problem through deep learning
Natural Image Denoising with Convolutional Networks
An approach to low-level vision is presented that combines the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models to avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference.
Automatic Photo Adjustment Using Deep Neural Networks
This article formulates automatic photo adjustment in a manner suitable for deep neural networks and introduces an image descriptor accounting for the local semantics of an image, so that the model can learn local adjustments that depend on image semantics.
Do It Yourself Hyperspectral Imaging with Everyday Digital Cameras
This paper introduces an algorithm that is able to combine and convert different RGB measurements into a single hyperspectral image for both indoor and outdoor scenes by exploiting the different spectral sensitivities of different camera sensors.