Corpus ID: 202719239

Shadow Transfer: Single Image Relighting For Urban Road Scenes

@article{Carlson2019ShadowTS,
  title={Shadow Transfer: Single Image Relighting For Urban Road Scenes},
  author={Alexandra Carlson and Ram Vasudevan and Matthew Johnson-Roberson},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.10363}
}
Illumination effects in images, specifically cast shadows and shading, have been shown to decrease the performance of deep neural networks on a large number of vision-based detection, recognition and segmentation tasks in urban driving scenes. A key factor that contributes to this performance gap is the lack of `time-of-day' diversity within real, labeled datasets. There have been impressive advances in the realm of image-to-image translation in transferring previously unseen visual effects…
Deep Photo Relighting by Integrating Both 2D and 3D Lighting Information
In this paper, we propose a novel framework called “deep photo relighting” (DPR) that can transform the lighting condition of an image for virtual testing of image detection/classification algorithms.
DSRN: an Efficient Deep Network for Image Relighting
This paper proposes an efficient, real-time framework, the Deep Stacked Relighting Network (DSRN), for image relighting by utilizing aggregated features from the input image at different scales, and shows that if images illuminated from opposite directions are used as input, the qualitative results improve over using a single input image.
2D Image Relighting with Image-to-Image Translation
This work provides an attempt to solve the ill-posed problem of changing the position of the light source in a scene using GANs, and provides, as a tool, a simple CNN trained to identify the direction of the light source in an image.

References

Showing 1–10 of 41 references
Deep image-based relighting from optimal sparse samples
This work presents an image-based relighting method that can synthesize scene appearance under novel, distant illumination from the visible hemisphere, from only five images captured under pre-defined directional lights, and demonstrates, on both synthetic and real scenes, that this method is able to reproduce complex, high-frequency lighting effects like specularities and cast shadows.
On the Impact of Illumination-Invariant Image Pre-transformation for Contemporary Automotive Semantic Scene Understanding
This paper presents an evaluation of illumination-invariant image transforms applied to this application domain and proposes a robust approach based on using an illumination-invariant image representation, combined with the chromatic component of a perceptual colour space, to improve contemporary automotive scene understanding and segmentation.
Multi-view relighting using a geometry-aware network
This work proposes the first learning-based algorithm that can relight images in a plausible and controllable manner given multiple views of an outdoor scene, using a geometry-aware neural network that utilizes multiple geometry cues and source and target shadow masks computed from a noisy proxy geometry obtained by multi-view stereo.
Illumination-Aware Multi-Task GANs for Foreground Segmentation
A triple multi-task generative adversarial network (TMT-GAN) is presented that accurately extracts the foreground even in exceptionally dark or bright scenes and under continuously varying illumination in a video sequence, effectively modeling the semantic relationship between dark and bright images and performing binary segmentation end-to-end.
Geometric Image Synthesis
This work proposes a trainable, geometry-aware image generation method that leverages various types of scene information, including geometry and segmentation, to create realistic-looking natural images that match the desired scene structure.
DeshadowNet: A Multi-context Embedding Deep Network for Shadow Removal
An automatic, end-to-end deep neural network (DeshadowNet) is proposed to tackle shadow removal in a unified manner; experiments show that the proposed method performs favorably against several state-of-the-art methods.
Distraction-Aware Shadow Detection
Experimental results demonstrate that the proposed Distraction-aware Shadow Detection Network can boost shadow detection performance, by effectively suppressing the detection of false positives and false negatives, achieving state-of-the-art results.
The Cityscapes Dataset for Semantic Urban Scene Understanding
This work introduces Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling, and exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity.
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
This work considers image transformation problems and proposes the use of perceptual loss functions for training feed-forward networks for image transformation tasks, showing results on image style transfer, where a feed-forward network is trained to solve, in real time, the optimization problem proposed by Gatys et al.
Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal
This paper presents a multi-task perspective, not embraced by any existing work, to jointly learn shadow detection and shadow removal in an end-to-end fashion, so that the two tasks mutually benefit each other.