Publications
HDR image reconstruction from a single exposure using deep CNNs
TLDR
This paper addresses the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure, and proposes a deep convolutional neural network (CNN) designed specifically to account for the challenges of predicting HDR values.
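A minimal sketch of the general idea, assuming a trained model is available: predict radiance for the saturated pixels and blend it with the linearized well-exposed content. The names `hdr_cnn`, the saturation `threshold`, and the assumed gamma are illustrative placeholders, not the paper's exact network or blending scheme.

```python
# Hedged sketch: blend CNN-predicted radiance into saturated areas of one LDR frame.
import numpy as np

def reconstruct_hdr(ldr, hdr_cnn, threshold=0.95, gamma=2.0):
    """ldr: (H, W, 3) values in [0, 1]; hdr_cnn: hypothetical trained model."""
    linear = np.power(ldr, gamma)                 # undo display gamma (assumed)
    predicted = hdr_cnn(ldr)                      # assumed log-domain radiance output
    # Soft mask: 0 where the pixel is well exposed, 1 where it is saturated.
    alpha = np.clip((np.max(ldr, axis=-1, keepdims=True) - threshold)
                    / (1.0 - threshold), 0.0, 1.0)
    return (1.0 - alpha) * linear + alpha * np.exp(predicted)
```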
Synscapes: A Photorealistic Synthetic Dataset for Street Scene Parsing
TLDR
By analyzing pre-trained, existing segmentation and detection models, it is illustrated how uncorrelated images along with a detailed set of annotations open up new avenues for analysis of computer vision systems, providing fine-grained information about how a model's performance changes according to factors such as distance, occlusion and relative object orientation.
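A hypothetical sketch of the kind of per-factor analysis such annotations enable: slicing detection results by annotated distance and occlusion and reporting recall per slice. The column names (`distance_m`, `occlusion`, `detected`) and bin edges are illustrative only, not from the dataset's schema.

```python
# Hedged sketch: per-factor recall from a table of per-object detection results.
import pandas as pd

def recall_by_factor(results: pd.DataFrame) -> pd.DataFrame:
    binned = results.assign(
        distance_bin=pd.cut(results["distance_m"], bins=[0, 10, 25, 50, 100, 250]),
        occlusion_bin=pd.cut(results["occlusion"], bins=[0.0, 0.25, 0.5, 0.75, 1.0]),
    )
    return (binned.groupby(["distance_bin", "occlusion_bin"], observed=True)["detected"]
                  .mean()
                  .rename("recall")
                  .reset_index())
```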
Performance relighting and reflectance transformation with time-multiplexed illumination
TLDR
The approach is to illuminate the subject with a sequence of time-multiplexed basis lighting conditions, and to record these conditions with a high-speed video camera so that many conditions are recorded in the span of the desired output frame interval.
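The core principle behind relighting from basis conditions is the linearity of light transport: a frame lit by an arbitrary environment is a weighted sum of the frames recorded under the individual basis lights. A minimal sketch, with illustrative variable names rather than the paper's pipeline:

```python
# Hedged sketch: relight by weighted combination of basis-lit frames.
import numpy as np

def relight(basis_frames: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """basis_frames: (n_lights, H, W, 3) images, one per basis lighting condition.
    weights: (n_lights, 3) per-light RGB intensities of the target environment."""
    return np.einsum("nhwc,nc->hwc", basis_frames, weights)
```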
BRDF models for accurate and efficient rendering of glossy surfaces
TLDR
Two new parametric models of the Bidirectional Reflectance Distribution Function (BRDF) are presented, one inspired by the Rayleigh-Rice theory for light scattering from optically smooth surfaces, and one inspired by micro-facet theory, to enable representation of types of surface scattering which previous parametric models have had trouble modeling accurately.
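As background for the micro-facet-inspired model, the standard micro-facet BRDF form combines a normal distribution term D, a shadowing-masking term G, and a Fresnel term F; the paper's two models are parameterized differently, so this is context only:

```latex
f_r(\omega_i, \omega_o) =
  \frac{D(\mathbf{h})\, G(\omega_i, \omega_o, \mathbf{h})\, F(\omega_i, \mathbf{h})}
       {4\, |\mathbf{n}\cdot\omega_i|\, |\mathbf{n}\cdot\omega_o|},
\qquad
\mathbf{h} = \frac{\omega_i + \omega_o}{\lVert \omega_i + \omega_o \rVert}
```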
Single-Frame Regularization for Temporally Stable CNNs
TLDR
This work poses temporal stability as a regularization of the cost function, formulated to account for different types of motion that can occur between frames, so that temporally stable CNNs can be trained without the need for video material or expensive motion estimation.
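A hedged sketch of how such a regularization term can be attached to a per-frame loss: penalize the difference between processing a transformed frame and transforming the processed frame, where the transform stands in for inter-frame motion. The helper `small_warp` and the weight `lam` are illustrative placeholders, not the paper's exact formulation.

```python
# Hedged sketch: single-frame transform-consistency regularization.
import torch

def regularized_loss(net, x, target, small_warp, lam=0.1):
    y = net(x)
    task = torch.nn.functional.l1_loss(y, target)
    # Stability term: net(warp(x)) should match warp(net(x)) for small motions.
    stability = torch.nn.functional.l1_loss(net(small_warp(x)), small_warp(y))
    return task + lam * stability
```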
Evaluation of Tone Mapping Operators for HDR-Video
TLDR
Eleven tone-mapping operators intended for video processing are analyzed and evaluated with camera-captured and computer-generated high-dynamic-range content to identify the operators that can be expected to perform better than the others and to assess the magnitude of differences between the best performing operators.
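For context on what is being compared (this is not one of the eleven evaluated operators), a minimal global tone curve in the style of Reinhard's operator, compressing HDR luminance to the displayable range:

```python
# Hedged sketch: simple global tone-mapping of HDR luminance.
import numpy as np

def tonemap_global(hdr_luminance: np.ndarray, key: float = 0.18) -> np.ndarray:
    log_mean = np.exp(np.mean(np.log(hdr_luminance + 1e-6)))   # log-average luminance
    scaled = key * hdr_luminance / log_mean                     # map to mid-grey key
    return scaled / (1.0 + scaled)                              # compress to [0, 1)
```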
Capturing and Rendering with Incident Light Fields
TLDR
This paper presents a process for capturing spatially and directionally varying illumination from a real-world scene and using this lighting to illuminate computer-generated objects via a custom shader within an existing global illumination rendering system.
A comparative review of tone‐mapping algorithms for high dynamic range video
TLDR
This report summarizes and categorizes the research in tone-mapping to date, distilling the most important trends and characteristics of the tone reproduction pipeline, with a specific focus on tone-mapping of HDR video and the problems this medium entails.