We enhance photographs shot in dark environments by combining a picture taken with the available light and one taken with the flash. We preserve the ambiance of the original lighting and insert the sharpness from the flash image. We use the bilateral filter to decompose the images into detail and large scale. We reconstruct the image using the large scale of the available lighting and the detail of the flash. …
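A minimal sketch of the large-scale/detail decomposition and recombination described above, using OpenCV's bilateral filter on luminance only; the function name, parameter values, and the ratio-based detail layer are illustrative assumptions, not the exact pipeline of the paper:

```python
import cv2
import numpy as np

def fuse_flash_ambient(ambient, flash, eps=1e-3):
    """Combine the large scale of the ambient image with the detail of the
    flash image, using a bilateral filter for the decomposition.
    `ambient` and `flash` are float32 BGR images in [0, 1]; parameter
    values are illustrative."""
    # Work on luminance only to keep the sketch short.
    amb = cv2.cvtColor(ambient, cv2.COLOR_BGR2GRAY)
    fls = cv2.cvtColor(flash, cv2.COLOR_BGR2GRAY)

    # Large scale = edge-preserving smoothing (bilateral filter).
    amb_large = cv2.bilateralFilter(amb, d=9, sigmaColor=0.1, sigmaSpace=16)
    fls_large = cv2.bilateralFilter(fls, d=9, sigmaColor=0.1, sigmaSpace=16)

    # Detail = ratio of the flash image to its own large-scale layer.
    fls_detail = (fls + eps) / (fls_large + eps)

    # Reconstruction: ambient large scale modulated by flash detail.
    return np.clip(amb_large * fls_detail, 0.0, 1.0)
```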
We present a GPU-based real-time rendering method that simulates high-quality depth-of-field effects, similar in quality to multiview accumulation methods. Most real-time approaches have difficulty obtaining good approximations of visibility and view-dependent shading because they use a single view image. Our method also avoids the multiple rendering of …
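The listing gives no formulas; as background only, here is the standard thin-lens circle-of-confusion computation that both multiview accumulation and single-image depth-of-field approximations start from. All names, default values, and the pixel conversion below are illustrative assumptions:

```python
import numpy as np

def coc_radius_px(depth, focus_dist, focal_len, aperture, px_per_m):
    """Circle-of-confusion radius (in pixels) of the standard thin-lens
    model for a point at `depth` metres when the lens is focused at
    `focus_dist` metres.  `aperture` is the lens diameter in metres;
    `px_per_m` converts sensor metres to pixels.  Illustrative only."""
    coc_m = aperture * focal_len * np.abs(depth - focus_dist) / (
        depth * (focus_dist - focal_len))
    return 0.5 * coc_m * px_per_m

# Example: 50 mm lens at f/2 (25 mm aperture), focused at 2 m.
depth = np.array([0.5, 1.0, 2.0, 4.0, 20.0])
print(coc_radius_px(depth, focus_dist=2.0, focal_len=0.05,
                    aperture=0.025, px_per_m=200_000))
```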
Figure 1: An example of an annotated photograph superposed on a synthetic panorama. Researchers at the Max-Planck-Institut für Informatik, in collaboration with Télécom ParisTech, have finalized their work on automatic alignment, which will be presented on the 21st of June at the prestigious CVPR conference. The goal of this method is to register arbitrary …
We propose a new approach to efficiently render large volumetric data sets. The system achieves interactive to real-time rendering performance for several billion voxels. Our solution is based on an adaptive data representation that depends on the current view and occlusion information, coupled with an efficient ray-casting rendering algorithm. One key element …
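As a point of reference for the ray-casting component only, a brute-force CPU ray marcher over a dense voxel grid with front-to-back compositing and early termination; the adaptive, view- and occlusion-dependent representation that lets the method scale to billions of voxels is deliberately not reproduced, and all names are illustrative:

```python
import numpy as np

def raycast_volume(volume, origin, direction, step=0.5, max_alpha=0.99):
    """Brute-force front-to-back ray marching through a dense voxel grid
    of densities in [0, 1].  CPU reference for the ray-casting component
    only; the adaptive GPU data structure is not reproduced here."""
    direction = direction / np.linalg.norm(direction)
    color, alpha, t = 0.0, 0.0, 0.0
    # March until the ray leaves the grid or becomes opaque (early termination).
    while alpha < max_alpha:
        p = origin + t * direction
        idx = np.floor(p).astype(int)
        if np.any(idx < 0) or np.any(idx >= np.array(volume.shape)):
            break
        density = volume[tuple(idx)]
        a = 1.0 - np.exp(-density * step)       # opacity of this segment
        color += (1.0 - alpha) * a * density    # emission ~ density (toy shading)
        alpha += (1.0 - alpha) * a
        t += step
    return color, alpha
```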
We present a novel rendering system for defocus blur and lens effects. It supports physically-based rendering and outperforms previous approaches by means of a novel GPU-based tracing method. Our solution achieves higher precision than competing real-time solutions, and our results are mostly indistinguishable from offline rendering. Our method is also more …
Figure 1: Real-time indirect illumination (25-70 fps on a GTX 480). We rely on voxel-based cone tracing to ensure efficient integration of two-bounce illumination and to support diffuse and glossy materials in complex scenes. (Right scene courtesy of G. …) Abstract: Indirect illumination is an important element for realistic image synthesis, but its computation is …
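A rough sketch of a single voxel cone trace as described above: march along the cone axis, pick the pre-filtered mip level whose voxel size matches the cone diameter, and composite front to back. Nearest-neighbour sampling, the step size, and all names are simplifying assumptions rather than the exact scheme of the paper:

```python
import numpy as np

def trace_cone(mips, base_voxel, origin, direction, half_angle,
               max_dist=100.0, max_alpha=0.95):
    """Single cone trace through a pre-filtered voxel mip chain.
    `mips[l]` holds (rgb, alpha) arrays at resolution / 2**l; `base_voxel`
    is the finest voxel size.  Illustrative sketch only."""
    direction = direction / np.linalg.norm(direction)
    color = np.zeros(3)
    alpha = 0.0
    t = base_voxel  # start one voxel away to reduce self-occlusion
    while t < max_dist and alpha < max_alpha:
        diameter = max(2.0 * t * np.tan(half_angle), base_voxel)
        # Mip level whose voxel size roughly matches the cone footprint.
        level = min(int(np.log2(diameter / base_voxel)), len(mips) - 1)
        rgb_vol, a_vol = mips[level]
        idx = tuple(np.clip((origin + t * direction) / (base_voxel * 2**level),
                            0, np.array(rgb_vol.shape[:3]) - 1).astype(int))
        c, a = rgb_vol[idx], a_vol[idx]
        # Front-to-back compositing of the pre-filtered samples.
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        t += 0.5 * diameter  # step proportional to the cone footprint
    return color, alpha
```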
Limited spatial resolution of current displays makes the depiction of very fine spatial details difficult. This work proposes a novel method, applied to moving images, that takes the human visual system into account and leads to an improved perception of such details. To this end, we display images rapidly varying over time along a given trajectory on a high refresh rate display …
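A toy 1-D sketch of the underlying principle, assuming the eye tracks content that moves a fraction of a display pixel per refresh: each subframe carries a different sub-pixel phase, and their temporal integration on the retina holds more spatial detail than any single frame. The actual method derives the subframes from a model of the human visual system; everything below is illustrative:

```python
import numpy as np

def make_subframes(high_res, n=3):
    """Split a 1-D signal of n times the display resolution into n
    display-resolution subframes, one per sub-pixel phase.  Illustrative
    only; plain phase sampling stands in for the perceptual optimization."""
    assert high_res.size % n == 0
    return [high_res[k::n] for k in range(n)]

def retinal_integration(subframes):
    """Simulate the tracking eye: successive subframes land at offsets of
    1/n pixel on the retina, so integrating them over time interleaves the
    phases into an n-times-finer signal (the 1/n brightness factor of
    temporal multiplexing is ignored here)."""
    n = len(subframes)
    out = np.empty(n * subframes[0].size)
    for k, sub in enumerate(subframes):
        out[k::n] = sub
    return out
```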
This paper introduces an accurate real-time soft shadow algorithm that uses sample-based visibility. First, we present a GPU-based alias-free hard shadow map algorithm that typically requires only a single render pass from the light, in contrast to approaches using depth peeling and one pass per layer. For closed objects, we also eliminate the need for a bias. …
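A brute-force CPU reference of sample-based visibility, assuming hypothetical inputs (world-space view samples, a point light, occluder triangles): each sample is ray-tested towards the light with Möller-Trumbore, so no shadow-map resolution, aliasing, or depth bias enters the test. This only illustrates the visibility definition, not the single-pass GPU algorithm of the paper:

```python
import numpy as np

def hard_shadow(samples, light_pos, triangles, eps=1e-6):
    """Exact per-sample hard-shadow test: `samples` is an (N, 3) array of
    world-space view samples, `triangles` an iterable of (v0, v1, v2)
    occluder vertices.  Returns a boolean 'lit' flag per sample."""
    lit = np.ones(len(samples), dtype=bool)
    for i, p in enumerate(samples):
        d = light_pos - p
        dist = np.linalg.norm(d)
        d = d / dist
        for v0, v1, v2 in triangles:
            # Moller-Trumbore ray/triangle intersection.
            e1, e2 = v1 - v0, v2 - v0
            h = np.cross(d, e2)
            a = np.dot(e1, h)
            if abs(a) < eps:
                continue                      # ray parallel to triangle
            s = p - v0
            u = np.dot(s, h) / a
            if u < 0.0 or u > 1.0:
                continue
            q = np.cross(s, e1)
            v = np.dot(d, q) / a
            if v < 0.0 or u + v > 1.0:
                continue
            t = np.dot(e2, q) / a
            if eps < t < dist - eps:          # occluder between sample and light
                lit[i] = False
                break
    return lit
```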
Pixel processing is becoming increasingly expensive for real-time applications due to the complexity of today's shaders and high-resolution framebuffers. However, most shading results are spatially or temporally coherent, which allows for sparse sampling and the reuse of neighboring pixel values. This paper proposes a simple framework for spatio-temporal …
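A small sketch of one common way to exploit this coherence, reverse reprojection, given only as an illustration of temporal reuse and not as the framework of the paper; the array layouts, nearest-neighbour fetch, and depth test are assumptions:

```python
import numpy as np

def temporal_reuse(prev_shading, prev_depth, motion, depth, depth_tol=1e-2):
    """Fetch last frame's shading along per-pixel motion vectors and keep it
    only where the reprojected depth agrees; other pixels are flagged for
    re-shading.  `prev_shading` is (H, W, 3), `motion` is (H, W, 2) in
    pixels, `prev_depth`/`depth` are (H, W).  Illustrative sketch only."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Pixel position in the previous frame (nearest-neighbour fetch).
    px = np.clip(np.round(xs - motion[..., 0]).astype(int), 0, w - 1)
    py = np.clip(np.round(ys - motion[..., 1]).astype(int), 0, h - 1)
    reused = prev_shading[py, px]
    # Accept the cached value only where it comes from the same surface.
    valid = np.abs(prev_depth[py, px] - depth) < depth_tol
    needs_reshading = ~valid
    return np.where(valid[..., None], reused, 0.0), needs_reshading
```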