Elmar Eisemann

We enhance photographs shot in dark environments by combining a picture taken with the available light and one taken with the flash. We preserve the ambiance of the original lighting and insert the sharpness from the flash image. We use the bilateral filter to decompose the images into detail and large-scale layers. We reconstruct the image using the large scale…
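As an illustration, the decomposition described above can be sketched with a naive NumPy bilateral filter and a division-based detail layer. This is a minimal sketch under assumed parameters, not the paper's implementation; all function names and defaults here are illustrative.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Naive O(n * r^2) bilateral filter on a 2-D grayscale image.

    Each output pixel is a weighted average of its neighborhood, where the
    weight combines spatial distance and intensity (range) difference, so
    edges are preserved while texture is smoothed away.
    """
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    pad = np.pad(img, radius, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-((patch - img[y, x])**2) / (2.0 * sigma_r**2))
            weights = spatial * range_w
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out

def fuse_flash_no_flash(ambient, flash, eps=0.02):
    """Large scale (ambiance) from the ambient image, detail from the flash.

    The detail layer is the flash image divided by its own large-scale
    version; multiplying it back onto the ambient large scale transfers
    flash sharpness while keeping the original lighting.
    """
    large_ambient = bilateral_filter(ambient)
    large_flash = bilateral_filter(flash)
    detail = (flash + eps) / (large_flash + eps)
    return large_ambient * detail
```

The `eps` term guards against division by zero in dark flash regions; its value is a plausible choice, not taken from the paper.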
We propose a new approach to efficiently render large volumetric data sets. The system achieves interactive to real-time rendering performance for several billion voxels. Our solution is based on an adaptive data representation depending on the current view and occlusion information, coupled with an efficient ray-casting rendering algorithm. One key element…
The core of our approach is built upon a pre-filtered hierarchical voxel version of the scene geometry. For efficiency, this representation is stored in the form of a compact pointer-based sparse voxel octree in the spirit of [Crassin et al. 2009]. We use small 3³ bricks with values located in octree-node corners. This structure exhibits an almost…
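The pointer-based descent through such an octree can be sketched as a toy CPU version. This is a hypothetical illustration: the real structure stores compact child pointers and 3³ value bricks in GPU memory, whereas here a leaf simply holds a placeholder payload.

```python
class Node:
    """Toy sparse-octree node: either 8 child pointers or a leaf value."""
    def __init__(self, value=None, children=None):
        self.value = value        # leaf payload (stands in for a voxel brick)
        self.children = children  # list of 8 Nodes, or None for a leaf

def lookup(node, x, y, z):
    """Descend to the leaf containing the point (x, y, z) in [0, 1)^3.

    At each level, pick the child octant by comparing each coordinate
    against 0.5, then rescale the coordinates into that child's local frame.
    """
    while node.children is not None:
        cx, cy, cz = int(x >= 0.5), int(y >= 0.5), int(z >= 0.5)
        node = node.children[cx + 2 * cy + 4 * cz]
        x, y, z = 2 * x - cx, 2 * y - cy, 2 * z - cz
    return node.value
```

Because empty octants can point to a shared null/leaf node, the tree stays sparse; a GPU ray caster performs this same descent per sample along each ray.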
Binocular disparity is an important cue for the human visual system to recognize spatial layout, both in reality and in simulated virtual worlds. This paper introduces a perceptual model of disparity for computer graphics that is used to define a metric to compare a stereo image to an alternative stereo image and to estimate the magnitude of the perceived…
The researchers of the Max-Planck-Institut für Informatik, in collaboration with Télécom ParisTech, finalized their work on automatic alignment, which will be presented on the 21st of June at the prestigious CVPR conference. The goal of this method is to register arbitrary mountain pictures and movies from the internet into a Google-Earth-like 3D model. Such…
This paper introduces an accurate real-time soft shadow algorithm that uses sample-based visibility. First, we present a GPU-based alias-free hard shadow map algorithm that typically requires only a single render pass from the light, in contrast to using depth peeling and one pass per layer. For closed objects, we also eliminate the need for a bias. The…
Limited spatial resolution of current displays makes the depiction of very fine spatial details difficult. This work proposes a novel method applied to moving images that takes into account the human visual system and leads to an improved perception of such details. To this end, we display images rapidly varying over time along a given trajectory on a high…
This sketch paper presents an overview of "Fast Scene Voxelization and Applications", published at the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games. It introduces slicemaps, a GPU-friendly voxel representation of a scene. This voxelization is done at run time in the order of milliseconds, even for complex and dynamic…
In this paper, we present a technique to voxelize the interior of watertight 3D models into high-resolution grids in real time, in a single rendering pass. Further, we develop a filtering algorithm that builds a density estimate from which normals can be deduced for the voxelized model. This is achieved via a dense packing of information…
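The core bitmask trick behind single-pass solid voxelization can be sketched on the CPU. In this hedged sketch, each pixel column stores one bit per depth slice, and every fragment flips all bits at or behind its own slice; for a watertight model, parity leaves exactly the interior bits set. On the GPU the mask would live in the channels of a render target and the flip would be done with XOR blending; the names and slice count below are illustrative.

```python
N_SLICES = 32  # depth slices per pixel column (one bit each)

def slice_index(depth, near=0.0, far=1.0):
    """Quantize a depth in [near, far) to a slice index in [0, N_SLICES)."""
    s = int((depth - near) / (far - near) * N_SLICES)
    return min(max(s, 0), N_SLICES - 1)

def solid_voxelize_column(fragment_depths):
    """Emulate XOR blending for one pixel column.

    Each fragment XORs in a mask of every slice bit from its own slice to
    the far plane. An entry/exit fragment pair therefore cancels beyond the
    exit point, leaving only the slices between them (the interior) set.
    """
    full = (1 << N_SLICES) - 1
    mask = 0
    for d in fragment_depths:
        mask ^= full & ~((1 << slice_index(d)) - 1)
    return mask
```

With an entry fragment at depth 0.1 and an exit at 0.5, only slices 3 through 15 remain set; an odd fragment count (a non-watertight column) leaves bits set out to the far plane, which is why the method assumes closed models.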
We present a novel rendering system for defocus blur and lens effects. It supports physically-based rendering and outperforms previous approaches by means of a novel GPU-based tracing method. Our solution achieves more precision than competing real-time solutions, and our results are mostly indistinguishable from offline rendering. Our method is also more…