Christian Hofsetz

Image-based rendering (IBR) involves constructing an image from a new viewpoint, using several input images from different viewpoints. Our approach is to acquire or estimate the depth of each pixel of each input image. We then reconstruct the new view from the resulting collection of 3D points. When rendering images from photographs, acquiring and …
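The per-pixel depth step described above can be sketched in a few lines. This is a minimal numpy illustration, not the paper's pipeline: it assumes a pinhole camera with known intrinsics `K` and a camera-to-world transform, and lifts every depth pixel to a world-space 3D point.

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """Lift every pixel of a depth image to a world-space 3D point.

    depth        -- (H, W) array of per-pixel depths along the camera z-axis
    K            -- (3, 3) pinhole intrinsics (assumed known)
    cam_to_world -- (4, 4) camera-to-world rigid transform
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))        # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)      # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                       # camera-space rays
    pts_cam = rays * depth[..., None]                     # scale rays by depth
    pts_h = np.concatenate([pts_cam, np.ones((H, W, 1))], axis=-1)
    return (pts_h @ cam_to_world.T)[..., :3]              # world-space points
```

Running this over every calibrated input image yields the collection of 3D points from which the new view is reconstructed.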
This paper presents work in progress. The objective of our study is to derive an optimal design for high-performance rendering of irregular-grid volume data on the increasingly popular distributed shared-memory parallel supercomputers. We experiment with a multi-threaded volume rendering algorithm for three-dimensional unstructured-grid data and discuss …
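The multi-threaded structure can be illustrated schematically: rays are independent, so per-ray emission-absorption compositing partitions cleanly across a thread pool. This sketch assumes each ray's samples have already been depth-sorted; the unstructured-cell traversal that produces them is elided.

```python
from concurrent.futures import ThreadPoolExecutor

def composite_ray(ray):
    """Front-to-back emission-absorption compositing of one ray's
    depth-sorted (colors, alphas) sample lists."""
    colors, alphas = ray
    out, transmittance = 0.0, 1.0
    for c, a in zip(colors, alphas):
        out += transmittance * a * c
        transmittance *= 1.0 - a
        if transmittance < 1e-4:          # early ray termination
            break
    return out

def render(rays, n_threads=4):
    """Composite all rays in parallel across a thread pool."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(composite_ray, rays))
```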
We present a system for rendering novel viewpoints from a set of calibrated and silhouette-segmented images using the visual hull together with multi-view stereo. The visual hull predicted from the object silhouettes is used to restrict the search range of the multi-view stereo. This reduces redundant computation and the possibility of incorrect matches. …
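The search-range restriction can be sketched as follows. This is an assumption-laden illustration, not the paper's implementation: `project` is a hypothetical helper mapping a world point to a pixel in a given camera, and candidate depths are kept only where the 3D point projects inside every silhouette, i.e. inside the visual hull.

```python
import numpy as np

def hull_depth_range(project, silhouettes, ray_origin, ray_dir, depths):
    """Clip a per-pixel depth search range to the visual hull.

    project(cam, p) -- hypothetical helper: pixel (u, v) of world point p
    silhouettes     -- list of (cam, mask) pairs; mask[v, u] is True inside
    Returns the candidate depths whose 3D point lies inside all silhouettes.
    """
    keep = []
    for d in depths:
        p = ray_origin + d * ray_dir
        inside = True
        for cam, mask in silhouettes:
            u, v = project(cam, p)
            if not (0 <= v < mask.shape[0] and 0 <= u < mask.shape[1]
                    and mask[int(v), int(u)]):
                inside = False
                break
        if inside:
            keep.append(d)
    return keep
```

The multi-view stereo matcher then searches only the returned depths, skipping points that cannot lie on the object.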
Images synthesized by light field rendering exhibit aliasing artifacts when the light field is undersampled; adding new light field samples improves the image quality and reduces aliasing, but new samples are expensive to acquire. Light field rays are traditionally gathered directly from the source images, but new rays can also be inferred through geometry …
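Gathering a ray from a discretized light field amounts to interpolating among its nearest stored samples; with too few samples this interpolation is exactly where aliasing appears. A minimal sketch, assuming a 4-D array `L[u, v, s, t]` of scalar radiance samples:

```python
import numpy as np

def sample_light_field(L, u, v, s, t):
    """Quadrilinearly interpolate a discretized 4-D light field L[u, v, s, t]
    at fractional plane coordinates (the 4-D analogue of bilinear lookup)."""
    idx = [u, v, s, t]
    lo = [int(np.floor(x)) for x in idx]
    frac = [x - l for x, l in zip(idx, lo)]
    out = 0.0
    for corner in range(16):                  # 2^4 corners of the 4-D cell
        w, pos = 1.0, []
        for axis in range(4):
            bit = (corner >> axis) & 1
            w *= frac[axis] if bit else 1.0 - frac[axis]
            pos.append(min(lo[axis] + bit, L.shape[axis] - 1))
        out += w * L[tuple(pos)]
    return out
```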
This paper presents a novel 3D painting system that allows interactive painting on normal maps. The process of creating a highly detailed model and later extracting normal maps is slow and prone to artifacts. We propose an interactive framework where the user paints directly on normal maps while visualizing the results as they would appear in the target …
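Painting directly on a normal map can be reduced, at its core, to blending a brush normal into a texel and renormalizing, since normals must remain unit length. A minimal sketch of that core step (not the paper's system):

```python
import numpy as np

def paint_normal(normal_map, y, x, brush_normal, strength):
    """Blend a brush normal into one texel of a float normal map,
    then renormalize so the stored normal stays unit length."""
    n = ((1.0 - strength) * normal_map[y, x]
         + strength * np.asarray(brush_normal, float))
    normal_map[y, x] = n / np.linalg.norm(n)
    return normal_map
```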
Image-Based Rendering is an exciting field that lies between Computer Graphics and Computer Vision. We believe that the more we use knowledge from Computer Vision in our graphics rendering algorithms, the better our final rendered images will be. This dissertation presents a framework to identify what information from computer vision is …
In this paper, we introduce the concept of hyperlines in light fields. When represented by the two-plane parameterization, hyperlines are 2-Degree-of-Freedom (2-DOF) linear entities in the 4-D light field. The light field can be thought of as a dual space of the world space. In this dual space, cameras appear as hyperlines with heterogeneous colors, which we …
We propose to look at light fields from a dual-space point of view. The advantage, in addition to revealing some new insights, is a framework that combines the benefits of many existing works. Using the well-known two-plane parameterization, we derive the duality between the 4-D light field and the 3-D world space. In the dual light field, rays become hyper…
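The two-plane parameterization underlying both abstracts above can be made concrete: a ray is identified by its intersections (u, v) and (s, t) with two parallel planes, and all rays through a fixed 3-D point satisfy an affine relation between (u, v) and (s, t), i.e. a 2-DOF linear entity in the 4-D light field. A sketch assuming the planes z = 0 and z = 1:

```python
import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Two-plane parameterization: intersect a ray with the planes z = z_uv
    and z = z_st to obtain its 4-D light-field coordinates (u, v, s, t)."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    u, v = (o + (z_uv - o[2]) / d[2] * d)[:2]
    s, t = (o + (z_st - o[2]) / d[2] * d)[:2]
    return u, v, s, t

def point_hyperline(p, z_uv=0.0, z_st=1.0):
    """All rays through a 3-D point p form a 2-DOF linear entity in
    (u, v, s, t): for any (u, v), the matching (s, t) follows affinely.
    Returns (A, b) with (s, t) = A @ (u, v) + b (similar triangles)."""
    a = (p[2] - z_st) / (p[2] - z_uv)
    A = a * np.eye(2)
    b = (1.0 - a) * np.asarray(p[:2], float)
    return A, b
```

Any ray through `p` then lands on this affine subspace, which is the 2-DOF structure the papers call a hyperline.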