Realism in computer-generated images requires accurate input models for lighting, textures, and BRDFs. One of the best ways to obtain high-quality data is to measure scene attributes from real photographs via inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One …
We present a theoretical analysis of the relationship between incoming radiance and irradiance. Specifically, we address the question of whether it is possible to compute the incident radiance from knowledge of the irradiance at all surface orientations. This is a fundamental question in computer vision and inverse radiative transfer. We show that the …
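For reference, the relationship in question has a standard spherical-harmonic formulation (conventional notation, not necessarily the paper's): irradiance is the spherical convolution of the incident radiance with the clamped cosine, which acts diagonally on the harmonic coefficients.

```latex
% Irradiance at surface orientation n as a convolution of the distant radiance L
% with the clamped cosine kernel; in spherical harmonics the convolution is diagonal.
E(\mathbf{n}) = \int_{\Omega(\mathbf{n})} L(\boldsymbol{\omega})\,(\mathbf{n}\cdot\boldsymbol{\omega})\, d\boldsymbol{\omega}
\quad\Longrightarrow\quad
E_{lm} = A_l\, L_{lm},
\qquad A_0 = \pi,\quad A_1 = \tfrac{2\pi}{3},\quad A_l = 0 \ \text{for odd } l > 1 .
```

Because A_l vanishes for odd l > 1, those radiance modes leave no trace in the irradiance, which is the formal obstacle to recovering incident radiance from irradiance alone.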
We present a method, based on pre-computed light transport, for real-time rendering of objects under all-frequency, time-varying illumination represented as a high-resolution environment map. Current techniques are limited to small area lights, with sharp shadows, or large low-frequency lights, with very soft shadows. Our main contribution is to approximate …
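For a sense of the structure such precomputed-transport methods share, here is a toy sketch of relighting as a matrix-vector product with a sparsely approximated lighting vector; the transport matrix `T`, the coefficient basis, and the keep-the-largest-terms truncation are placeholders for this sketch, not the paper's wavelet machinery.

```python
import numpy as np

# Toy precomputed-light-transport relighting: each output pixel is the dot
# product of a precomputed transport row with the lighting coefficients.
# The basis and the nonlinear (keep-the-largest-terms) truncation below are
# placeholders, not the wavelet formulation used in the paper.

def approximate_lighting(coeffs, budget):
    """Keep only the `budget` largest-magnitude lighting coefficients."""
    out = np.zeros_like(coeffs)
    keep = np.argsort(np.abs(coeffs))[-budget:]
    out[keep] = coeffs[keep]
    return out

def relight(T, light_coeffs, budget=100):
    """Relit image = (num_pixels x num_coeffs) transport times sparse lighting."""
    return T @ approximate_lighting(light_coeffs, budget)
```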
We consider the rendering of diffuse objects under distant illumination, as specified by an environment map. Using an analytic expression for the irradiance in terms of spherical harmonic coefficients of the lighting, we show that one needs to compute and use only 9 coefficients, corresponding to the lowest-frequency modes of the illumination, in order to …
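As a concrete illustration of how few terms are involved, here is a minimal sketch that evaluates diffuse irradiance from nine lighting coefficients using the standard real spherical-harmonic constants; the coefficient ordering and the assumption that the nine values were prefiltered from an environment map are conventions of this sketch, not taken from the paper.

```python
import numpy as np

# Real spherical-harmonic basis up to l = 2 at a unit normal (x, y, z).
# L is assumed to hold the 9 lighting coefficients in the order
# (L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22).
def sh_basis(n):
    x, y, z = n
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ])

# Clamped-cosine convolution factors A_l for l = 0, 1, 2 (pi, 2*pi/3, pi/4),
# repeated once per basis function of that band.
A = np.array([np.pi] + [2.0 * np.pi / 3.0] * 3 + [np.pi / 4.0] * 5)

def irradiance(n, L):
    """Diffuse irradiance at unit normal n from the 9 lighting coefficients L."""
    return (A * sh_basis(n)) @ L
```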
Research Interests: My research focuses on finding principled representations and efficient algorithms for computer graphics that operate well across a wide range of visual scales.
Professional Experience: Intern, worked with Dr. Hugues Hoppe and Matt Uyttendaele to develop new methods for interscale image interpolation. Intern, collaborated with …
Range scanning, manual 3D editing, and other modeling approaches can provide information about the geometry of surfaces in the form of either 3D positions (e.g., triangle meshes or range images) or orientations (normal maps or bump maps). We present an algorithm that combines these two kinds of estimates to produce a new surface that approximates both. Our …
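A minimal one-dimensional sketch of this kind of fusion: combine a noisy measured depth profile with slopes derived from measured normals by solving a single linear least-squares problem. The variable names and the single trade-off weight `lambda_pos` are illustrative, not the paper's formulation.

```python
import numpy as np

def fuse_depth_and_slopes(z_meas, slopes, lambda_pos=0.1):
    """Fuse n measured depths with n-1 slope estimates (e.g., from normals).

    Minimizes lambda_pos * ||z - z_meas||^2 + (1 - lambda_pos) * ||D z - slopes||^2,
    where D is the forward-difference operator.
    """
    n = len(z_meas)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = np.vstack([np.sqrt(lambda_pos) * np.eye(n),
                   np.sqrt(1.0 - lambda_pos) * D])
    b = np.concatenate([np.sqrt(lambda_pos) * np.asarray(z_meas),
                        np.sqrt(1.0 - lambda_pos) * np.asarray(slopes)])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z
```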
We theoretically analyze the subspace that best approximates images of a convex Lambertian object taken from the same viewpoint but under different distant illumination conditions. Since the lighting is an arbitrary function, the space of all possible images is formally infinite-dimensional. However, previous empirical work has shown that images of largely …
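For context, under the conventional Lambertian spherical-harmonic model (standard notation, not necessarily the paper's), every such image is a linear combination of fixed basis images determined by the object's normals, which is what makes a low-dimensional subspace plausible.

```latex
% Image intensity at pixel p with surface normal n_p under distant lighting
% with harmonic coefficients L_{lm}:
I(p) \;=\; \sum_{l \ge 0} \sum_{m=-l}^{l} A_l\, L_{lm}\, Y_{lm}(\mathbf{n}_p),
\qquad A_l \ \text{small for } l > 2 ,
% so the images concentrate near the span of the nine basis images
% Y_{lm}(n_p) with l <= 2.
```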
Light-field cameras have recently become available to the consumer market. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth cues from both defocus and correspondence are available …
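A minimal sketch of fusing two such cues, assuming each supplies a per-pixel cost volume over candidate depths plus a confidence map; the simple confidence-weighted combination is an illustration, not the paper's algorithm.

```python
import numpy as np

def fuse_depth_cues(defocus_cost, corresp_cost, defocus_conf, corresp_conf):
    """Cost volumes have shape (H, W, D); confidences have shape (H, W).

    Returns the per-pixel index of the best candidate depth after blending
    the two cost volumes by relative confidence.
    """
    w = defocus_conf / (defocus_conf + corresp_conf + 1e-8)
    combined = w[..., None] * defocus_cost + (1.0 - w[..., None]) * corresp_cost
    return np.argmin(combined, axis=-1)
```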
We introduce structured importance sampling, a new technique for efficiently rendering scenes illuminated by distant natural illumination given in an environment map. Our method handles occlusion and high-frequency lighting, and it is significantly faster than alternative methods based on Monte Carlo sampling. We achieve this speedup as a result of several …
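As a rough illustration of importance-proportional sample allocation over an environment map, here is a toy sketch in which each texel's importance is simply its radiance times its solid angle and the sample budget is distributed at random in proportion to it; the paper's stratification and importance metric are more sophisticated than this.

```python
import numpy as np

def allocate_samples(env_luminance, solid_angle, budget=300, rng=None):
    """Distribute `budget` directional light samples over environment-map texels.

    env_luminance and solid_angle are per-texel arrays of the same shape;
    the returned array gives the number of samples assigned to each texel.
    """
    rng = np.random.default_rng() if rng is None else rng
    importance = env_luminance * solid_angle
    p = importance / importance.sum()
    return rng.multinomial(budget, p.ravel()).reshape(p.shape)
```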
This paper focuses on efficient rendering based on pre-computed light transport, with realistic materials and shadows, under all-frequency direct lighting such as environment maps. The basic difficulty is representation and computation in the 6D space of light direction, view direction, and surface position. While image-based and synthetic methods for …
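For a sense of the integral being tabulated, here is a toy quadrature of the lighting, visibility, and BRDF product at a single surface point, using a shared per-direction discretization; the names and the discretization are assumptions of this sketch, not the paper's representation.

```python
import numpy as np

def shade_point(light, visibility, brdf_slice, solid_angle):
    """Approximate the reflection integral at one surface point and view.

    All inputs are per-direction samples over the same directional grid:
    incident radiance, binary visibility, the BRDF (times cosine) slice for
    the current view direction, and the solid angle of each direction.
    """
    return np.sum(light * visibility * brdf_slice * solid_angle)
```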