Light field rendering

@inproceedings{Levoy1996LightFR,
  title={Light field rendering},
  author={Marc Levoy and Pat Hanrahan},
  booktitle={Proceedings of the 23rd annual conference on Computer graphics and interactive techniques},
  year={1996}
}
A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Techniques have also been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. In this paper, we describe a simple and robust method for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images…
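The "combining and resampling" step the abstract describes reduces, under the paper's two-plane parameterization, to a 4D table lookup: each ray is indexed by where it crosses two parallel planes. A minimal NumPy sketch, assuming an illustrative synthetic 8×8×8×8 slab, planes at z=0 and z=1, and nearest-neighbour lookup (the paper resamples with quadralinear interpolation; resolution and plane placement here are assumptions, not the paper's values):

```python
import numpy as np

# A light field stored as a 4D "light slab": radiance indexed by a ray's
# intersections (u, v) with the camera plane z=0 and (s, t) with the
# focal plane z=1. The tiny synthetic array stands in for captured images.
RES = 8
L = np.fromfunction(
    lambda u, v, s, t: (u + v + s + t) / (4.0 * (RES - 1)),
    (RES, RES, RES, RES),
)

def sample_ray(origin, direction):
    """Radiance along a ray: intersect both planes, then look up L.

    Nearest-neighbour lookup for brevity; the paper interpolates
    quadralinearly over (u, v, s, t).
    """
    ox, oy, oz = origin
    dx, dy, dz = direction      # assumes dz != 0 (ray crosses both planes)
    t0 = (0.0 - oz) / dz        # ray parameter at the uv plane (z = 0)
    t1 = (1.0 - oz) / dz        # ray parameter at the st plane (z = 1)
    u, v = ox + t0 * dx, oy + t0 * dy
    s, t = ox + t1 * dx, oy + t1 * dy
    idx = tuple(int(np.clip(c, 0.0, 1.0) * (RES - 1) + 0.5)
                for c in (u, v, s, t))
    return L[idx]

# One pixel of a novel view: a camera behind the slab shoots a ray through
# the scene -- no depth maps or feature matching required.
radiance = sample_ray(origin=(0.5, 0.5, -1.0), direction=(0.0, 0.0, 1.0))
```

Rendering a whole novel view is just this lookup repeated per pixel, which is what makes the method fast and independent of scene geometry.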
Citations

4D light-field models for view interpolation
This paper explores two techniques for efficiently acquiring, storing and reconstructing light fields in uniform and non-uniform fashion, and concludes that the first approach has sampling biases and a disparity problem but gives better reconstruction and a smoother video sequence than the second.
Practical Light Field Rendering
The most important contribution of this thesis is the use of light field rendering in combination with another rendering method, namely polygon rendering, demonstrating for the first time that light fields can be used as a general rendering system in a practical manner.
Undersampled Light Field Rendering by a Plane Sweep
A light field rendering approach that estimates geometry from the set of source images using multi-baseline stereo reconstruction, supplementing the existing light field rays to meet the minimum sampling requirement.
Light Field Editing and Rendering
A new graph-based pixel-wise segmentation method that, from a sparse set of user input, simultaneously segments all the views of a light field in interactive time, together with an automatic light field over-segmentation approach that exploits GPU computing power.
Efficient image updates using light fields
It is shown that appropriate portions of the light field can be cached at select "nodal points" that depend on the camera walk; once sparingly and quickly cached, scenes can be rendered efficiently from any point on the walk.
Layered light-field rendering with focus measurement
A new image-based rendering method that uses input from an array of cameras and synthesizes high-quality free-viewpoint images in real time; the focus measurement scheme is discussed in both the spatial and frequency domains.
Camera field rendering for static and dynamic scenes
Two new interpolation techniques are presented; both are designed as backward rendering techniques, and their combination produces reasonably robust rendering results for sparsely sampled real scenes.
Calibration of Real Scenes for the Reconstruction of Dynamic Light Fields
A method for calibrating a scene that includes moving or deforming objects from multiple image sequences taken with a hand-held camera, under some assumptions about the scene and input data.
SLFT: A physically accurate framework for Tracing Synthetic Light Fields
This paper demonstrates the equivalence of the standard light field camera representation and the light slab representation for synthetic light fields, and shows that the framework can trace light fields at resolutions much higher than those available in commercial plenoptic cameras.
A Real-time Implementation of Rendering Light Field Imagery for Generating Point Clouds in Vision Navigation
This dissertation develops a real-time implementation of rendering perspective and refocused imagery from a light field camera for the generation of sparse point clouds using a traditional stereo…

References

Showing 1-10 of 48 references
Physically-valid view synthesis by image interpolation
  • S. Seitz, C. Dyer
  • Computer Science
  • Proceedings IEEE Workshop on Representation of Visual Scenes (In Conjunction with ICCV'95)
  • 1995
It is shown that two basis views are sufficient to predict the appearance of the scene within a specific range of new viewpoints, and that generating this range of views is a theoretically well-posed problem, requiring neither knowledge of camera positions nor 3D scene reconstruction.
View interpolation for image synthesis
Image-space simplifications have been used to accelerate the calculation of computer graphic images since the dawn of visual simulation. Texture mapping has been used to provide a means by which…
Virtualized reality: concepts and early results
The hardware and software setup in the "studio" used to make virtualized reality movies is described, and examples are provided to demonstrate the effectiveness of the system.
The lumigraph
This paper discusses a new method for capturing the complete appearance of both synthetic and real world objects and scenes, representing this information, and then using this representation to render…
Texture and reflection in computer generated images
Extensions of this algorithm in the areas of texture simulation and lighting models are described, including the parametrization of a patch, which defines a coordinate system used as a key for mapping patterns onto the surface.
Pyramidal parametrics
This paper advances a “pyramidal parametric” prefiltering and sampling geometry which minimizes aliasing effects and assures continuity within and between target images.
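The pyramidal prefiltering this reference describes (the mipmap) can be sketched briefly: each pyramid level halves the resolution of the one below it, and a lookup reads from the level whose texel size matches the sampling footprint. A minimal NumPy illustration, assuming a square power-of-two grayscale image, a simple 2×2 box filter, and footprint-based level selection (a simplified stand-in for the paper's derivative-based scheme):

```python
import numpy as np

def build_pyramid(img):
    """Box-filter pyramid: each level averages 2x2 blocks of the previous."""
    levels = [np.asarray(img, dtype=float)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append((a[0::2, 0::2] + a[1::2, 0::2] +
                       a[0::2, 1::2] + a[1::2, 1::2]) / 4.0)
    return levels

def sample(levels, x, y, footprint):
    """Look up (x, y) at the level whose texel size best matches the
    sampling footprint, given in level-0 texels.  Nearest lookup; real
    mipmapping blends between adjacent levels (trilinear filtering)."""
    d = min(int(round(np.log2(max(footprint, 1.0)))), len(levels) - 1)
    lvl = levels[d]
    n = lvl.shape[0]
    return lvl[min(int(y / 2 ** d), n - 1), min(int(x / 2 ** d), n - 1)]
```

A wide footprint (a distant, minified texture) reads a coarse, prefiltered level instead of averaging many base texels per sample, which is how aliasing is avoided at constant cost.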
Environment Mapping and Other Applications of World Projections
  • Ned Greene
  • Computer Science
  • IEEE Computer Graphics and Applications
  • 1986
A uniform framework for representing and using world projections is proposed, and it is argued that the best general-purpose representation is the projection onto a cube.
Head-tracked stereoscopic display using image warping
A stereoscopic display system is described which requires the broadcast of only a stereo pair and sparse correspondence information, yet allows for the generation of the arbitrary views required for head-tracked stereo, together with a unique visibility solution that allows the synthesized images to maintain their proper depth relationships without appealing to an underlying geometric description.
Plenoptic modeling: an image-based rendering system
An image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function is presented, and a novel visible surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections are introduced.
Viewpoint-dependent stereoscopic display using interpolation of multiviewpoint images
A novel approach to autostereoscopic display that shows viewpoint-dependent images according to the viewer's movement; the key point is that interpolation and reconstruction of multi-viewpoint images can provide the viewer with an unlimited number of images as he or she moves smoothly.