NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
TLDR
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
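The core rendering step this summary refers to is NeRF's volume-rendering quadrature, which turns densities and colors sampled along a camera ray into a pixel color. A minimal NumPy sketch of that quadrature follows; the sample values are synthetic placeholders, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical samples along one camera ray (illustrative values only).
sigmas = np.array([0.0, 0.5, 2.0, 4.0, 0.1])   # volume density at each sample
colors = rng.random((5, 3))                     # RGB emitted at each sample
deltas = np.full(5, 0.1)                        # spacing between samples

# NeRF quadrature: alpha_i = 1 - exp(-sigma_i * delta_i),
# transmittance T_i = prod_{j<i} (1 - alpha_j),
# pixel color = sum_i T_i * alpha_i * c_i.
alphas = 1.0 - np.exp(-sigmas * deltas)
trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
weights = trans * alphas
pixel = (weights[:, None] * colors).sum(axis=0)

# The weights sum to 1 - prod(1 - alpha): the remainder is light
# that passes through the ray without being absorbed.
print(pixel)
```

In the paper this quadrature is differentiable, so the same weights backpropagate gradients from rendered pixels into the underlying MLP.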
Learning-based view synthesis for light field cameras
TLDR
This paper proposes a novel learning-based approach to synthesize new views from a sparse set of input views that could potentially decrease the required angular resolution of consumer light field cameras, which allows their spatial resolution to increase.
On the relationship between radiance and irradiance: determining the illumination from images of a convex Lambertian object
TLDR
This work derives a simple closed-form formula for the irradiance in terms of spherical harmonic coefficients of the incident illumination and demonstrates that the odd-order modes of the lighting with order greater than 1 are completely annihilated, contradicting a theorem that is due to Preisendorfer.
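The annihilation of high odd-order modes can be checked numerically: the Lambertian transfer coefficients are A_l = 2π ∫₀¹ P_l(x)·x dx (the spherical-harmonic coefficients of the clamped-cosine kernel), and the odd ones beyond l = 1 vanish. A small sketch using midpoint integration, with the grid size as an arbitrary choice:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def lambertian_coeff(l, n=200001):
    """A_l = 2*pi * integral_0^1 P_l(x) * x dx, via the midpoint rule."""
    x = (np.arange(n) + 0.5) / n          # midpoints of a uniform grid on [0, 1]
    c = np.zeros(l + 1)
    c[l] = 1.0                            # coefficient vector selecting P_l
    return 2.0 * np.pi * np.mean(legval(x, c) * x)

for l in range(6):
    print(l, lambertian_coeff(l))
# Low even orders survive (A_0 = pi, A_1 = 2*pi/3, A_2 = pi/4),
# while odd orders l >= 3 integrate to zero, which is why irradiance
# is captured almost entirely by the first 9 SH coefficients.
```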
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
TLDR
An approach for selecting problem-specific Fourier features that greatly improves the performance of MLPs for low-dimensional regression tasks relevant to the computer vision and graphics communities is suggested.
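The mapping this summary describes is γ(v) = [cos(2πBv), sin(2πBv)] with the rows of B drawn from a Gaussian; the input coordinates pass through it before reaching the MLP. A minimal sketch, where the feature count and bandwidth σ are hypothetical tuning choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(v, n_features=256, sigma=10.0):
    """Map low-dimensional coords v to random Fourier features:
    gamma(v) = [cos(2*pi*B v), sin(2*pi*B v)], B ~ N(0, sigma^2)."""
    B = rng.normal(0.0, sigma, size=(n_features, v.shape[-1]))
    proj = 2.0 * np.pi * v @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

coords = rng.random((4, 2))        # e.g. 2-D pixel coordinates in [0, 1)
feats = fourier_features(coords)
print(feats.shape)                 # (4, 512): cos and sin halves concatenated
```

In practice B is sampled once and held fixed during training; σ controls the bandwidth of functions the downstream MLP can fit.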
A signal-processing framework for inverse rendering
TLDR
This work introduces a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting.
An efficient representation for irradiance environment maps
TLDR
A simple and efficient procedural rendering algorithm amenable to hardware implementation, a prefiltering method up to three orders of magnitude faster than previous techniques, and new representations for lighting design and image-based rendering are considered.
Depth from Combining Defocus and Correspondence Using Light-Field Cameras
TLDR
A novel, simple, and principled algorithm is presented that computes dense depth estimates by combining both defocus and correspondence depth cues, and shows how to fuse the two cues into a high-quality depth map suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction.
Deep high dynamic range imaging of dynamic scenes
TLDR
A convolutional neural network is used as the learning model, and three different system architectures for modeling the HDR merge process are compared; the system's performance is demonstrated by producing high-quality HDR images from a set of three LDR images.
Occlusion-Aware Depth Estimation Using Light-Field Cameras
TLDR
A depth estimation algorithm that treats occlusions explicitly is presented; the method also enables identification of occlusion edges, which may be useful in other applications, and outperforms current state-of-the-art light-field depth estimation algorithms, especially near occlusion boundaries.
Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines
TLDR
An algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields.
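The final blending step rests on standard back-to-front "over" compositing of the MPI's depth planes. A minimal sketch of that compositing; the layer contents here are synthetic placeholders, not data from the method:

```python
import numpy as np

rng = np.random.default_rng(1)

D, H, W = 8, 4, 4                    # depth planes and image size (illustrative)
rgb = rng.random((D, H, W, 3))       # per-plane color
alpha = rng.random((D, H, W, 1))     # per-plane opacity

# "Over" compositing: iterate from the farthest plane (index 0) to the
# nearest, letting each plane occlude what has accumulated behind it.
out = np.zeros((H, W, 3))
for d in range(D):
    out = rgb[d] * alpha[d] + out * (1.0 - alpha[d])
print(out.shape)                     # (4, 4, 3)
```

Rendering a novel view additionally reprojects each plane by its depth-dependent homography before compositing, then blends the results from neighboring MPIs.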