Learning Generalizable Light Field Networks from Few Images

@article{Li2022LearningGL,
  title={Learning Generalizable Light Field Networks from Few Images},
  author={Qian Li and F. Multon and Adnane Boukhayma},
  journal={ArXiv},
  year={2022},
  volume={abs/2207.11757}
}
We explore a new strategy for few-shot novel view synthesis based on a neural light field representation. Given a target camera pose, an implicit neural network maps each ray directly to its target pixel's color. The network is conditioned on local ray features generated by coarse volumetric rendering from an explicit 3D feature volume. This volume is built from the input images using a 3D ConvNet. Our method achieves competitive performance on synthetic and real MVS data with respect to…
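
As a rough illustration of the described pipeline, the sketch below (not the authors' code; the module names, feature sizes, and the simple mean aggregation standing in for coarse volumetric rendering are all assumptions) shows a 3D ConvNet refining a feature volume built from the input images, features being gathered along each target ray, and a light field MLP mapping the ray plus its aggregated feature to a color.

import torch
import torch.nn as nn

class FewShotLightField(nn.Module):
    def __init__(self, feat_dim=32, hidden=256):
        super().__init__()
        # hypothetical 3D ConvNet refining a feature volume built from the input images
        self.volume_net = nn.Sequential(
            nn.Conv3d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat_dim, feat_dim, 3, padding=1),
        )
        # light field MLP: 6-D ray parameterization + aggregated local feature -> RGB
        self.ray_mlp = nn.Sequential(
            nn.Linear(6 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, raw_volume, rays, samples):
        # raw_volume: (1, feat_dim, D, H, W) feature volume from the input images
        # rays:       (N, 6) target-view ray parameterization
        # samples:    (N, S, 3) coarse sample points per ray, in volume coords in [-1, 1]
        volume = self.volume_net(raw_volume)
        n, s, _ = samples.shape
        grid = samples.view(1, n, s, 1, 3)
        feats = nn.functional.grid_sample(volume, grid, align_corners=True)  # (1, C, N, S, 1)
        feats = feats.squeeze(0).squeeze(-1).permute(1, 2, 0)                # (N, S, C)
        ray_feat = feats.mean(dim=1)  # stand-in for coarse volumetric aggregation along the ray
        return self.ray_mlp(torch.cat([rays, ray_feat], dim=-1))
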
1 Citation

Neural Mesh-Based Graphics

Trained solely on a single scene, this work outperforms NPBG and performs competitively with the state-of-the-art method SVS, which is trained on the full dataset and then finetuned per scene, despite SVS's deeper neural renderer.

References

Showing 1-10 of 53 references

Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering

A novel neural scene representation, Light Field Networks (LFNs), represents both the geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field parameterized via a neural network, resulting in dramatic reductions in time and memory complexity and enabling real-time rendering.
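
A minimal sketch of the single-evaluation idea, assuming a Pluecker ray parameterization (the network and helper below are illustrative, not the authors' implementation): each 6-D ray is mapped to a color in one forward pass, with no per-ray integration.

import torch
import torch.nn as nn
import torch.nn.functional as F

def pluecker(origins, dirs):
    # represent each ray by its (direction, moment) pair; dirs are assumed normalized
    moment = torch.cross(origins, dirs, dim=-1)
    return torch.cat([dirs, moment], dim=-1)  # (N, 6)

light_field = nn.Sequential(
    nn.Linear(6, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3), nn.Sigmoid(),
)

rays = pluecker(torch.zeros(4, 3), F.normalize(torch.randn(4, 3), dim=-1))
colors = light_field(rays)  # one network evaluation per ray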

pixelNeRF: Neural Radiance Fields from One or Few Images

We propose pixelNeRF, a learning framework that predicts a continuous neural scene representation conditioned on one or few input images. The existing approach for constructing neural radiance fields involves optimizing the representation to every scene independently, requiring many calibrated views and significant compute time.
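
The conditioning idea can be sketched as follows (hedged: the helper name, projection convention, and clamping are assumptions for illustration): a 3D query point is projected into an input view, a CNN feature is bilinearly sampled at that pixel, and the sampled feature conditions the radiance field MLP.

import torch
import torch.nn.functional as F

def sample_image_features(points_world, K, w2c, feat_map):
    # points_world: (N, 3) query points; K: (3, 3) intrinsics; w2c: (4, 4) world-to-camera
    # feat_map: (1, C, H, W) CNN feature map of the input image
    ones = torch.ones_like(points_world[:, :1])
    cam = (w2c @ torch.cat([points_world, ones], dim=-1).T).T[:, :3]  # camera frame
    pix = (K @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)                    # perspective divide
    h, w = feat_map.shape[-2:]
    # normalize pixel coordinates to [-1, 1] for grid_sample (x = width, y = height)
    grid = torch.stack([2 * pix[:, 0] / (w - 1) - 1,
                        2 * pix[:, 1] / (h - 1) - 1], dim=-1).view(1, -1, 1, 2)
    feats = F.grid_sample(feat_map, grid, align_corners=True)         # (1, C, N, 1)
    return feats.squeeze(0).squeeze(-1).T                             # (N, C) per-point features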

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
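
For reference, the volume rendering integral that a radiance field is optimized through can be written, in the standard form used by NeRF, as

C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right),

where \sigma is the predicted density, \mathbf{c} the view-dependent color, and T(t) the accumulated transmittance along ray \mathbf{r}.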

Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations

Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance, are demonstrated through evaluations on novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.

Large Scale Multi-view Stereopsis Evaluation

A new multi-view stereo dataset is proposed that is an order of magnitude larger in number of scenes and significantly more diverse than previous datasets; it contains 80 scenes of large variability and is used to evaluate the state-of-the-art multi-view stereo algorithms of Tola et al., Campbell et al., and Furukawa et al.

RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs

This work observes that the majority of artifacts in sparse-input scenarios are caused by errors in the estimated scene geometry and by divergent behavior at the start of training, and addresses this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints and by annealing the ray sampling space during training.
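
One ingredient of such geometry regularization can be sketched as a depth smoothness penalty on small patches rendered from unobserved viewpoints (the exact loss weighting and patch handling below are assumptions, not the paper's code):

import torch

def depth_smoothness_loss(depth_patch):
    # depth_patch: (B, P, P) expected depths of B rendered P x P patches
    dx = depth_patch[:, :, :-1] - depth_patch[:, :, 1:]
    dy = depth_patch[:, :-1, :] - depth_patch[:, 1:, :]
    return (dx ** 2).mean() + (dy ** 2).mean()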

MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo

This work proposes a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference, leveraging plane-swept cost volumes for geometry-aware scene reasoning and combining them with physically based volume rendering for neural radiance field reconstruction.
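
The plane-sweep step can be sketched as a variance-based cost over view features warped onto the depth planes of a reference camera (the warping is assumed done elsewhere; this is a simplified illustration, not the paper's implementation):

import torch

def variance_cost_volume(warped_feats):
    # warped_feats: (V, C, D, H, W) features of V views warped onto D depth planes
    mean = warped_feats.mean(dim=0, keepdim=True)
    return ((warped_feats - mean) ** 2).mean(dim=0)  # (C, D, H, W) per-voxel variance cost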

GRF: Learning a General Radiance Field for 3D Representation and Rendering

A simple yet powerful neural network that implicitly represents and renders 3D objects and scenes only from 2D observations, which can generate high-quality and realistic novel views for novel objects, unseen categories and challenging real-world scenes.

Equivariant Neural Rendering

This work proposes a framework for learning neural scene representations directly from images, without 3D supervision, and introduces a loss which enforces equivariance of the scene representation with respect to 3D transformations.

Few 'Zero Level Set'-Shot Learning of Shape Signed Distance Functions in Feature Space

This work combines two types of implicit neural network conditioning mechanisms simultaneously for the first time, namely feature encoding and meta-learning, and shows that in the context of implicit reconstruction from a sparse point cloud, the proposed strategy, i.e. meta-learning in feature space, outperforms the existing alternatives of standard supervised learning in feature space and meta-learning in Euclidean space, while still providing fast inference.
...