Stereo Radiance Fields (SRF): Learning View Synthesis for Sparse Views of Novel Scenes

@inproceedings{Chibane2021StereoRF,
  title={Stereo Radiance Fields (SRF): Learning View Synthesis for Sparse Views of Novel Scenes},
  author={Julian Chibane and Aayush Bansal and Verica Lazova and Gerard Pons-Moll},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={7907--7916}
}
Recent neural view synthesis methods have achieved impressive quality and realism, surpassing classical pipelines that rely on multi-view reconstruction. State-of-the-art methods, such as NeRF [34], are designed to learn a single scene with a neural network and require dense multi-view inputs. Testing on a new scene requires re-training from scratch, which takes 2-3 days. In this work, we introduce Stereo Radiance Fields (SRF), a neural view synthesis approach that is trained end-to-end…
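
For readers scanning past the truncated abstract, the sketch below illustrates the standard volume rendering quadrature that NeRF-style radiance-field methods use to turn per-sample densities and colors along a camera ray into a pixel color. It is a minimal illustration, not code from the paper; the function name, array shapes and use of NumPy are assumptions made here for clarity.

import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one camera ray.

    sigmas: (N,) non-negative volume densities
    colors: (N, 3) RGB values in [0, 1]
    deltas: (N,) distances between consecutive samples
    Returns the rendered (3,) pixel color.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-segment opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]   # T_i = prod_{j<i} (1 - alpha_j)
    weights = trans * alphas                                         # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)

The weights sum to at most one, so a ray that hits no density renders to black; practical systems typically composite the leftover transmittance against a fixed or learned background color.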

D-NeRF: Neural Radiance Fields for Dynamic Scenes
TLDR
D-NeRF is introduced, a method that extends neural radiance fields to the dynamic domain, allowing novel images of objects under rigid and non-rigid motions to be reconstructed and rendered from a single camera moving around the scene.
GRF: Learning a General Radiance Field for 3D Representation and Rendering
TLDR
A simple yet powerful neural network that implicitly represents and renders 3D objects and scenes from only 2D observations, and can generate high-quality, realistic novel views for novel objects, unseen categories and challenging real-world scenes.
FLAME-in-NeRF: Neural control of Radiance Fields for Free View Face Animation
TLDR
This work designs a system that enables both novel view synthesis for portrait video, including the human subject and the scene background, and explicit control of facial expressions through a low-dimensional expression representation; a spatial prior from 3DMM fitting guides the network to learn disentangled control of scene appearance and facial actions.
NeuralBlox: Real-Time Neural Representation Fusion for Robust Volumetric Mapping
TLDR
This work proposes a fusion strategy and training pipeline to incrementally build and update neural implicit representations, enabling the reconstruction of large scenes from sequential partial observations while being significantly more robust and yielding better scene completeness given noisy inputs.
Neural Rays for Occlusion-aware Image-based Rendering
TLDR
This work proposes a novel neural ray representation for the novel view synthesis task and shows how this representation can be refined by training it on the scene to achieve better renderings with only a few training steps.
DIVeR: Real-time and Accurate Neural Radiance Fields with Deterministic Integration for Volume Rendering
TLDR
Comparisons to current state-of-the-art methods show that DIVeR produces models that are very small without being baked, render at or above state-of-the-art quality, render very fast without being baked, and can be edited in natural ways.
iButter: Neural Interactive Bullet Time Generator for Human Free-viewpoint Rendering
  • Liao Wang, Ziyu Wang, +5 authors Jingyi Yu
  • Computer Science
  • ACM Multimedia
  • 2021
TLDR
The iButter approach consists of a real-time preview and design stage as well as a trajectory-aware refinement stage, which jointly encodes spatial and temporal consistency and semantic cues along the designed trajectory, achieving a photo-realistic bullet-time viewing experience of human activities.
RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs
  • Michael Niemeyer, Jonathan T. Barron, Ben Mildenhall, Mehdi S. M. Sajjadi, Andreas Geiger, Noha Radwan
  • Computer Science
  • 2021
Neural Radiance Fields (NeRF) have emerged as a powerful representation for the task of novel view synthesis due to their simplicity and state-of-the-art performance. Though NeRF can produce…
Advances in neural rendering
A state-of-the-art report surveying recent advances in neural rendering, covering methods that combine classical computer graphics pipelines with learned 3D scene representations.
GeoNeRF: Generalizing NeRF with Geometry Priors
We present GeoNeRF, a generalizable photorealistic novel view synthesis method based on neural radiance fields. Our approach consists of two main stages: a geometry reasoner and a renderer. To render…

References

SHOWING 1-10 OF 67 REFERENCES
D-NeRF: Neural Radiance Fields for Dynamic Scenes
TLDR
D-NeRF is introduced, a method that extends neural radiance fields to the dynamic domain, allowing novel images of objects under rigid and non-rigid motions to be reconstructed and rendered from a single camera moving around the scene.
Free View Synthesis
TLDR
This work presents a method for novel view synthesis from input images that are freely distributed around a scene; it can synthesize images for free camera movement through the scene and works for general scenes with unconstrained geometric layouts.
Stereo Magnification: Learning View Synthesis using Multiplane Images
TLDR
This paper explores an intriguing scenario for view synthesis: extrapolating views from imagery captured by narrow-baseline stereo cameras, including VR cameras and now-widespread dual-lens camera phones, and proposes a learning framework that leverages a new layered representation called multiplane images (MPIs).
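For concreteness, a multiplane image is a stack of fronto-parallel RGBA layers at fixed depths; a novel view is obtained by warping each layer into the target camera and compositing the stack back to front with the standard over operator. The sketch below shows only the compositing step, under the assumption that the warped planes are already given; the function name and the (D, H, W, 4) layout are illustrative, not from the paper.

import numpy as np

def composite_mpi(rgba_planes):
    """rgba_planes: (D, H, W, 4) stack of RGBA layers ordered from far to near."""
    height, width = rgba_planes.shape[1:3]
    out = np.zeros((height, width, 3))
    for plane in rgba_planes:                      # iterate far -> near
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)    # standard "over" compositing
    return out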
pixelNeRF: Neural Radiance Fields from One or Few Images
We propose pixelNeRF, a learning framework that predicts a continuous neural scene representation conditioned on one or few input images. The existing approach for constructing neural radiance fields…
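The conditioning idea behind pixelNeRF-style models can be sketched as follows: each query 3D point is projected into an input view with its camera, a pixel-aligned CNN feature is sampled at the projected location, and that feature conditions the radiance-field MLP together with the point and viewing direction. The snippet below is an assumption-laden simplification, not the authors' implementation; the camera convention, nearest-neighbour lookup and all names are made up here.

import numpy as np

def sample_image_feature(point_world, K, R, t, feature_map):
    """Project a 3D point into an input view and fetch a conditioning feature.

    point_world: (3,) point in world coordinates
    K: (3, 3) intrinsics; R: (3, 3), t: (3,) world-to-camera pose
    feature_map: (H, W, C) CNN feature map of the input image
    """
    p_cam = R @ point_world + t                    # world -> camera coordinates
    u_h = K @ p_cam                                # homogeneous pixel coordinates
    u, v = u_h[0] / u_h[2], u_h[1] / u_h[2]        # perspective division
    h, w = feature_map.shape[:2]
    ui = int(np.clip(np.round(u), 0, w - 1))       # nearest neighbour (bilinear in practice)
    vi = int(np.clip(np.round(v), 0, h - 1))
    return feature_map[vi, ui]                     # (C,) pixel-aligned feature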
Learning-based view synthesis for light field cameras
TLDR
This paper proposes a novel learning-based approach to synthesize new views from a sparse set of input views, which could potentially decrease the required angular resolution of consumer light field cameras and thereby allow their spatial resolution to increase.
DeepStereo: Learning to Predict New Views from the World's Imagery
TLDR
This work presents a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets, and is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.
DeepMVS: Learning Multi-view Stereopsis
TLDR
The results show that DeepMVS compares favorably against state-of-the-art conventional MVS algorithms and other ConvNet based methods, particularly for near-textureless regions and thin structures.
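A common ingredient behind both DeepStereo-style view synthesis and learning-based multi-view stereo is the plane-sweep volume: a source image is warped onto a family of fronto-parallel depth planes defined in the reference camera, and the stack of warps is fed to the network. The sketch below shows only the per-depth homography; it is a simplified illustration in which the fronto-parallel plane convention, the reference-to-source pose [R | t], and all names are assumptions, and the actual image resampling is omitted.

import numpy as np

def plane_sweep_homography(K_ref, K_src, R, t, depth):
    """3x3 homography mapping reference pixels to the source image for the
    plane n^T X = depth in the reference camera, with n = [0, 0, 1]."""
    n = np.array([0.0, 0.0, 1.0])
    return K_src @ (R + np.outer(t, n) / depth) @ np.linalg.inv(K_ref)

def sweep_homographies(K_ref, K_src, R, t, depths):
    """One homography per depth hypothesis; warping the source image with each
    and stacking the results yields the plane-sweep volume."""
    return [plane_sweep_homography(K_ref, K_src, R, t, d) for d in depths]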
Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines
TLDR
An algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields.
DeepView: View Synthesis With Learned Gradient Descent
TLDR
This work presents a novel approach to view synthesis using multiplane images (MPIs) that incorporates occlusion reasoning, improving performance on challenging scene features such as object boundaries, lighting reflections, thin structures, and scenes with high depth complexity.
Deep blending for free-viewpoint image-based rendering
TLDR
This work presents a new deep learning approach to blending for IBR, in which held-out real image data is used to learn blending weights that combine input photo contributions; the network architecture and training loss are designed to provide high-quality novel view synthesis while reducing temporal flickering artifacts.