Corpus ID: 222091017

X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation

@article{Bemana2020XFieldsIN,
  title={X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation},
  author={Mojtaba Bemana and Karol Myszkowski and Hans-Peter Seidel and Tobias Ritschel},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.00450}
}
We suggest representing an X-Field (a set of 2D images taken across different view, time, or illumination conditions, i.e., video, light field, reflectance fields, or combinations thereof) by learning a neural network (NN) to map their view, time, or light coordinates to 2D images. Executing this NN at new coordinates results in joint view, time, and light interpolation. The key idea to make this workable is an NN that already knows the "basic tricks" of graphics (lighting, 3D projection, occlusion) in a hard-coded and differentiable form.
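The sketch below only illustrates the abstract's central idea: a small decoder network that maps an X-Field coordinate (view, time, light) to a full 2D image, is overfit to the captured images of one scene, and is then queried at unseen coordinates to interpolate. It is not the authors' architecture, which additionally hard-codes warping and Jacobian-based graphics operations; all layer sizes, names, and the toy training data are assumptions.

import torch
import torch.nn as nn

class CoordinateToImage(nn.Module):
    """Toy decoder: X-Field coordinate (view, time, light) -> RGB image."""
    def __init__(self, coord_dim=3):
        super().__init__()
        # Lift the low-dimensional coordinate to a coarse 4x4 feature grid,
        # then upsample to 64x64 with transposed convolutions.
        self.fc = nn.Linear(coord_dim, 256 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, coords):                     # coords: (B, coord_dim)
        x = self.fc(coords).view(-1, 256, 4, 4)
        return self.decoder(x)                     # images: (B, 3, 64, 64)

# Overfit the network to the few captured images of one scene (toy data here) ...
model = CoordinateToImage()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
coords = torch.rand(8, 3)               # observed (view, time, light) coordinates
images = torch.rand(8, 3, 64, 64)       # corresponding captured images
for _ in range(200):
    optimizer.zero_grad()
    loss = torch.nn.functional.l1_loss(model(coords), images)
    loss.backward()
    optimizer.step()

# ... then evaluate it at a new coordinate for joint view/time/light interpolation.
novel_image = model(torch.tensor([[0.5, 0.5, 0.5]]))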
Citations

Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering
TLDR: This work proposes a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field parameterized via a neural implicit representation.
D-NeRF: Neural Radiance Fields for Dynamic Scenes
TLDR: D-NeRF is introduced, a method that extends neural radiance fields to the dynamic domain, allowing it to reconstruct and render novel images of objects under rigid and non-rigid motions from a single camera moving around the scene.
Neural Radiance Flow for 4D View Synthesis and Video Processing
TLDR: This work uses a neural implicit representation that learns to capture the 3D occupancy, radiance, and dynamics of the scene, and demonstrates that the learned representation can serve as an implicit scene prior, enabling video processing tasks such as image super-resolution and de-noising without any additional supervision.
Free-viewpoint Indoor Neural Relighting from Multi-view Stereo
TLDR: A convolutional network is designed around input feature maps that facilitate learning of an implicit representation of scene materials and illumination, enabling both relighting and free-viewpoint navigation; the paper shows results of the algorithm relighting real indoor scenes and performing free-viewpoint navigation with complex and realistic glossy reflections.
SIGNET: Efficient Neural Representation for Light Fields
We present a novel neural representation for light field content that enables compact storage and easy local reconstruction with high fidelity. We use a fully-connected neural network to learn the …
STaR: Self-supervised Tracking and Reconstruction of Rigid Objects in Motion with Neural Rendering
TLDR: STaR is a novel method that performs Self-supervised Tracking and Reconstruction of dynamic scenes with rigid motion from multi-view RGB videos without any manual annotation, and can render photorealistic novel views, where novelty is measured on both spatial and temporal axes.
FLAME-in-NeRF: Neural Control of Radiance Fields for Free View Face Animation
TLDR: This work designs a system that enables both novel view synthesis for portrait video, including the human subject and the scene background, and explicit control of the facial expressions through a low-dimensional expression representation, and imposes a spatial prior brought by 3DMM fitting to guide the network to learn disentangled control for scene appearance and facial actions.
Fast Training of Neural Lumigraph Representations using Meta Learning
TLDR: This work develops a new neural rendering approach with the goal of quickly learning a high-quality representation which can also be rendered in real time, and achieves similar or better novel view synthesis results in a fraction of the time that competing methods require.
Deep 3D Mask Volume for View Synthesis of Dynamic Scenes
TLDR: A new algorithm, Deep 3D Mask Volume, is developed, which enables temporally stable view extrapolation from binocular videos of dynamic scenes captured by static cameras, and demonstrates better temporal stability than frame-by-frame static view synthesis methods or those that use 2D masks.
Editable free-viewpoint video using a layered neural representation
TLDR: This paper proposes the first approach for editable free-viewpoint video generation for large-scale view-dependent dynamic scenes using only 16 cameras, based on a new layered neural representation called ST-NeRF, which achieves the disentanglement of the location, deformation, and appearance of each dynamic entity in a continuous and self-supervised manner.

References

Showing 1-10 of 84 references
Monocular Neural Image Based Rendering With Continuous View Control
Jie Song, Xu Chen, Otmar Hilliges. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
TLDR: The experiments show that both proposed components, the transforming encoder-decoder and depth-guided appearance mapping, lead to significantly improved generalization beyond the training views and, in consequence, to more accurate view synthesis under continuous 6-DoF camera control.
Deep view synthesis from sparse photometric images
TLDR: This paper synthesizes novel viewpoints across a wide range of viewing directions (covering a 60° cone) from a sparse set of just six viewing directions, based on a deep convolutional network trained to directly synthesize new views from the six input views.
Light field rendering
TLDR: This paper describes a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views, and describes a compression system that is able to compress the generated light fields by more than a factor of 100:1 with very little loss of fidelity.
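For context on this classic reference, the sketch below illustrates the sampled light-field idea it summarizes: a two-plane parameterization L(u, v, s, t) queried by quadrilinear interpolation to display new rays. This is a generic illustration, not the paper's actual implementation; all array shapes and names are assumptions.

import numpy as np

def sample_light_field(L, u, v, s, t):
    """Quadrilinearly interpolate a sampled 4D light field L[u, v, s, t, rgb]
    at fractional grid coordinates (u, v, s, t)."""
    coords = np.array([u, v, s, t])
    lo = np.floor(coords).astype(int)
    frac = coords - lo
    result = np.zeros(3)
    # Blend the 16 surrounding grid samples (two choices per dimension).
    for corner in range(16):
        offset = [(corner >> d) & 1 for d in range(4)]
        weight = 1.0
        idx = []
        for d in range(4):
            weight *= frac[d] if offset[d] else 1.0 - frac[d]
            idx.append(int(np.clip(lo[d] + offset[d], 0, L.shape[d] - 1)))
        result += weight * L[idx[0], idx[1], idx[2], idx[3]]
    return result

# Toy usage: an 8x8 camera plane, a 16x16 image plane, RGB radiance samples.
L = np.random.rand(8, 8, 16, 16, 3)
ray_color = sample_light_field(L, 3.2, 4.7, 10.1, 5.5)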
Learning-based view synthesis for light field cameras
TLDR: This paper proposes a novel learning-based approach to synthesize new views from a sparse set of input views that could potentially decrease the required angular resolution of consumer light field cameras, which allows their spatial resolution to increase.
Towards space-time light field rendering
TLDR: This paper proposes a novel framework, space-time light field rendering, which allows continuous exploration of a dynamic scene in both the spatial and temporal domains with unsynchronized input video sequences, and develops a two-stage rendering algorithm.
Image based relighting using neural networks
TLDR: A regression-based method for relighting real-world scenes from a small number of images that approximates matrix segments using neural networks that model light transport as a non-linear function of light source position and pixel coordinates.
Deep Appearance Maps
TLDR: This work shows how a DAM can be learned from images or video frames and later be used to synthesize appearance, given new surface orientations and viewer positions, without using a lengthy optimization such as stochastic gradient descent (learning-to-learn).
Dataset and Pipeline for Multi-view Light-Field Video
TLDR: A dataset and a complete pipeline for Light-Field video algorithms, specially tailored to process sparse and wide-baseline multi-view videos captured with a camera rig, as well as a depth-based rendering algorithm for Dynamic Perspective Rendering, are proposed.
Efficient Multi-image Correspondences for On-line Light Field Video Processing
TLDR: This work proposes a solution for one of the key bottlenecks in such a processing pipeline: reliable depth reconstruction, possibly for many views, enabled by a novel correspondence algorithm that converts the video streams from a sparse array of off-the-shelf cameras into an array of animated depth maps.
DeepStereo: Learning to Predict New Views from the World's Imagery
TLDR: This work presents a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets, and is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.