Neural Rays for Occlusion-aware Image-based Rendering

@article{Liu2022NeuralRF,
  title={Neural Rays for Occlusion-aware Image-based Rendering},
  author={Yuan Liu and Sida Peng and Lingjie Liu and Qianqian Wang and Peng Wang and Christian Theobalt and Xiaowei Zhou and Wenping Wang},
  journal={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022},
  pages={7814-7823}
}
  • Yuan Liu, Sida Peng, Lingjie Liu, Qianqian Wang, Peng Wang, Christian Theobalt, Xiaowei Zhou, Wenping Wang
  • Published 28 July 2021
  • Computer Science
  • 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
We present a new neural representation, called Neural Ray (NeuRay), for the novel view synthesis task. Recent works construct radiance fields from image features of input views to render novel view images, which enables generalization to new scenes. However, due to occlusions, a 3D point may be invisible to some input views. At such a 3D point, these generalization methods include inconsistent image features from the invisible views, which interfere with the radiance field construction. To… 
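
To make the occlusion issue concrete, the following minimal Python sketch (not NeuRay's actual network; the function name, shapes, and the visibility estimate are assumptions) shows how per-view image features at a 3D point can be aggregated with visibility weights so that features from occluded views are suppressed instead of corrupting the aggregate:

# Minimal sketch (not the paper's actual architecture): visibility-weighted aggregation
# of per-view image features at a 3D sample point. Names and shapes are assumptions.
import numpy as np

def aggregate_features(view_feats: np.ndarray, visibility: np.ndarray, eps: float = 1e-8):
    """view_feats: (n_views, c) features obtained by projecting a 3D point into each view.
    visibility: (n_views,) estimated probability in [0, 1] that the point is visible
    to each view (e.g. predicted by an occlusion-aware module such as NeuRay's).
    Returns a weighted mean and variance that suppress features from occluded views."""
    w = visibility / (visibility.sum() + eps)                   # normalize weights over views
    mean = (w[:, None] * view_feats).sum(axis=0)                # visibility-weighted mean feature
    var = (w[:, None] * (view_feats - mean) ** 2).sum(axis=0)   # weighted variance as a consistency cue
    return mean, var

# Example: the third view is occluded, so its inconsistent feature barely affects the result.
feats = np.stack([np.ones(4), np.ones(4), 10.0 * np.ones(4)])
mean, var = aggregate_features(feats, np.array([1.0, 1.0, 0.05]))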

DynIBaR: Neural Dynamic Image-Based Rendering

This work presents a new approach that addresses the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene by adopting a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views in a scene-motion-aware manner.

Generalizable Patch-Based Neural Rendering

This work proposes a different paradigm, in which no deep visual features and no NeRF-like volume rendering are needed, and outperforms the state of the art on novel view synthesis of unseen scenes even when trained with considerably less data than prior work.

Differentiable Point-Based Radiance Fields for Efficient View Synthesis

This work proposes a differentiable rendering algorithm for efficient novel view synthesis that trains two orders of magnitude faster than STNeRF and renders at a near interactive rate, while maintaining high image quality and temporal coherence even without imposing any temporal-coherency regularizers.

Efficient Neural Radiance Fields with Learned Depth-Guided Sampling

A hybrid scene representation that combines the best of implicit radiance fields and explicit depth maps for efficient rendering is proposed, and the method's capability to synthesize free-viewpoint videos of dynamic human performers in real time is demonstrated.
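
As an illustration of the depth-guided sampling idea (a hypothetical sketch, not the paper's exact scheme; all names are assumptions), samples along a ray can be concentrated in a narrow interval around an estimated surface depth instead of covering the whole near-far range:

# Illustrative sketch of depth-guided ray sampling: concentrate samples around a
# depth estimate rather than sampling the full near-far range uniformly.
import numpy as np

def depth_guided_samples(depth: float, radius: float, n_samples: int) -> np.ndarray:
    """Return sample depths along a ray clustered around an estimated surface depth."""
    t = np.linspace(depth - radius, depth + radius, n_samples)
    return np.clip(t, a_min=1e-3, a_max=None)   # keep samples in front of the camera

# Uniform sampling over [near, far] would need far more points to hit the surface.
samples = depth_guided_samples(depth=2.4, radius=0.1, n_samples=16)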

RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs

This work observes that the majority of artifacts in sparse input scenarios are caused by errors in the estimated scene geometry, and by divergent behavior at the start of training, and addresses this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints, and annealing the ray sampling space during training.
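
One way to regularize the geometry of patches rendered from unobserved viewpoints, in the spirit of RegNeRF's depth smoothness term (a hypothetical sketch; the function name and patch shape are assumptions, and the actual loss and patch rendering details differ), is to penalize differences between neighboring expected depths in a rendered patch:

# Hypothetical sketch of a patch-based depth smoothness regularizer.
import numpy as np

def depth_smoothness_loss(depth_patch: np.ndarray) -> float:
    """depth_patch: (h, w) expected ray-termination depths of a rendered patch."""
    dx = np.diff(depth_patch, axis=1)   # horizontal depth differences
    dy = np.diff(depth_patch, axis=0)   # vertical depth differences
    return float((dx ** 2).mean() + (dy ** 2).mean())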

Self-improving Multiplane-to-layer Images for Novel View Synthesis

A new method for lightweight novel-view synthesis is presented that generalizes to an arbitrary forward-facing scene and surpasses recent models in terms of both common metrics and human evaluation, with a noticeable advantage in inference speed and compactness of the inferred layered geometry.

SPARF: Neural Radiance Fields from Sparse and Noisy Poses

This work introduces Sparse Pose Adjusting Radiance Field (SPARF), to address the challenge of novel-view synthesis given only few wide-baseline input images with noisy camera poses, and sets a new state of the art in the sparse-view regime on multiple challenging datasets.

V4D: Voxel for 4D Novel View Synthesis

3D voxels are utilized to model the 4D neural radiance field, abbreviated as V4D, where the 3D voxel has two formats, and the proposed LUT-based refinement module achieves a performance gain at little computational cost and can serve as a plug-and-play module in the novel view synthesis task.

360FusionNeRF: Panoramic Neural Radiance Fields with Joint Guidance

A semantic consistency loss is introduced that encourages realistic renderings of novel views, and experiments indicate that the proposed method can produce plausible completions of unobserved regions while preserving the features of the scene.

MVSPlenOctree: Fast and Generic Reconstruction of Radiance Fields in PlenOctree from Multi-view Stereo

A generic pipeline that can efficiently reconstruct 360-degree-renderable radiance fields via multi-view stereo (MVS) inference from tens of sparsely spread-out images is presented, together with a robust and efficient sampling strategy for PlenOctree reconstruction that handles occlusion robustly.

References

SHOWING 1-10 OF 71 REFERENCES

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
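
For reference, the volume-rendering quadrature NeRF uses to composite colors along a ray can be sketched in a few lines of NumPy (a minimal sketch; the actual method predicts densities and colors with an MLP and uses hierarchical sampling):

# Minimal NumPy sketch of NeRF-style volume rendering along one ray:
# alpha compositing of per-sample colors weighted by accumulated transmittance.
import numpy as np

def render_ray(sigmas: np.ndarray, colors: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """sigmas: (n,) volume densities, colors: (n, 3) RGB, deltas: (n,) segment lengths."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]   # transmittance T_i up to each sample
    weights = trans * alphas                                         # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                   # composited ray color

rgb = render_ray(np.array([0.1, 2.0, 5.0]),
                 np.array([[0.2, 0.2, 0.2], [0.9, 0.1, 0.1], [0.1, 0.1, 0.9]]),
                 np.full(3, 0.05))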

IGNOR: Image-guided Neural Object Rendering

A learned image-guided rendering technique is presented that combines the benefits of image-based rendering and GAN-based image synthesis to generate photo-realistic re-renderings of reconstructed objects for virtual and augmented reality applications.

NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis

We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints.

Stereo Radiance Fields (SRF): Learning View Synthesis for Sparse Views of Novel Scenes

Stereo Radiance Fields (SRF) is introduced, a neural view synthesis approach that is trained end-to-end, generalizes to new scenes, and requires only sparse views at test time. Experiments show that SRF learns structure instead of overfitting to a scene, achieving significantly sharper, more detailed results than scene-specific models.

Deferred Neural Rendering: Image Synthesis using Neural Textures

This work proposes Neural Textures, which are learned feature maps that are trained as part of the scene capture process that can be utilized to coherently re-render or manipulate existing video content in both static and dynamic environments at real-time rates.

Neural Lumigraph Rendering

This work adopts high-capacity neural scene representations with periodic activations for jointly optimizing an implicit surface and a radiance field of a scene supervised exclusively with posed 2D images, enabling real-time rendering rates, while achieving unprecedented image quality compared to other surface methods.

MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo

This work proposes a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference; it leverages plane-swept cost volumes for geometry-aware scene reasoning and combines them with physically based volume rendering for neural radiance field reconstruction.
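
The plane-sweep idea can be illustrated with a minimal sketch (shapes and names are assumptions; MVSNeRF itself uses learned CNN features and homography warping): once source-view features have been warped onto each depth plane of the reference view, the per-plane variance across views serves as a matching cost:

# Illustrative sketch of forming a plane-swept cost volume from source-view features
# that have already been warped onto each depth plane of the reference view.
import numpy as np

def cost_volume(warped_feats: np.ndarray) -> np.ndarray:
    """warped_feats: (n_views, n_depths, c, h, w) source features warped to each depth plane.
    Returns (n_depths, c, h, w): per-plane feature variance across views; low variance
    suggests the plane lies close to the true surface at that pixel."""
    return warped_feats.var(axis=0)

vol = cost_volume(np.random.rand(3, 8, 16, 32, 32))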

Point-Based Neural Rendering with Per-View Optimization

A general approach is introduced that is initialized with MVS, but allows further optimization of scene properties in the space of input views, including depth and reprojected features, resulting in improved novel-view synthesis.

Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines

An algorithm is presented for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields.
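
A multiplane image is rendered by back-to-front alpha compositing of its RGBA planes; the sketch below shows only this compositing step (warping the planes to the novel view and blending adjacent local light fields, as LLFF does, is omitted, and all names are assumptions):

# Minimal sketch of rendering a multiplane image (MPI) by back-to-front
# "over" compositing of its RGBA planes.
import numpy as np

def composite_mpi(planes_rgba: np.ndarray) -> np.ndarray:
    """planes_rgba: (n_planes, h, w, 4) ordered back to front. Returns an (h, w, 3) image."""
    out = np.zeros(planes_rgba.shape[1:3] + (3,))
    for plane in planes_rgba:                        # back-to-front alpha compositing
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)
    return out

image = composite_mpi(np.random.rand(32, 8, 8, 4))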

Free View Synthesis

This work presents a method for novel view synthesis from input images that are freely distributed around a scene that can synthesize images for free camera movement through the scene, and works for general scenes with unconstrained geometric layouts.
...