InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering
@inproceedings{Kim2021InfoNeRFRE,
  title     = {InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering},
  author    = {Mijeong Kim and Seonguk Seo and Bohyung Han},
  booktitle = {2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022},
  pages     = {12902-12911}
}
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation. The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints by imposing the entropy constraint of the density in each ray. In addition, to alleviate the potential degenerate issue when all training images are acquired from almost redundant viewpoints, we further incorporate the spatial smoothness…
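The core idea, penalizing the Shannon entropy of the opacity distribution along each ray so that density concentrates on a surface, can be sketched as follows. This is a minimal PyTorch sketch, not the authors' implementation; the function name, tensor shapes, and the alpha-compositing formula are assumptions based on standard NeRF volume rendering:

```python
import torch

def ray_entropy_loss(sigmas: torch.Tensor, deltas: torch.Tensor, eps: float = 1e-10) -> torch.Tensor:
    """Entropy of the normalized per-sample opacity along each ray.

    sigmas: (R, S) predicted densities for S samples on each of R rays.
    deltas: (R, S) distances between adjacent samples along each ray.
    """
    alphas = 1.0 - torch.exp(-sigmas * deltas)             # per-sample opacity (standard NeRF)
    p = alphas / (alphas.sum(dim=-1, keepdim=True) + eps)  # normalize into a distribution over samples
    entropy = -(p * torch.log(p + eps)).sum(dim=-1)        # Shannon entropy per ray
    return entropy.mean()
```

A uniform density along a ray yields maximal entropy (log of the sample count), while density concentrated at a single depth yields near-zero entropy, so adding this term to the photometric loss discourages the diffuse, inconsistent geometry that arises with few training views.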
23 Citations
GeCoNeRF: Few-shot Neural Radiance Fields via Geometric Consistency
- Computer Science, ArXiv
- 2023
This work proposes an effective method to filter out erroneous warped solutions, along with training strategies to stabilize training during optimization, and shows that the model achieves competitive results compared to state-of-the-art few-shot NeRF models.
PANeRF: Pseudo-view Augmentation for Improved Neural Radiance Fields Based on Few-shot Inputs
- Computer Science, ArXiv
- 2022
This work initialized the NeRF network by leveraging the expanded pseudo-views, and tuned the network by utilizing sparse-view inputs containing precise geometry and color information, finding that the model faithfully synthesizes novel-view images of superior quality and outperforms existing methods for multi-view datasets.
StructNeRF: Neural Radiance Fields for Indoor Scenes with Structural Hints
- Computer Science, ArXiv
- 2022
Inspired by self-supervised depth estimation methods, StructNeRF is proposed, a solution to novel view synthesis for indoor scenes with sparse inputs that improves both the geometry and the view synthesis performance of NeRF without any additional training on external data.
Behind the Scenes: Density Fields for Single View Reconstruction
- Computer Science, ArXiv
- 2023
This work introduces a neural network that predicts an implicit density field from a single image that maps every location in the frustum of the image to volumetric density and can be trained through self-supervision from only video data.
SPARF: Neural Radiance Fields from Sparse and Noisy Poses
- Computer Science, ArXiv
- 2022
This work introduces Sparse Pose Adjusting Radiance Field (SPARF), to address the challenge of novel-view synthesis given only few wide-baseline input images with noisy camera poses, and sets a new state of the art in the sparse-view regime on multiple challenging datasets.
MixNeRF: Modeling a Ray with Mixture Density for Novel View Synthesis from Sparse Inputs
- Computer Science
- 2023
This work proposes MixNeRF, an effective training strategy for novel view synthesis from sparse inputs by modeling a ray with a mixture density model that outperforms other state-of-the-art methods in various standard benchmarks with superior efficiency of training and inference.
Semantic-aware Occlusion Filtering Neural Radiance Fields in the Wild
- Computer Science
- 2023
SF-NeRF is introduced, aiming to disentangle those two components with only a few images given, which exploits semantic information without any supervision, and outperforms state-of-the-art novel view synthesis methods on the Phototourism dataset in a few-shot setting.
SVS: Adversarial refinement for sparse novel view synthesis
- Computer Science, BMVC
- 2022
This work unifies radiance field models with adversarial learning and perceptual losses, and provides up to 60% improvement in perceptual accuracy compared to current state-of-the-art radiance field models on this problem.
Removing Objects From Neural Radiance Fields
- Computer Science, ArXiv
- 2022
This work proposes a framework to remove objects from a NeRF representation created from an RGB-D sequence, and shows that the method for NeRF editing is effective for synthesizing plausible inpaintings in a multi-view coherent manner.
Fast Learning Radiance Fields by Shooting Much Fewer Rays
- Computer Science, ArXiv
- 2022
This work reduces the redundancy by shooting far fewer rays in the multi-view volume rendering procedure, which underlies almost all radiance-field-based methods, and shows that shooting rays at pixels with dramatic color change not only reduces the training burden but also barely affects the accuracy of the learned radiance fields.
References
Showing 1-10 of 41 references
Silhouette‐Aware Warping for Image‐Based Rendering
- Computer Science, EGSR '11
- 2011
This work formulates silhouette‐aware warps that preserve salient depth discontinuities and improves the rendering of difficult foreground objects, even when deviating from view interpolation, which results in good quality IBR for previously challenging environments.
Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis
- Computer Science, 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2021
DietNeRF improves the perceptual quality of few-shot view synthesis when learned from scratch, can render novel views with as few as one observed image when pre-trained on a multi-view dataset, and produces plausible completions of completely unobserved regions.
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
- Computer Science, ECCV
- 2020
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
Depth synthesis and local warps for plausible image-based navigation
- Computer Science, ACM TOG
- 2013
This work introduces a new IBR algorithm that is robust to missing or unreliable geometry, providing plausible novel views even in regions quite far from the input camera positions, and demonstrates novel view synthesis in real time for multiple challenging scenes with significant depth complexity.
Stereo Magnification: Learning View Synthesis using Multiplane Images
- Computer Science, ArXiv
- 2018
This paper explores an intriguing scenario for view synthesis: extrapolating views from imagery captured by narrow-baseline stereo cameras, including VR cameras and now-widespread dual-lens camera phones, and proposes a learning framework that leverages a new layered representation that is called multiplane images (MPIs).
NeRF--: Neural Radiance Fields Without Known Camera Parameters
- Computer Science, ArXiv
- 2021
It is shown that the camera parameters can be jointly optimised as learnable parameters with NeRF training, through a photometric reconstruction, and the joint optimisation pipeline can recover accurate camera parameters and achieve comparable novel view synthesis quality as those trained with COLMAP pre-computed camera parameters.
pixelNeRF: Neural Radiance Fields from One or Few Images
- Computer Science, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
We propose pixelNeRF, a learning framework that predicts a continuous neural scene representation conditioned on one or few input images. The existing approach for constructing neural radiance fields…
NeX: Real-time View Synthesis with Neural Basis Expansion
- Computer Science, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
NeX is presented, a new approach to novel view synthesis based on enhancements of multiplane image (MPI) representation that can reproduce next-level view-dependent effects in real time, and proposes a hybrid implicit-explicit modeling strategy that improves upon fine detail and produces state-of-the-art results.
Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines
- Computer Science
- 2019
An algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields.
DeepView: View Synthesis With Learned Gradient Descent
- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
This work presents a novel approach to view synthesis using multiplane images (MPIs) that incorporates occlusion reasoning, improving performance on challenging scene features such as object boundaries, lighting reflections, thin structures, and scenes with high depth complexity.