Corpus ID: 235624060

HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields

@article{Park2021HyperNeRFAH,
  title={HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields},
  author={Keunhong Park and U. Sinha and Peter Hedman and Jonathan T. Barron and Sofien Bouaziz and Dan B. Goldman and Ricardo Martin-Brualla and Steven M. Seitz},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.13228}
}
Fig. 1. Neural Radiance Fields (NeRF) [Mildenhall et al. 2020], when endowed with the ability to handle deformations [Park et al. 2020], are able to capture non-static human subjects, but often struggle in the presence of significant deformation or topological variation, as evidenced in (b). By modeling a family of shapes in a high-dimensional space shown in (d), our HyperNeRF model is able to handle topological variation and thereby produce more realistic renderings and more accurate geometric…
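
To make the idea in (d) concrete, the sketch below shows the lifting trick in miniature: each observation carries a latent code, a deformation field warps the 3D query point, an ambient network assigns it extra "hyper" coordinates, and a template field is evaluated at the lifted point. This is a toy NumPy illustration under our own naming; the three placeholder functions stand in for the learned MLPs and are not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the learned networks: a deformation field T
# mapping (x, latent) -> x', an ambient network H mapping (x, latent) -> w,
# and a template field F defined over the lifted (3 + W)-dim coordinates.
def deformation_field(x, latent):            # T: 3D warp (placeholder)
    return x + 0.01 * latent[:3]

def ambient_network(x, latent):              # H: per-point ambient coords
    return np.tanh(latent[3:5])              # e.g. a 2D ambient slice

def template_field(x_lifted):                # F: toy density in 5D
    return np.exp(-np.sum(x_lifted ** 2))

def hyper_nerf_density(x, frame_latent):
    """Evaluate density at 3D point x for one observation."""
    x_warped = deformation_field(x, frame_latent)
    w = ambient_network(x, frame_latent)
    return template_field(np.concatenate([x_warped, w]))

frame_latent = rng.normal(size=5)            # one latent code per frame
print(hyper_nerf_density(np.zeros(3), frame_latent))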

Neural Implicit Surfaces in Higher Dimension

This work investigates the use of neural networks admitting high-order derivatives for modeling dynamic variations of smooth implicit surfaces. For this purpose, it extends the representation of…

NeRF, meet differential geometry!

TLDR
This work shows how a direct mathematical formalism of previously proposed NeRF variants, aimed at improving performance in challenging conditions, can be used to natively encourage surface regularity (by means of Gaussian and mean curvatures), making it possible, for example, to learn surfaces from a very limited number of views.
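
As a concrete illustration of the curvature quantities the TLDR mentions, the sketch below estimates the mean curvature of an implicit surface f = 0 by finite differences, using the standard identity H = ½ div(∇f/|∇f|). This is a generic numerical recipe, not the paper's implementation, and the function names are ours.

import numpy as np

def grad(f, x, eps=1e-4):
    """Central-difference gradient of a scalar field f at point x."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def mean_curvature(f, x, eps=1e-4):
    """H = 0.5 * div(grad f / |grad f|), evaluated by finite differences."""
    div = 0.0
    for i in range(3):
        e = np.zeros(3); e[i] = eps
        gp, gm = grad(f, x + e), grad(f, x - e)
        div += (gp[i] / np.linalg.norm(gp) - gm[i] / np.linalg.norm(gm)) / (2 * eps)
    return 0.5 * div

# Unit sphere as an implicit surface f(x) = |x| - 1; its mean curvature is 1.
sphere = lambda x: np.linalg.norm(x) - 1.0
print(mean_curvature(sphere, np.array([1.0, 0.0, 0.0])))  # ~1.0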

BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale Scene Rendering

TLDR
BungeeNeRF is introduced, a progressive neural radiance field that achieves level-of-detail rendering across drastically varied scales and its support for high-quality rendering in different levels of detail is demonstrated.

DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes

TLDR
Experiments demonstrate that DeVRF achieves a two-orders-of-magnitude speedup (100× faster) with on-par high-fidelity results compared to previous state-of-the-art approaches.
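
Voxel-based radiance fields like the one summarized above typically store density/color features on a dense grid and query them by trilinear interpolation; below is a generic NumPy sketch of that lookup (the grid contents and shapes are illustrative, not DeVRF's actual data layout).

import numpy as np

def trilerp(grid, p):
    """Trilinearly interpolate a dense voxel grid at continuous index p.

    grid: (X, Y, Z, C) array of per-voxel features (e.g. density/color).
    p:    length-3 continuous coordinate in voxel units.
    """
    i0 = np.clip(np.floor(p).astype(int), 0, np.array(grid.shape[:3]) - 2)
    t = p - i0                                   # fractional offsets in [0, 1]
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                out = out + w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out

grid = np.random.default_rng(0).random((8, 8, 8, 4))  # toy feature grid
print(trilerp(grid, np.array([2.3, 4.7, 1.5])))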

Neural Deformable Voxel Grid for Fast Optimization of Dynamic View Synthesis

TLDR
Experimental results show that the proposed fast deformable radiance field method achieves comparable performance to D-NeRF using only 20 minutes for training, more than 70× faster than D-NeRF, clearly demonstrating the efficiency of the proposed method.

CoNeRF: Controllable Neural Radiance Fields

TLDR
The key idea is to treat the attributes as latent variables that are regressed by the neural network given the scene encoding, which leads to a few-shot learning framework, where attributes are discovered automatically by the framework, when annotations are not provided.

Animatable Neural Radiance Fields from Monocular RGB Video

TLDR
The approach extends neural radiance fields (NeRF) to dynamic scenes with human movement by introducing an explicit pose-guided deformation, while learning the scene representation network to compensate for inaccurate pose estimation.
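
Pose-guided deformations of this kind are commonly built on (inverse) linear blend skinning: a point observed in the posed frame is mapped back to a canonical space by blending per-bone rigid transforms with skinning weights. The sketch below shows that generic warp, not necessarily this paper's exact formulation.

import numpy as np

def rot_z(theta):
    """4x4 rigid transform: rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def inverse_lbs(x_obs, bone_transforms, skin_weights):
    """Warp an observed point to canonical pose: blend per-bone transforms
    by skinning weights, then apply the inverse of the blended transform."""
    T = np.tensordot(skin_weights, bone_transforms, axes=1)   # (4, 4) blend
    return (np.linalg.inv(T) @ np.append(x_obs, 1.0))[:3]

bones = np.stack([rot_z(np.pi / 2), np.eye(4)])   # two toy bones
weights = np.array([1.0, 0.0])                    # point follows bone 0
print(inverse_lbs(np.array([0.0, 1.0, 0.0]), bones, weights))  # ~[1, 0, 0]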

NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction

TLDR
NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering, is proposed.
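
TSDF-based fusion, one half of the hybrid described above, classically maintains a per-voxel running weighted average of truncated signed distances. The sketch below shows that textbook (KinectFusion-style) update, not NeRFusion's specific fusion network.

import numpy as np

def fuse_tsdf(tsdf, weight, new_sdf, new_weight, trunc=0.05):
    """Running weighted average of truncated signed distances.

    tsdf, weight: current per-voxel TSDF values and accumulated weights.
    new_sdf:      signed distance observed for each voxel in the new frame.
    """
    d = np.clip(new_sdf, -trunc, trunc) / trunc   # truncate and normalize
    fused = (weight * tsdf + new_weight * d) / (weight + new_weight)
    return fused, weight + new_weight

tsdf, w = np.zeros(5), np.zeros(5)
obs = np.array([0.10, 0.02, 0.0, -0.02, -0.10])   # toy per-voxel distances
tsdf, w = fuse_tsdf(tsdf, w, obs, new_weight=1.0)
print(tsdf)  # [1.0, 0.4, 0.0, -0.4, -1.0]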

NeRFocus: Neural Radiance Field for 3D Synthetic Defocus

TLDR
A novel thin-lens-imaging-based NeRF framework that can directly render various 3D defocus effects, dubbed NeRFocus, is proposed, and an efficient probabilistic training (p-training) strategy is designed to vastly simplify the training process.
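
In the thin-lens model behind such defocus rendering, how blurry an out-of-focus point appears is governed by its circle of confusion. The helper below evaluates the standard thin-lens formula and is a generic illustration rather than NeRFocus's renderer.

def circle_of_confusion(focal_len, aperture_diam, focus_dist, obj_dist):
    """Thin-lens circle-of-confusion diameter on the sensor (meters).

    Standard thin-lens relation: c = A * |s - s_f| / s * f / (s_f - f),
    where s_f is the in-focus distance and s the object distance.
    """
    return (aperture_diam * abs(obj_dist - focus_dist) / obj_dist
            * focal_len / (focus_dist - focal_len))

# 50 mm lens at f/2 (25 mm aperture), focused at 2 m; object at 4 m.
print(circle_of_confusion(0.050, 0.025, 2.0, 4.0))  # ~0.32 mm blur circle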

NDF: Neural Deformable Fields for Dynamic Human Modelling

TLDR
The proposed Neural Deformable Fields (NDF), a new representation for dynamic human digitization from a multi-view video, can synthesize the digitized performer under novel views and novel poses with detailed and plausible dynamic appearance.
...

References

Showing 1-10 of 53 references

Nerfies: Deformable Neural Radiance Fields

TLDR
This work presents the first method capable of photorealistically reconstructing deformable scenes using photos/videos captured casually from mobile phones and shows that it faithfully reconstructs non-rigidly deforming scenes and reproduces unseen views with high fidelity.
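
One ingredient Nerfies introduces is coarse-to-fine annealing of the positional encoding: frequency band j is faded in by a window w_j(α) = (1 - cos(π clamp(α - j, 0, 1))) / 2 as the annealing parameter α grows from 0 to the number of bands during training. A minimal NumPy sketch of that windowing (names are ours):

import numpy as np

def windowed_posenc(x, num_freqs, alpha):
    """Positional encoding with Nerfies-style coarse-to-fine annealing.

    Low frequencies enter first; high frequencies fade in as alpha grows,
    which regularizes the deformation field early in training.
    """
    j = np.arange(num_freqs)
    w = 0.5 * (1 - np.cos(np.pi * np.clip(alpha - j, 0.0, 1.0)))
    feats = []
    for freq, wj in zip(2.0 ** j, w):
        feats += [wj * np.sin(freq * np.pi * x), wj * np.cos(freq * np.pi * x)]
    return np.concatenate(feats)

x = np.array([0.1, 0.2, 0.3])
print(windowed_posenc(x, num_freqs=4, alpha=1.5).shape)  # (24,)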

NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections

TLDR
A learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs is presented and applied to internet photo collections of famous landmarks, demonstrating temporally consistent novel view renderings that are significantly closer to photorealism than the prior state of the art.

Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance

TLDR
This work introduces a neural network architecture that simultaneously learns the unknown geometry, camera parameters, and a neural renderer that approximates the light reflected from the surface towards the camera.

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

TLDR
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
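
The core rendering step NeRF optimizes through is a discrete volume-rendering quadrature along each ray: per-sample opacities α_i = 1 - exp(-σ_i δ_i) are composited front to back under the accumulated transmittance. A minimal sketch of that quadrature (sample placement and network evaluation omitted):

import numpy as np

def volume_render(sigmas, colors, deltas):
    """NeRF's discrete volume-rendering quadrature along one ray.

    sigmas: (N,) densities at N samples; colors: (N, 3); deltas: (N,)
    distances between adjacent samples.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # T_i
    weights = trans * alphas
    return weights @ colors                          # expected ray color

sigmas = np.array([0.0, 0.5, 5.0, 0.1])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
deltas = np.full(4, 0.25)
print(volume_render(sigmas, colors, deltas))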

Occupancy Networks: Learning 3D Reconstruction in Function Space

TLDR
This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction methods that encodes a description of the 3D output at infinite resolution without excessive memory footprint, and validate that the representation can efficiently encode 3D structure and can be inferred from various kinds of input.
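
The "infinite resolution" property follows from the representation being a continuous function o(x) in [0, 1] that can be queried at any point. The sketch below queries a toy occupancy function on a grid whose 0.5 level set is the surface; a real model would be an MLP conditioned on a shape encoding, so `occupancy` here is a hypothetical stand-in.

import numpy as np

def occupancy(x):
    """Toy stand-in for a learned occupancy network: a soft unit sphere."""
    return 1.0 / (1.0 + np.exp(8.0 * (np.linalg.norm(x, axis=-1) - 1.0)))

# Query occupancy on a dense grid; the 0.5 level set is the surface,
# which can then be meshed (e.g. with marching cubes) at any resolution.
n = 32
ax = np.linspace(-1.5, 1.5, n)
grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), axis=-1)
occ = occupancy(grid)
print((occ > 0.5).mean())   # fraction of grid cells inside the shape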

Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields

TLDR
By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts and significantly improves NeRF’s ability to represent fine details, while also being 7% faster than NeRF and half the size.
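
mip-NeRF's anti-aliasing comes from encoding a Gaussian approximation of each conical frustum rather than a point: in the integrated positional encoding, the expected sin/cos features attenuate frequency band j by exp(-0.5 * 4^j * σ²), so wide or distant cone sections lose high frequencies. A sketch of the diagonal-Gaussian case (names are ours):

import numpy as np

def integrated_posenc(mu, var, num_freqs):
    """mip-NeRF-style integrated positional encoding (diagonal Gaussian).

    Uses E[sin(2^j x)] = sin(2^j mu) * exp(-0.5 * 4^j var) for x ~ N(mu, var),
    and likewise for cos, which suppresses aliasing from high frequencies.
    """
    feats = []
    for j in range(num_freqs):
        scale, atten = 2.0 ** j, np.exp(-0.5 * (4.0 ** j) * var)
        feats += [atten * np.sin(scale * mu), atten * np.cos(scale * mu)]
    return np.concatenate(feats)

mu, var = np.array([0.3, -0.1, 0.8]), np.array([0.01, 0.01, 0.04])
print(integrated_posenc(mu, var, num_freqs=4).shape)  # (24,)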

DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation

TLDR
This work introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
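
Given a trained SDF such as DeepSDF, surfaces can be rendered by sphere tracing, since the signed distance at a point bounds how far a ray can safely advance. Below is a generic sphere tracer with a toy analytic SDF standing in for the learned decoder.

import numpy as np

def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-4):
    """March a ray against a signed distance function: the SDF value is
    always a safe step size because it lower-bounds the surface distance."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t                     # hit: distance along the ray
        t += d
    return None                          # miss

# Toy SDF standing in for a trained DeepSDF decoder: a unit sphere.
sdf = lambda p: np.linalg.norm(p) - 1.0
hit = sphere_trace(sdf, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(hit)  # ~2.0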

ShapeFlow: Learnable Deformations Among 3D Shapes

TLDR
This work parametrizes the deformation between geometries as a learned continuous flow field via a neural network and shows that such deformations can be guaranteed to have desirable properties, such as bijectivity, freedom from self-intersections, or volume preservation.
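
Representing deformations as the flow of a velocity field is what buys these guarantees: integrating an ODE is invertible (run time backwards) and cannot self-intersect, and a divergence-free field additionally preserves volume. A toy Euler integrator follows; ShapeFlow learns the velocity field with a network, whereas the rotation field here is purely illustrative.

import numpy as np

def advect(points, velocity, t0=0.0, t1=1.0, steps=100):
    """Deform points by integrating a velocity field with explicit Euler."""
    dt = (t1 - t0) / steps
    for k in range(steps):
        points = points + dt * velocity(points, t0 + k * dt)
    return points

# Toy divergence-free field: rigid rotation about the z-axis.
vel = lambda p, t: np.stack([-p[:, 1], p[:, 0], np.zeros(len(p))], axis=1)
pts = np.array([[1.0, 0.0, 0.0]])
print(advect(pts, vel, t1=np.pi / 2))   # ~[0, 1, 0]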

Neural Point-Based Graphics

We present a new point-based approach for modeling the appearance of real scenes. The approach uses a raw point cloud as the geometric representation of a scene, and augments each point with a…

DeepVoxels: Learning Persistent 3D Feature Embeddings

TLDR
This work proposes DeepVoxels, a learned representation that encodes the view-dependent appearance of a 3D scene without having to explicitly model its geometry, based on a Cartesian 3D grid of persistent embedded features that learn to make use of the underlying 3D scene structure.
...