Vision-Only Robot Navigation in a Neural Radiance World

@article{Adamkiewicz2021VisionOnlyRN,
  title={Vision-Only Robot Navigation in a Neural Radiance World},
  author={Michal Adamkiewicz and Timothy Chen and Adam Caccavale and Rachel Gardner and Preston Culbertson and Jeannette Bohg and Mac Schwager},
  journal={IEEE Robotics and Automation Letters},
  year={2021},
  volume={PP},
  pages={1-1}
}
Neural Radiance Fields (NeRFs) have recently emerged as a powerful paradigm for the representation of natural, complex 3D scenes. NeRFs represent continuous volumetric density and RGB values in a neural network, and generate photo-realistic images from unseen camera viewpoints through ray tracing. We propose an algorithm for navigating a robot through a 3D environment represented as a NeRF using only an on-board RGB camera for localization. We assume the NeRF for the scene has been pre-trained… 
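The volumetric rendering step described above can be sketched in a few lines. This is a minimal, illustrative quadrature of density and color along one ray (the classic NeRF compositing rule); the function name and the toy inputs are ours, not from the paper.

```python
import math

def render_ray(densities, colors, deltas):
    """Composite per-sample density/RGB along one ray (NeRF-style quadrature).

    densities: sigma_i >= 0 at each sample along the ray
    colors:    (r, g, b) emitted at each sample
    deltas:    spacing between consecutive samples
    """
    pixel = [0.0, 0.0, 0.0]
    transmittance = 1.0  # probability the ray reaches the current sample
    for sigma, rgb, delta in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)   # opacity of this segment
        weight = transmittance * alpha
        for c in range(3):
            pixel[c] += weight * rgb[c]
        transmittance *= 1.0 - alpha
    return pixel

# A single, very dense red sample renders a (nearly) pure red pixel:
print(render_ray([50.0], [(1.0, 0.0, 0.0)], [1.0]))
```

In the real method the densities and colors come from a neural network queried at 3D points, and the same weights also yield depth estimates, which is what makes the representation usable for navigation.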

Loc-NeRF: Monte Carlo Localization using Neural Radiance Fields

Loc-NeRF is a real-time vision-based robot localization approach that combines Monte Carlo localization and Neural Radiance Fields and is able to perform localization faster than the state of the art and without relying on an initial pose estimate.
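The Monte Carlo localization loop underlying this approach can be sketched as a predict/update/resample cycle. The sketch below is a 1-D illustration with invented names; in Loc-NeRF the `render` callable would be a NeRF render compared against the camera image, not the toy identity function used here.

```python
import math
import random

def mcl_step(particles, control, measurement, render, motion_noise=0.05):
    """One predict/update/resample cycle of Monte Carlo localization.

    particles:   list of scalar poses (1-D for illustration)
    control:     commanded motion since the last step
    measurement: observed sensor value at the true pose
    render:      maps a pose hypothesis to an expected measurement
    """
    # Predict: propagate each particle through a noisy motion model.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # Update: weight each particle by its measurement likelihood (Gaussian error).
    weights = [math.exp(-0.5 * ((render(p) - measurement) / 0.1) ** 2) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(30):
    particles = mcl_step(particles, control=0.0, measurement=3.0, render=lambda p: p)
estimate = sum(particles) / len(particles)  # concentrates near the true pose 3.0
```

Because the filter starts from particles spread over the whole state space, no initial pose estimate is needed, which is the property the paper emphasizes.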

NeRF2Real: Sim2real Transfer of Vision-guided Bipedal Motion Skills using Neural Radiance Fields

It is demonstrated that this system can be used to learn vision-based whole body navigation and ball pushing policies for a 20 degrees of freedom humanoid robot with an actuated head-mounted RGB camera, and to transfer these policies to a real robot.

NTFields: Neural Time Fields for Physics-Informed Robot Motion Planning

This work proposes Neural Time Fields (NTFields) for robot motion planning in cluttered scenarios: a wave-propagation model that generates continuous arrival times to path solutions, informed by a nonlinear partial differential equation, the Eikonal equation.
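For reference, the governing PDE can be written explicitly. In its standard form (symbols assumed from the NTFields setting, not quoted from the paper), the Eikonal equation relates the arrival-time field to a speed model:

```latex
\left\lVert \nabla_{x}\, T(x_s, x) \right\rVert = \frac{1}{S(x)}
```

where \(T(x_s, x)\) is the travel time from a start \(x_s\) to a point \(x\) and \(S(x)\) is the allowed speed at \(x\) (low near obstacles). A network producing \(T\) can then be trained so that its gradients satisfy this relation, which is what makes the planner physics-informed.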

PNeRF: Probabilistic Neural Scene Representations for Uncertain 3D Visual Mapping

This work investigates integrating uncertainty into the learning process by training with uncertain information in a probabilistic manner: the training likelihood is explicitly augmented with an uncertainty term, so that the learnt probability distribution of the network is minimized with respect to the training uncertainty.

LATITUDE: Robotic Global Localization with Truncated Dynamic Low-pass Filter in City-scale NeRF

LATITUDE introduces a two-stage localization mechanism in city-scale NeRF built around a Truncated Dynamic Low-pass Filter; the method is evaluated on both synthetic and real-world data, showing its potential for high-precision navigation in large-scale city scenes.

NeRF-Loc: Transformer-Based Object Localization Within Neural Radiance Fields

This work proposes a transformer-based framework, NeRF-Loc, to extract 3D bounding boxes of objects in NeRF scenes, and designs a pair of parallel transformer encoder branches to encode both the context and the details of target objects.

iSDF: Real-Time Neural Signed Distance Fields for Robot Perception

iSDF produces more accurate reconstructions, and better approximations of collision costs and gradients useful for downstream planners in domains from navigation to manipulation, in evaluations against alternative methods on real and synthetic datasets of indoor environments.
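The "collision costs and gradients" that planners derive from a signed distance field can be illustrated with a toy analytic SDF. Everything below (the sphere SDF, the hinge cost, the margin value) is our illustrative choice, not iSDF's actual formulation.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return math.dist(p, center) - radius

def collision_cost(d, eps=0.2):
    """Hinge penalty commonly layered on an SDF by gradient-based planners:
    zero beyond a safety margin eps, growing linearly as the robot
    approaches (or penetrates) the surface."""
    return max(0.0, eps - d)

p = (1.1, 0.0, 0.0)                  # 0.1 outside the sphere surface
d = sphere_sdf(p)
print(round(collision_cost(d), 3))   # inside the margin: 0.1 penalty
```

The point of a learned SDF like iSDF is that `sphere_sdf` is replaced by a network queried anywhere in the mapped scene, while downstream planners keep consuming distances and gradients exactly as above.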

Neural Fields for Robotic Object Manipulation from a Single Image

This is believed to be the first work to retrieve grasping poses directly from a NeRF-based representation using a single viewpoint (RGB-only), rather than going through a secondary network and/or representation.

Multi-Object Navigation with dynamically learned neural implicit representations

This work proposes structuring the agent around two neural implicit representations, learned dynamically during each episode, that map the content of the scene; the agent is evaluated on Multi-Object Navigation, showing the high impact of using neural implicit representations as a memory source.

Sampling-free obstacle gradients and reactive planning in Neural Radiance Fields (NeRF)

This work investigates the use of neural implicit representations, specifically Neural Radiance Fields (NeRF), for geometrical queries and motion planning. We show that by adding the capacity to…

References

Showing 1-10 of 43 references.

D-NeRF: Neural Radiance Fields for Dynamic Scenes

D-NeRF is introduced, a method that extends neural radiance fields to the dynamic domain, allowing reconstruction and rendering of novel images of objects under rigid and non-rigid motions from a single camera moving around the scene.

NeRF-GTO: Using a Neural Radiance Field to Grasp Transparent Objects (2021)

This work proposes using neural radiance fields (NeRF) to detect, localize, and infer the geometry of transparent objects with sufficient accuracy to find and perform grasps on them.

iNeRF: Inverting Neural Radiance Fields for Pose Estimation

iNeRF can perform category-level object pose estimation, including on object instances not seen during training, with RGB images, by inverting a NeRF model inferred from a single view.
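"Inverting" a NeRF means running analysis-by-synthesis: adjust a candidate pose until the rendered image matches the observed one. The 1-D sketch below captures that loop with an invented toy renderer and finite-difference gradients; iNeRF instead backpropagates through the trained network over full 6-DoF camera poses.

```python
import math

def invert_renderer(render, observed, pose, lr=0.1, steps=200, eps=1e-4):
    """iNeRF-style pose optimization: descend on the photometric error
    between a rendered and an observed measurement. `render` stands in
    for a trained NeRF; gradients here are finite differences."""
    def loss(p):
        return (render(p) - observed) ** 2
    for _ in range(steps):
        grad = (loss(pose + eps) - loss(pose - eps)) / (2 * eps)
        pose -= lr * grad
    return pose

# Toy "renderer": pixel brightness is a smooth function of camera angle.
render = lambda p: math.sin(p)
true_pose = 0.8
pose = invert_renderer(render, observed=render(true_pose), pose=0.2)
# pose converges toward the true camera angle 0.8
```

The same machinery is what lets the navigation paper above localize a robot against a pre-trained NeRF with nothing but an onboard RGB camera.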

iMAP: Implicit Mapping and Positioning in Real-Time

We show for the first time that a multilayer perceptron (MLP) can serve as the only scene representation in a real-time SLAM system for a handheld RGB-D camera. Our network is trained in live…

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.

Voxblox: Incremental 3D Euclidean Signed Distance Fields for on-board MAV planning

This work proposes a method to incrementally build ESDFs from Truncated Signed Distance Fields (TSDFs), a common implicit surface representation used in computer graphics and vision, and shows that it can build TSDFs faster than Octomaps, and that it is more accurate than occupancy maps.

CHOMP: Gradient optimization techniques for efficient motion planning

This paper presents CHOMP, a novel method for continuous path refinement that uses covariant gradient techniques to improve the quality of sampled trajectories and relax the collision-free feasibility prerequisite on input paths required by many prior strategies.
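The core of such trajectory refinement is gradient descent on a smoothness-plus-obstacle objective over waypoints. The sketch below takes a plain (Euclidean) gradient step on the smoothness term with fixed endpoints; CHOMP's distinguishing covariant step, which preconditions this gradient with the inverse finite-difference matrix, is deliberately omitted for brevity, and all names are illustrative.

```python
def chomp_step(traj, obstacle_grad, lr=0.05):
    """One Euclidean gradient step on a CHOMP-style objective:
    smoothness + obstacle cost, with the endpoints held fixed."""
    new = traj[:]
    for i in range(1, len(traj) - 1):
        # Smoothness gradient: discrete second difference (acceleration).
        smooth = 2 * traj[i] - traj[i - 1] - traj[i + 1]
        new[i] = traj[i] - lr * (smooth + obstacle_grad(traj[i]))
    return new

traj = [i / 10 for i in range(11)]   # straight-line trajectory, 0.0 to 1.0
traj[5] = 0.9                        # introduce a kink mid-trajectory
for _ in range(500):
    traj = chomp_step(traj, lambda x: 0.0)  # free space: pure smoothing
# repeated steps pull the kink back toward the straight line
```

An `obstacle_grad` derived from a signed distance field (as in the iSDF and Voxblox entries above) would instead push waypoints away from surfaces while the smoothness term keeps the path well-behaved.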

NeRF-: Neural Radiance Fields Without Known Camera Parameters

It is shown that the camera parameters can be jointly optimised as learnable parameters during NeRF training, through photometric reconstruction, and that the joint optimisation pipeline can recover accurate camera parameters and achieve novel view synthesis quality comparable to models trained with COLMAP pre-computed camera parameters.

Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction

A super-fast convergence approach to reconstructing the per-scene radiance field from a set of images that capture the scene with known poses, which matches, if not surpasses, NeRF's quality, yet it only takes about 15 minutes to train from scratch for a new scene.

STaR: Self-supervised Tracking and Reconstruction of Rigid Objects in Motion with Neural Rendering

STaR is a novel method that performs Self-supervised Tracking and Reconstruction of dynamic scenes with rigid motion from multi-view RGB videos without any manual annotation and can render photorealistic novel views, where novelty is measured on both spatial and temporal axes.