E-NeRF: Neural Radiance Fields from a Moving Event Camera

@article{Klenk2022ENeRFNR,
  title={E-NeRF: Neural Radiance Fields from a Moving Event Camera},
  author={Simon Klenk and Lukas Koestler and Davide Scaramuzza and Daniel Cremers},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.11300}
}
Estimating neural radiance fields (NeRFs) from "ideal" images has been extensively studied in the computer vision community. Most approaches assume optimal illumination and slow camera motion. These assumptions are often violated in robotic applications, where images may contain motion blur, and the scene may not have suitable illumination. This can cause significant problems for downstream tasks such as navigation, inspection, or visualization of the scene. To alleviate these problems, we…

References

Showing 1-10 of 45 references

EventNeRF: Neural Radiance Fields from a Single Colour Event Camera

This paper proposes the first approach for 3D-consistent, dense and photorealistic novel view synthesis using just a single colour event stream as input, and presents a neural radiance field trained entirely in a self-supervised manner from events while preserving the original resolution of the colour event channels.
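
For context, supervision of a radiance field directly from events typically compares the change of rendered log-brightness between two timestamps with the brightness change signalled by the event stream; a schematic form of such a loss (a sketch of the general idea, not necessarily this paper's exact objective) is

    \mathcal{L}_{\mathrm{event}} = \sum_{\mathbf{x},k} \Big\| \big(\log \hat{C}(\mathbf{x}, t_k) - \log \hat{C}(\mathbf{x}, t_{k-1})\big) - C\, E(\mathbf{x}, t_{k-1}, t_k) \Big\|^2 ,

where \hat{C}(\mathbf{x}, t) is the brightness rendered from the field at the camera pose of time t, C is the contrast threshold, and E is the signed event count accumulated at pixel \mathbf{x} over the interval.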

Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High-Speed Scenarios

The first state estimation pipeline that leverages the complementary advantages of events, standard frames, and inertial measurements by fusing them in a tightly coupled manner is presented, leading to an accuracy improvement of 130% over event-only pipelines, and 85% over standard-frames-only visual-inertial systems, while still being computationally tractable.

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
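
The central mechanism is differentiable volume rendering of an MLP-parameterised field: the colour observed along a camera ray \mathbf{r}(t) = \mathbf{o} + t\mathbf{d} is obtained from the predicted density \sigma and colour \mathbf{c} via

    \hat{C}(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\, \sigma(\mathbf{r}(t))\, \mathbf{c}(\mathbf{r}(t), \mathbf{d})\, dt, \qquad T(t) = \exp\Big(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\, ds\Big),

which makes the photometric reconstruction error differentiable with respect to the network parameters.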

Simultaneous Optical Flow and Intensity Estimation from an Event Camera

This work proposes, to the best of the authors' knowledge, the first algorithm to simultaneously recover the motion field and brightness image, while the camera undergoes a generic motion through any scene, within a sliding window time interval.
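
The event generation model underlying this line of work: a pixel \mathbf{u} fires an event of polarity p \in \{+1, -1\} at time t once the log-brightness change since its last event reaches the contrast threshold C,

    \log I(\mathbf{u}, t) - \log I(\mathbf{u}, t - \Delta t) = p\, C,

where \Delta t is the time elapsed since the previous event at that pixel.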

Dense Depth Priors for Neural Radiance Fields from Sparse Input Views

This work uses the sparse depth data that is freely available from the structure-from-motion (SfM) preprocessing step used to estimate camera poses, converting these sparse points into dense depth maps and uncertainty estimates that are used to guide NeRF optimization.
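
Such priors typically enter the optimization as an additional, uncertainty-weighted penalty on the rendered ray depth; a schematic sketch (not necessarily this paper's exact formulation) is

    \mathcal{L}_{\mathrm{depth}} = \sum_{\mathbf{r}} \frac{\big(\hat{z}(\mathbf{r}) - z_{\mathrm{prior}}(\mathbf{r})\big)^2}{2\, \sigma_{\mathrm{prior}}^2(\mathbf{r})},

where \hat{z}(\mathbf{r}) is the expected termination depth along ray \mathbf{r} (computed from the same rendering weights as the colour) and z_{\mathrm{prior}}, \sigma_{\mathrm{prior}} are the densified depth and its uncertainty.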

NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images

The method, which the authors call RawNeRF, can reconstruct scenes from extremely noisy images captured in near-darkness and is highly robust to the zero-mean distribution of raw noise.

Semi-Dense 3D Reconstruction with a Stereo Event Camera

The proposed method consists of the optimization of an energy function designed to exploit small-baseline spatio-temporal consistency of events triggered across both stereo image planes to improve the density of the reconstruction and to reduce the uncertainty of the estimation.

High Speed and High Dynamic Range Video with an Event Camera

This work proposes a novel recurrent network to reconstruct videos from a stream of events, trains it on a large amount of simulated event data, and shows that off-the-shelf computer vision algorithms can be applied to the reconstructions, with this strategy consistently outperforming algorithms that were specifically designed for event data.
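
As context for how such networks consume raw events, a common input representation (not necessarily this paper's exact one) is a spatio-temporal voxel grid; a minimal NumPy sketch, with all function and variable names chosen here for illustration:

    import numpy as np

    def events_to_voxel_grid(xs, ys, ts, ps, num_bins, height, width):
        # Bin an event stream (integer pixel coords xs/ys, timestamps ts,
        # polarities ps) into a (num_bins, height, width) voxel grid.
        grid = np.zeros((num_bins, height, width), dtype=np.float32)
        # Normalize timestamps to [0, num_bins - 1].
        t_norm = (ts - ts[0]) / max(ts[-1] - ts[0], 1e-9) * (num_bins - 1)
        lo = np.floor(t_norm).astype(int)
        frac = t_norm - lo
        pol = np.where(ps > 0, 1.0, -1.0)
        # Bilinear vote of each event's polarity into the two nearest temporal bins.
        np.add.at(grid, (lo, ys, xs), pol * (1.0 - frac))
        ok = lo + 1 < num_bins
        np.add.at(grid, (lo[ok] + 1, ys[ok], xs[ok]), pol[ok] * frac[ok])
        return grid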

Event-Based Stereo Visual Odometry

The system successfully leverages the advantages of event-based cameras to perform visual odometry in challenging illumination conditions, such as low-light and high dynamic range, while running in real-time on a standard CPU.

SPARF: Neural Radiance Fields from Sparse and Noisy Poses

This work introduces Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis given only a few wide-baseline input images with noisy camera poses, and sets a new state of the art in the sparse-view regime on multiple challenging datasets.