E-NeRF: Neural Radiance Fields from a Moving Event Camera

@article{Klenk2022ENeRFNR,
  title={E-NeRF: Neural Radiance Fields from a Moving Event Camera},
  author={Simone Klenk and Lukas Koestler and Davide Scaramuzza and Daniel Cremers},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.11300}
}
Estimating neural radiance fields (NeRFs) from ideal images has been extensively studied in the computer vision community. Most approaches assume optimal illumination and slow camera motion. These assumptions are often violated in robotic applications, where images contain motion blur and the scene may not have suitable illumination. This can cause significant problems for downstream tasks such as navigation, inspection, or visualization of the scene. To alleviate these problems we present E-NeRF…
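
The truncated abstract stops just where the mechanism begins, so here is a minimal sketch of the kind of event-based supervision such a method builds on, assuming the standard linear event generation model (an event fires once the log-brightness change crosses a contrast threshold C) and a hypothetical `render_log_intensity` helper that queries the NeRF at the camera pose of a given time. The actual E-NeRF loss differs in details such as handling an unknown threshold.

```python
C = 0.25  # contrast threshold of the event camera (assumed value)

def event_loss(render_log_intensity, pixel, t0, t1, polarity_sum):
    """Event-supervision residual for one pixel over the window [t0, t1].

    render_log_intensity(pixel, t) -> predicted log-brightness from the
    NeRF at the camera pose of time t (hypothetical helper).
    polarity_sum: signed count of events observed at this pixel in the
    window (+1 per positive, -1 per negative event).
    """
    # Predicted log-brightness change along the camera trajectory.
    predicted_delta = render_log_intensity(pixel, t1) - render_log_intensity(pixel, t0)
    # The event generation model says this change should equal C times
    # the accumulated signed event count.
    measured_delta = C * polarity_sum
    return (predicted_delta - measured_delta) ** 2

# Toy check with a constant-brightness renderer: no events expected.
dummy = lambda pixel, t: 0.0
assert event_loss(dummy, (10, 20), 0.0, 0.01, polarity_sum=0) == 0.0
```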

Citations

ParticleNeRF: A Particle-Based Encoding for Online Neural Radiance Fields in Dynamic Scenes

Neural Radiance Fields (NeRFs) learn implicit representations of (typically static) environments from images. Our paper extends NeRFs to handle dynamic scenes in an online fashion; we propose a particle-based encoding for this purpose.

Mixed Reality Interface for Digital Twin of Plant Factory

An immersive and interactive mixed reality interface is proposed for digital twin models of smart farming, aimed at remote operation rather than at simulating individual components, and is built from a UI display and a streaming background scene.

References

Showing 1-10 of 33 references.

EventNeRF: Neural Radiance Fields from a Single Colour Event Camera

It is demonstrated that NeRFs suitable for novel-view synthesis in RGB space can be learned from asynchronous event streams, and that these models achieve high visual accuracy on rendered novel views of challenging scenes despite being trained on substantially less data.

Ev-NeRF: Event Based Neural Radiance Field

It is shown that the multi-view consistency of NeRF provides a powerful self-supervision signal for eliminating the spurious measurements and extracting the consistent underlying structure despite highly noisy input.

Deblur-NeRF: Neural Radiance Fields from Blurry Images

  • Li Ma, Xiaoyu Li, P. Sander
  • Computer Science
    2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2022
Deblur-NeRF is proposed as the first method that can recover a sharp NeRF from blurry input; it outperforms several baselines and can handle both camera motion blur and defocus blur, the two most common types of blur in real scenes.
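
As a rough illustration of how a sharp NeRF can be supervised by blurry images, the sketch below blends several jittered renders before comparing against the observation. Note that Deblur-NeRF learns a deformable sparse kernel per pixel, whereas this sketch fixes the kernel (`offsets`, `weights`) by hand, and `render_ray` is a hypothetical renderer.

```python
import numpy as np

def blurry_pixel_loss(render_ray, ray_origin, ray_dir, observed_blurry, offsets, weights):
    """Simplified blur-aware photometric loss.

    render_ray(origin, direction) -> RGB colour (hypothetical renderer).
    offsets, weights: a hand-fixed stand-in for the learned blur kernel.
    """
    # Render a handful of slightly displaced rays and blend them, so the
    # *rendered* value is blurry while the underlying NeRF stays sharp.
    rendered = sum(w * render_ray(ray_origin + o, ray_dir)
                   for o, w in zip(offsets, weights))
    return np.sum((rendered - observed_blurry) ** 2)
```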

Simultaneous Optical Flow and Intensity Estimation from an Event Camera

This work proposes, to the best of the authors' knowledge, the first algorithm to simultaneously recover the motion field and brightness image while the camera undergoes generic motion through any scene, within a sliding-window time interval.
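
The coupling that makes joint estimation possible can be written as a single residual: under brightness constancy, the log-brightness change behind an event is approximately the negative dot product of the log-intensity gradient with the optical flow. A minimal sketch of that generic constraint follows; the paper's actual sliding-window variational energy aggregates many such terms, and the threshold `C` is an assumed value.

```python
import numpy as np

def flow_intensity_residual(grad_log_L, flow, dt, polarity, C=0.25):
    """Residual coupling optical flow and log-brightness for one event.

    Under brightness constancy, the log-brightness change that triggered
    an event is approximately -<grad log L, flow> * dt, while the event
    itself reports a change of polarity * C.
    """
    predicted = -np.dot(grad_log_L, flow) * dt
    measured = polarity * C
    return predicted - measured
```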

Semi-Dense 3D Reconstruction with a Stereo Event Camera

The proposed method consists of the optimization of an energy function designed to exploit small-baseline spatio-temporal consistency of events triggered across both stereo image planes to improve the density of the reconstruction and to reduce the uncertainty of the estimation.
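
A hedged sketch of the kind of data term such an energy might use: events caused by the same 3D point are triggered at nearly the same instants in both views, so per-pixel maps of latest event timestamps (time surfaces) should agree at pixels matched by a candidate disparity. The names and the exact residual below are illustrative, not the paper's formulation.

```python
def stereo_time_consistency(time_surface_left, time_surface_right, x, y, disparity):
    """Per-pixel data term of a stereo event energy (illustrative).

    time_surface_*: (H, W) arrays holding the timestamp of the most
    recent event at each pixel; disparity: integer candidate disparity.
    """
    t_left = time_surface_left[y, x]
    t_right = time_surface_right[y, x - disparity]
    return (t_left - t_right) ** 2
```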

High Speed and High Dynamic Range Video with an Event Camera

This work proposes a novel recurrent network to reconstruct videos from a stream of events, and trains it on a large amount of simulated event data, and shows that off-the-shelf computer vision algorithms can be applied to the reconstructions and that this strategy consistently outperforms algorithms that were specifically designed for event data.
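
A common way to feed an asynchronous event stream to such a recurrent network is to bin events into a spatio-temporal voxel grid. The sketch below shows this standard preprocessing step (the network architecture itself is not reproduced here):

```python
import numpy as np

def events_to_voxel_grid(xs, ys, ts, ps, num_bins, height, width):
    """Accumulate a batch of events into a (num_bins, H, W) voxel grid.

    xs, ys: pixel coordinates; ts: timestamps; ps: polarities in {-1, +1}.
    Each event's polarity is split bilinearly between the two nearest
    temporal bins.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    # Normalize timestamps to [0, num_bins - 1].
    t_norm = (ts - ts.min()) / max(ts.max() - ts.min(), 1e-9) * (num_bins - 1)
    left = np.floor(t_norm).astype(int)
    right = np.minimum(left + 1, num_bins - 1)
    w_right = t_norm - left
    for x, y, l, r, wr, p in zip(xs, ys, left, right, w_right, ps):
        grid[l, y, x] += p * (1.0 - wr)
        grid[r, y, x] += p * wr
    return grid
```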

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
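
For reference, the quadrature form of the volume rendering integral at the heart of NeRF, which composites sampled densities and colours along a ray into a pixel:

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Classic NeRF volume rendering along one ray.

    sigmas: (N,) densities at the sampled points,
    colors: (N, 3) RGB at the sampled points,
    deltas: (N,) distances between consecutive samples.
    Returns the composited pixel colour.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)  # per-sample opacity
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1] + 1e-10)))
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)
```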

NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images

The method, which the authors call RawNeRF, can reconstruct scenes from extremely noisy images captured in near-darkness and is highly robust to the zero-mean distribution of raw noise.
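
The key ingredient is training directly on noisy linear raw values with a relative weighting, so errors in dark regions count as much as in bright ones while zero-mean noise averages out. A minimal numpy sketch of such a weighted loss follows; in a real autodiff implementation the weight uses a stop-gradient copy of the rendered value.

```python
import numpy as np

def rawnerf_style_loss(rendered, raw_observed, eps=1e-3):
    """Weighted L2 in linear raw space, as described for RawNeRF.

    Dividing by (a stop-gradient copy of) the rendered value approximates
    an error computed on log radiance, so dark pixels are not drowned out
    by bright ones, while zero-mean sensor noise still averages away.
    """
    weight = 1.0 / (rendered + eps)  # stop-gradient here in an autodiff setup
    return np.mean((weight * (rendered - raw_observed)) ** 2)
```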

Event-Based Stereo Visual Odometry

The system successfully leverages the advantages of event-based cameras to perform visual odometry in challenging illumination conditions, such as low-light and high dynamic range, while running in real-time on a standard CPU.

Time Lens++: Event-based Frame Interpolation with Parametric Nonlinear Flow and Multi-scale Fusion

This work introduces multi-scale feature-level fusion and computes one-shot non-linear inter-frame motion, which can be efficiently sampled for image warping, from events and images.
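
To make the "sampled for image warping" step concrete, here is an illustrative backward warp driven by a per-pixel parametric motion model; the quadratic-in-time parameterization and names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def warp_with_parametric_flow(image, lin, quad, tau):
    """Backward-warp an image to an intermediate time tau in [0, 1] using a
    per-pixel parametric (here quadratic-in-time) motion model, the kind of
    one-shot non-linear motion that can be sampled for frame interpolation.

    lin, quad: (H, W, 2) per-pixel linear and quadratic flow coefficients
    (illustrative parameterization).
    """
    h, w = image.shape[:2]
    flow = lin * tau + quad * tau ** 2  # evaluate motion at time tau
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    src_x = np.clip(xs + flow[..., 0], 0, w - 1).astype(int)
    src_y = np.clip(ys + flow[..., 1], 0, h - 1).astype(int)
    return image[src_y, src_x]  # nearest-neighbour sampling
```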