Event-Based Motion Segmentation by Motion Compensation

@article{Stoffregen2019EventBasedMS,
  title={Event-Based Motion Segmentation by Motion Compensation},
  author={Timo Stoffregen and Guillermo Gallego and Tom Drummond and Lindsay Kleeman and Davide Scaramuzza},
  journal={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={7243-7252}
}

Abstract

In contrast to traditional cameras, whose pixels have a common exposure time, event-based cameras are novel bio-inspired sensors whose pixels work independently and asynchronously, outputting intensity changes (called "events") with microsecond resolution. Since events are caused by the apparent motion of objects, event-based cameras sample visual information based on the scene dynamics and are, therefore, a more natural fit than traditional cameras for acquiring motion, especially at high speeds…
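
As a concrete illustration of the sensor output described in the abstract, the short Python/NumPy sketch below models an event stream as a sequence of (x, y, t, polarity) tuples. The array layout and field names are assumptions made for illustration, not code from the paper.

import numpy as np

# One row per event: pixel coordinates, timestamp (seconds, with microsecond
# resolution), and the sign of the brightness change.
event_dtype = np.dtype([("x", np.uint16), ("y", np.uint16),
                        ("t", np.float64), ("p", np.int8)])

events = np.array([(120, 45, 0.000013, +1),
                   (121, 45, 0.000027, +1),
                   (119, 46, 0.000040, -1)], dtype=event_dtype)

# Events arrive ordered in time, so the scene dynamics set the sampling rate.
assert np.all(np.diff(events["t"]) >= 0)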

Event-based Motion Segmentation with Spatio-Temporal Graph Cuts

TLDR
This work develops a method to identify independently moving objects in data acquired with an event-based camera, jointly solving two sub-problems, namely event-cluster assignment (labeling) and motion model fitting, in an iterative manner by exploiting the structure of the input event data in the form of a spatio-temporal graph.

Globally-Optimal Event Camera Motion Estimation

TLDR
The present paper looks at fronto-parallel motion estimation of an event camera and derives a globally optimal solution to this generally non-convex problem, removing the dependency on a good initial guess.

Event-based Motion Segmentation by Cascaded Two-Level Multi-Model Fitting

  • Xiuyuan Lu, Yi Zhou, S. Shen
  • 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021
TLDR
This paper presents a cascaded two-level multi-model fitting method for identifying independently moving objects with a monocular event camera, leading to efficient and accurate event-wise motion segmentation that neither level could achieve alone.

0-MMS: Zero-Shot Multi-Motion Segmentation With A Monocular Event Camera

TLDR
An approach for monocular multi-motion segmentation is presented that combines bottom-up feature tracking and top-down motion compensation into a unified pipeline, which is, to the authors' knowledge, the first of its kind.

Motion segmentation and tracking for integrating event cameras

TLDR
This paper presents a new scheme for event compression that has many analogues in traditional frame-based video compression techniques, and introduces an application-in-the-loop framework where the application dynamically informs the camera how sensitive each pixel should be, based on the efficacy of the most recent data received.

MOMS with Events: Multi-Object Motion Segmentation With Monocular Event Cameras

TLDR
This work proposes a solution to multi-object motion segmentation that combines classical optimization methods with deep learning and does not require prior knowledge of the 3D motion or of the number and structure of objects.

High Frame Rate Video Reconstruction Based on an Event Camera

TLDR
This work proposes a simple yet effective approach to reconstruct high-quality and high frame rate sharp videos by associating event data to a latent sharp image and provides a new and more efficient solver to minimize the proposed energy model.

Globally-Optimal Contrast Maximisation for Event Cameras

TLDR
The present work looks at several motion estimation problems with event cameras and derives globally optimal solutions to these generally non-convex problems, removing the dependency on a good initial guess that plagues existing methods.

A Spatial-Motion-Segmentation Algorithm by Fusing EDPA and Motion Compensation

TLDR
A spatial-motion-segmentation algorithm is proposed that fuses the events-dimensionality-preprocessing algorithm (EDPA) with the volume of warped events (VWE), and the experimental results validate its feasibility.

Moving Object Detection for Event-based Vision using Graph Spectral Clustering

TLDR
An unsupervised Graph Spectral Clustering technique for Moving Object Detection in Event-based data (GSCEventMOD) is presented, and it is shown how the optimum number of moving objects can be automatically determined.
...

References

Showing 1-10 of 46 references

Independent motion detection with event-driven cameras

TLDR
The method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present, and is robust to changes in speed of both the head and the target.

Event-Based, 6-DOF Camera Tracking from Photometric Depth Maps

TLDR
This paper tackles the problem of accurate, low-latency tracking of an event camera from an existing photometric depth map built via classic dense reconstruction pipelines, and tracks the 6-DOF pose of the event camera upon the arrival of each event, thus virtually eliminating latency.

EMVS: Event-Based Multi-View Stereo—3D Reconstruction with an Event Camera in Real-Time

TLDR
This work introduces the problem of event-based multi-view stereo (EMVS) for event cameras and proposes a solution that elegantly exploits two inherent properties of an event camera: its ability to respond to scene edges—which naturally provide semi-dense geometric information without any pre-processing operation—and the fact that it provides continuous measurements as the sensor moves.

A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation

TLDR
The main idea of the framework is to find the point trajectories on the image plane that are best aligned with the event data by maximizing an objective function: the contrast of an image of warped events.
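
The objective described in this TLDR can be sketched in a few lines of Python/NumPy (an illustrative assumption, not the authors' implementation): events are warped to a reference time under a candidate motion (here a 2D image-plane velocity), accumulated into an image of warped events (IWE), and the candidate yielding the highest contrast (variance) of that image is kept.

import numpy as np

def iwe_contrast(events, velocity, t_ref, height, width):
    # Warp each event to the reference time along the candidate velocity,
    # accumulate the warped events into an image, and return its variance.
    vx, vy = velocity
    dt = events["t"] - t_ref
    xw = np.round(events["x"] - vx * dt).astype(int)
    yw = np.round(events["y"] - vy * dt).astype(int)
    ok = (xw >= 0) & (xw < width) & (yw >= 0) & (yw < height)
    iwe = np.zeros((height, width))
    np.add.at(iwe, (yw[ok], xw[ok]), 1.0)  # event count per pixel
    return np.var(iwe)                     # sharper alignment -> higher contrast

# Coarse search over candidate velocities (pixels/s); real methods refine this
# with gradient-based or globally optimal solvers:
# best_v = max(candidates, key=lambda v: iwe_contrast(events, v, t_ref, H, W))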

Event-Based Moving Object Detection and Tracking

TLDR
This paper presents a novel event stream representation which enables us to utilize information about the dynamic (temporal) component of the event stream, and demonstrates the framework on the task of independent motion detection and tracking, where it is used to locate differently moving objects in challenging situations of very fast motion.

Focus Is All You Need: Loss Functions for Event-Based Vision

TLDR
This work presents a collection and taxonomy of twenty-two objective functions for analyzing event alignment in motion-compensation approaches, and concludes that the variance, the gradient magnitude, and the Laplacian magnitude are among the best loss functions.
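
Three of the objective functions highlighted in that study (variance, gradient magnitude, and Laplacian magnitude) can be written down directly for an image of warped events. The sketch below is an assumed NumPy/SciPy rendering for illustration, not the paper's reference code; all three scores grow as the warped events sharpen along scene edges.

import numpy as np
from scipy import ndimage

def variance_loss(iwe):
    # Spread of event counts around their mean; a sharp IWE has high variance.
    return np.var(iwe)

def gradient_magnitude_loss(iwe):
    # Mean squared magnitude of the spatial gradient of the IWE.
    gy, gx = np.gradient(iwe)
    return np.mean(gx**2 + gy**2)

def laplacian_magnitude_loss(iwe):
    # Mean squared response of the Laplacian of the IWE.
    return np.mean(ndimage.laplace(iwe)**2)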

EKLT: Asynchronous Photometric Feature Tracking Using Events and Frames

TLDR
EKLT is the first principled method that uses intensity measurements directly, based on a generative event model within a maximum-likelihood framework, and it produces feature tracks that are more accurate than the state of the art across a wide variety of scenes.

EVO: A Geometric Approach to Event-Based 6-DOF Parallel Tracking and Mapping in Real Time

TLDR
EVO, an event-based visual odometry algorithm that successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional map of the environment, makes significant progress in simultaneous localization and mapping.

Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera

TLDR
To the best of the authors' knowledge, this is the first algorithm provably able to track a general 6D motion, along with reconstructing arbitrary structure (including its intensity) and grayscale video, relying exclusively on event camera data.

Event Cameras, Contrast Maximization and Reward Functions: An Analysis

TLDR
The choice of reward used in contrast maximization is examined, a classification of different rewards is proposed, and it is shown how a reward can be constructed that is more robust to noise and aperture uncertainty.