Spherical Formulation of Geometric Motion Segmentation Constraints in Fisheye Cameras

@article{Mariotti2021SphericalFO,
  title={Spherical Formulation of Geometric Motion Segmentation Constraints in Fisheye Cameras},
  author={Letizia Mariotti and Ciar{\'a}n Eising},
  journal={IEEE Transactions on Intelligent Transportation Systems},
  year={2021},
  volume={23},
  pages={4201--4211}
}
We introduce a visual motion segmentation method employing spherical geometry for fisheye cameras and automated driving. Three commonly used geometric constraints in pin-hole imagery (the positive height, positive depth and epipolar constraints) are reformulated to spherical coordinates, making them invariant to specific camera configurations as long as the camera calibration is known. A fourth constraint, known as the anti-parallel constraint, is added to resolve motion-parallax ambiguity, to… 
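The core geometric idea, treating each calibrated fisheye pixel as a unit ray on the sphere and testing coplanarity of the two viewing rays with the baseline, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and the simple thresholding are assumptions, and the calibration step (pixel to unit ray) is taken as given.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def epipolar_residual(ray1, ray2, t, R=None):
    """Spherical epipolar residual |x2 . (t_hat x R x1)|.

    ray1, ray2: viewing rays of the same point in two camera frames
                (unit vectors from the fisheye calibration).
    t:          camera translation (baseline) between the frames.
    R:          optional 3x3 rotation (rows) from frame 1 to frame 2.

    For a static point the two rays and the baseline are coplanar,
    so the residual is ~0; a large residual flags candidate motion.
    """
    x1 = normalize(ray1)
    if R is not None:
        x1 = tuple(dot(row, x1) for row in R)   # rotate ray into frame 2
    x2 = normalize(ray2)
    n = cross(normalize(t), x1)                 # epipolar-plane normal
    return abs(dot(x2, n))
```

For example, with a forward translation `t = (0, 0, 1)`, a static point first seen along `(1, 0, 5)` reappears along `(1, 0, 4)` and gives a residual of ~0, while a laterally moving point seen along `(2, 1, 4)` gives a clearly nonzero residual.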

Surround-view Fisheye Camera Perception for Automated Driving: Overview, Survey and Challenges

This work provides a unified and taxonomic treatment of commonly used fisheye camera models and discusses various perception tasks and existing literature.

Near-Field Perception for Low-Speed Vehicle Automation Using Surround-View Fisheye Cameras

This work provides a detailed survey of surround-view camera systems, setting up the survey in the context of an architecture that can be decomposed into four modular components namely Recognition, Reconstruction, Relocalization, and Reorganization, which they jointly call the 4R Architecture.

2.5D Vehicle Odometry Estimation

This paper proposes a metaphorically named 2.5D odometry, whereby the planar odometry derived from the yaw-rate sensor and four wheel-speed sensors is augmented with a linear model of the suspension.

References

Spherical formulation of moving object geometric constraints for monocular fisheye cameras

This paper reformulates the three commonly used constraints in rectilinear images to spherical coordinates, which are invariant to the specific camera configuration once the calibration is known, and introduces an additional fourth constraint, called the anti-parallel constraint, which makes possible the detection of objects whose motion mirrors that of the ego-vehicle.

Monocular Motion Detection Using Spatial Constraints in a Unified Manner

Knowledge about moving objects plays an important role in robot navigation and driver assistance systems; several motion detection techniques based on optical flow have been developed for this purpose.

Detection of Independently Moving Objects in Non-planar Scenes via Multi-Frame Monocular Epipolar Constraint

This paper uses a combination of optical flow and particle advection to capture all motion in a video across a number of frames in the form of particle trajectories, then applies the derived multi-frame epipolar constraint to these trajectories to determine which of them violate it, thus segmenting out the independently moving objects.
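The trajectory-level test described above can be sketched as a simple aggregation of per-frame-pair epipolar residuals. This is an illustrative sketch, not the paper's method: the function name, the input layout (per-frame rays plus consecutive-frame baselines, with rotation assumed already compensated), and the mean-residual threshold are all assumptions.

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _unit(v):
    n = math.sqrt(_dot(v, v))
    return tuple(x / n for x in v)

def is_independently_moving(trajectory, baselines, thresh=0.01):
    """Flag one particle trajectory as independently moving.

    trajectory: list of viewing rays of a tracked point, one per frame.
    baselines:  camera translations between consecutive frames
                (camera rotation assumed already compensated).

    For each consecutive frame pair, a static point's two rays and the
    baseline are coplanar; the trajectory is flagged if the mean
    epipolar residual over all pairs exceeds the threshold.
    """
    residuals = []
    for (r1, r2), t in zip(zip(trajectory, trajectory[1:]), baselines):
        n = _cross(_unit(t), _unit(r1))      # epipolar-plane normal
        residuals.append(abs(_dot(_unit(r2), n)))
    return sum(residuals) / len(residuals) > thresh
```

Under pure forward motion, a static point drifts radially outward along its epipolar plane and is not flagged, while a point with its own lateral motion accumulates nonzero residuals across the frame pairs and is flagged.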

Detectability of Moving Objects Using Correspondences over Two and Three Frames

This paper studies how a moving object's detectability is influenced by its own motion, and applies the derived detection limits to real imagery.

Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation

This paper bridges the gap between geometric loss and photometric loss by introducing a matching loss constrained by epipolar geometry into a self-supervised framework, outperforming state-of-the-art unsupervised ego-motion estimation methods by a large margin.

Moving Object Segmentation Using Optical Flow and Depth Information

Object detection is based on motion analysis of individually tracked image points, which yields a motion metric corresponding to the likelihood that a tracked point is moving; the points are then segmented into objects by a globally optimal graph-cut algorithm.

FisheyeMODNet: Moving Object detection on Surround-view Cameras for Autonomous Driving

This work proposes a CNN architecture for moving object detection using fisheye images captured in an autonomous driving environment, and designs a lightweight encoder that shares weights across sequential images to target embedded deployment.

Detection and segmentation of moving objects in highly dynamic scenes

  • A. Bugeau, P. Pérez
  • 2007 IEEE Conference on Computer Vision and Pattern Recognition
A new method is proposed for direct detection and segmentation of foreground moving objects in the absence of constraints, using p-values to validate optical flow estimates and automatic bandwidth selection in the mean-shift clustering algorithm.

Robust and Efficient Relative Pose With a Multi-Camera System for Autonomous Driving in Highly Dynamic Environments

This paper proposes a new algorithm for relative pose estimation using a multi-camera system with multiple non-overlapping cameras; it works robustly even when the number of outliers is overwhelming, quickly pruning unpromising hypotheses and significantly improving the chance of finding inliers.

Visual motion perception for mobile robots through dense optical flow fields