Estimating 3D Egomotion from Perspective Image Sequences

@article{Burger1990Estimating3E,
  title={Estimating 3D Egomotion from Perspective Image Sequences},
  author={Wilhelm Burger and Bir Bhanu},
  journal={IEEE Trans. Pattern Anal. Mach. Intell.},
  year={1990},
  volume={12},
  pages={1040--1058}
}
  • W. Burger, B. Bhanu
  • Published 1 November 1990
  • Computer Science
  • IEEE Trans. Pattern Anal. Mach. Intell.
The computation of sensor motion from sets of displacement vectors obtained from consecutive pairs of images is discussed. The problem is investigated with emphasis on its application to autonomous robots and land vehicles. The effects of 3D camera rotation and translation upon the observed image are discussed, particularly the concept of the focus of expansion (FOE). It is shown that locating the FOE precisely is difficult when displacement vectors are corrupted by noise and errors. A more… 
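The FOE discussed in the abstract admits a simple linear formulation in the idealized case: under pure translation, every displacement vector lies on a line through the FOE, so the FOE can be found by minimizing the perpendicular distance to all such lines. A minimal sketch of that least-squares formulation (the function name and synthetic data are illustrative; the paper itself argues this precise estimate is unreliable under noise and develops a "fuzzy FOE" instead):

```python
import numpy as np

def estimate_foe(points, displacements):
    """Least-squares focus-of-expansion estimate for a purely
    translating camera: each displacement vector at image point p
    lies on a line through the FOE, so we minimize the squared
    perpendicular distance from the FOE to every such line."""
    p = np.asarray(points, dtype=float)
    d = np.asarray(displacements, dtype=float)
    # Unit normals perpendicular to each displacement direction.
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    # Each line constraint reads n_i . f = n_i . p_i; solve in the
    # least-squares sense for the 2D FOE position f.
    b = np.einsum("ij,ij->i", n, p)
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe
```

With noise-free vectors radiating from a known point, the estimate is exact; the paper's central observation is that even modest noise in the displacement vectors makes this point estimate unstable.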
A new method on camera ego-motion estimation
  • Ding Yuan, Yalong Yu
  • Computer Science
    2013 6th International Congress on Image and Signal Processing (CISP)
  • 2013
TLDR
This paper addresses the problem of ego-motion estimation for a monocular moving camera under arbitrary translation and rotation, and proposes a method based on the spatio-temporal image derivatives of an image sequence that requires no special assumptions about the observed scenes.
Egomotion Estimation Using Quadruples of Collinear Image Points
TLDR
In this work, a novel linear constraint involving quantities that depend on the egomotion parameters is developed; it enables the recovery of the FOE, thereby decoupling the 3D motion parameters.
Robust monocular depth perception using feature pairs and approximate motion
  • Y. Fujii, D. Wehe, T. Weymouth
  • Mathematics, Computer Science
    Proceedings 1992 IEEE International Conference on Robotics and Automation
  • 1992
TLDR
A novel approach to the problem of constructing a depth map from a sequence of monocular images that requires knowledge only of the axial translation component of the moving camera and is robust against rotational and translational motion noise.
Observability of 3D Motion
TLDR
The analysis makes it possible to compare algorithms that estimate the translation first and recover the rotation from the translational result, algorithms that do the opposite, and algorithms that estimate all motion parameters simultaneously, thus providing a sound framework for the observability of 3D motion.
Egomotion parameter computation with a neural network
TLDR
This work investigates the performance of an artificial neural network for the computation of the image position of the FOE of an optical flow field induced by an observer translation relative to a static environment.
Global methods for image motion analysis
Processing motion information is an important problem in building automated vision systems. A moving sensor can obtain knowledge about the environmental layout, its own motion, and motion of objects
Qualitative egomotion
TLDR
This paper shows how a monocular observer can estimate its 3D motion relative to the scene by using normal flow measurements in a global and qualitative way through a search technique.
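Normal flow, on which the qualitative approach above relies, is the one flow component directly measurable from image derivatives: the brightness-constancy equation constrains only the flow projection onto the intensity gradient. A minimal sketch of recovering it from spatio-temporal derivatives (the function and the synthetic derivatives in the example are illustrative, not the paper's algorithm):

```python
import numpy as np

def normal_flow(Ix, Iy, It, eps=1e-8):
    """Normal flow from spatio-temporal image derivatives.

    Brightness constancy, Ix*u + Iy*v + It = 0, pins down only the
    flow component along the intensity gradient; its signed magnitude
    is -It / |grad I|.  Returns the (x, y) components of that vector.
    eps guards against division by zero at flat image regions.
    """
    mag = np.sqrt(Ix**2 + Iy**2) + eps
    vn = -It / mag                    # signed magnitude along the gradient
    return vn * Ix / mag, vn * Iy / mag
```

Because only this projection is observable, full ("optical") flow requires extra assumptions, which is why qualitative global methods that work directly with normal flow avoid an error-prone intermediate step.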
Passive navigation using egomotion estimates
TLDR
A two-stage approach to solving the problem of passive navigation with visual means: matching of features extracted from 2D images of a sequence at different times, followed by egomotion parameter computation based on optimization approaches that minimize appropriate energy functions.
Using Constraint Lines for Estimating Egomotion
TLDR
A novel method for egomotion estimation is proposed that relies on the observation that optical flow vectors at pairs of points lying on lines through the FOE, exhibit particular geometric properties.
Maximum Likelihood Estimation of Monocular Optical Flow Field for Mobile Robot Ego-Motion
TLDR
An optimized scheme of monocular ego-motion estimation to provide location and pose information for mobile robots with one fixed camera is presented, and the estimated ego-motion parameters closely follow the GPS/INS ground truth in complex outdoor road scenarios.

References

SHOWING 1-10 OF 41 REFERENCES
On computing a 'fuzzy' focus of expansion for autonomous navigation
  • W. Burger, B. Bhanu
  • Computer Science
    Proceedings CVPR '89: IEEE Computer Society Conference on Computer Vision and Pattern Recognition
  • 1989
TLDR
The main problems of the classic FOE approach are discussed, concentrating on the details of computing the fuzzy FOE for a camera undergoing translation and rotation in 3D space.
Dynamic scene understanding for autonomous mobile robots
  • W. Burger, B. Bhanu
  • Computer Science
    Proceedings CVPR '88: The Computer Society Conference on Computer Vision and Pattern Recognition
  • 1988
TLDR
An approach to dynamic scene analysis is presented which departs from previous work by emphasizing a qualitative strategy of reasoning and modeling, offering superior robustness and flexibility over traditional numerical techniques that are often ill-conditioned and noise-sensitive.
Estimation of Object Motion Parameters from Noisy Images
  • T. Broida, R. Chellappa
  • Mathematics, Medicine
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 1986
TLDR
An approach is presented for the estimation of object motion parameters based on a sequence of noisy images that may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images are available.
Passive navigation
TLDR
A method is proposed for determining the motion of a body relative to a fixed environment using the changing image seen by a camera attached to the body using a least-squares approach which minimizes some measure of the discrepancy between the measured flow and that predicted from the computed motion parameters.
Processing translational motion sequences
  • D. Lawton
  • Mathematics, Computer Science
    Comput. Vis. Graph. Image Process.
  • 1983
TLDR
This procedure, which requires no restrictions on the direction of motion, nor the location and shape of environmental objects, has been applied successfully to real-world image sequences from several different task domains.
Computing Dense Displacement Fields With Confidence Measures In Scenes Containing Occlusion
  • P. Anandan
  • Mathematics, Engineering
    Other Conferences
  • 1985
Matching successive frames of a dynamic image sequence using area correlation has been studied for many years by researchers in machine vision. Most of these efforts have gone into improving the
Determining Three-Dimensional Motion and Structure from Optical Flow Generated by Several Moving Objects
  • Gilad Adiv
  • Computer Science, Medicine
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 1985
TLDR
A new approach for the interpretation of optical flow fields is presented in which the flow field is partitioned into connected segments of flow vectors, each segment consistent with the rigid motion of a roughly planar surface.
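Adiv's segment-wise interpretation rests on the fact that the instantaneous flow of a rigid, roughly planar surface follows an 8-parameter quadratic model that can be fitted to each segment by linear least squares. A hedged sketch of such a fit (the function name and parameterization details are illustrative):

```python
import numpy as np

def fit_quadratic_flow(points, flow):
    """Least-squares fit of the 8-parameter quadratic flow model of a
    rigid planar surface under perspective projection:
        u = a1 + a2*x + a3*y + a7*x^2 + a8*x*y
        v = a4 + a5*x + a6*y + a7*x*y + a8*y^2
    points: (N, 2) image coordinates; flow: (N, 2) flow vectors.
    Returns the parameter vector (a1..a8)."""
    x, y = points[:, 0], points[:, 1]
    z = np.zeros_like(x)
    o = np.ones_like(x)
    # One row per u-equation, one per v-equation; a7 and a8 are
    # shared between the two components.
    rows_u = np.stack([o, x, y, z, z, z, x * x, x * y], axis=1)
    rows_v = np.stack([z, z, z, o, x, y, x * y, y * y], axis=1)
    A = np.vstack([rows_u, rows_v])
    b = np.concatenate([flow[:, 0], flow[:, 1]])
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a
```

Segments whose flow is well explained by one such parameter vector are hypothesized to belong to a single rigidly moving surface; large residuals signal a segment boundary or an independently moving object.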
Determining The Instantaneous Direction Of Motion From Optical Flow Generated By A Curvilinearly Moving Observer
  • K. Prazdny
  • Mathematics, Engineering
    Other Conferences
  • 1981
A method is described capable of decomposing the optical flow into its rotational and translational components. The translational component is extracted implicitly by locating the focus of expansion
Disparity Analysis of Images
  • S. Barnard, W. Thompson
  • Mathematics, Medicine
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 1980
TLDR
An algorithm for matching images of real-world scenes is presented which quickly converges to good estimates of disparity that reflect the spatial organization of the scene.