Visual homing with a pan-tilt based stereo camera

@inproceedings{Nirmal2013VisualHW,
  title={Visual homing with a pan-tilt based stereo camera},
  author={P. Nirmal and Damian M. Lyons},
  booktitle={Electronic Imaging},
  year={2013}
}
  • P. Nirmal, D. Lyons
  • Published in Electronic Imaging, 4 February 2013
  • Computer Science, Engineering
Visual homing is a navigation method in which a stored image of the goal location is compared with the current view to determine how to navigate back to the goal. It is theorized that insects such as ants and bees employ visual homing to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both aim at determining the distance and direction to the goal location. Navigational…
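As a rough illustration of the feature-based approach described above (a minimal sketch under simplifying assumptions, not the method proposed in this paper), one could match SIFT keypoints between the stored goal snapshot and the current view and turn the mean horizontal landmark shift into a heading correction. The function name homing_heading and the fov_deg parameter are hypothetical.

```python
import cv2
import numpy as np

def homing_heading(goal_img, current_img, fov_deg=60.0):
    """Crude heading correction (degrees) from goal snapshot to current view."""
    sift = cv2.SIFT_create()
    kp_g, des_g = sift.detectAndCompute(goal_img, None)
    kp_c, des_c = sift.detectAndCompute(current_img, None)

    # Match descriptors and keep only distinctive correspondences (ratio test)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_g, des_c, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Horizontal pixel shift of each matched landmark (goal -> current view)
    dx = np.array([kp_c[m.trainIdx].pt[0] - kp_g[m.queryIdx].pt[0]
                   for m in good])

    # Convert the mean pixel shift into an approximate turn angle
    return float(np.mean(dx)) / goal_img.shape[1] * fov_deg
```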
1 Citation
Homing with stereovision
TLDR
The algorithm, Homing with Stereovision (HSV), utilizes a stereo camera mounted on a pan-tilt unit to build composite wide-field stereo images and estimate distance and orientation from the robot to the goal location.
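The HSV summary above mentions estimating distance from stereo. As a point of reference only (the standard pinhole-stereo relation, not the HSV implementation), depth can be recovered from disparity given the focal length and baseline; focal_px and baseline_m below stand for assumed calibration values.

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a landmark from its stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: a landmark with 24 px disparity, f = 700 px, B = 0.12 m
# lies at roughly 700 * 0.12 / 24 = 3.5 m.
```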

References

Showing 1-10 of 21 references
Homing in scale space
  • David Churchill, A. Vardy
  • Computer Science
  • 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • 2008
TLDR
A novel approach to visual homing is presented that uses scale-change information from the Scale Invariant Feature Transform (SIFT) to compute landmark correspondences and is able to determine the direction of the goal in the robot's frame of reference.
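The scale-change idea summarized above can be sketched as follows (a simplified illustration of the principle, not Churchill and Vardy's exact procedure): features that appear at a larger scale in the current view than in the snapshot are closer now than they were at the goal, so the goal lies away from them; features at a smaller scale pull the estimate toward them. Bearings and scales are assumed to come from matched SIFT keypoints.

```python
import numpy as np

def home_direction(bearings_rad, scales_goal, scales_current):
    """Estimated goal bearing (radians) from per-feature scale changes."""
    vec = np.zeros(2)
    for theta, s_g, s_c in zip(bearings_rad, scales_goal, scales_current):
        unit = np.array([np.cos(theta), np.sin(theta)])
        # Expanded feature (closer now): move away; contracted: move toward
        vec += -unit if s_c > s_g else unit
    return np.arctan2(vec[1], vec[0])
```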
An Orientation Invariant Visual Homing Algorithm
TLDR
This paper presents additional mathematical justification and experimental results for the visual homing algorithm known as Homing in Scale Space, which is far less constrained than existing methods in that it can infer the direction of translation without any estimate of the direction of rotation.
Vision-based robot homing in dynamic environments
TLDR
A robust and stable representation of the goal location is built using scale-invariant features (SIFT) as visual landmarks of the scene, followed by a matching and voting scheme that yields a description containing the most repeatable features, those that best represent the target location.
Biologically plausible visual homing methods based on optical flow techniques
TLDR
The analysis reveals that visual homing can succeed even in the presence of many incorrect feature correspondences, and that low-frequency features are sufficient for homing.
Local visual homing by matched-filter descent in image distances
TLDR
This work suggests a method based on the matched-filter concept that allows one to estimate the gradient of the distance measure without exploratory movements, and investigates the relation to differential flow methods applied to the local homing problem.
Where did I take that snapshot? Scene-based homing by image matching
TLDR
This work shows that most existing approaches to scene-based homing implicitly assume an isotropic landmark distribution, and proposes a homing scheme that uses parameterized displacement fields obtained from an approximation that incorporates prior knowledge about perspective distortions of the visual environment.
An approach to stereo-point cloud registration using image homographies
TLDR
An improvement is developed by using salient keypoints from successive video images to calculate an affine transformation estimate of the camera location, which provides ICP with an initial guess, reducing the computational time required for point-cloud registration and improving the quality of the registration.
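The general idea in the entry above, seeding ICP with an image-based motion estimate, could look roughly like this (a hypothetical sketch: the keypoint matching and the ICP step itself are omitted, and the 2-D affine model is only one possible choice):

```python
import cv2
import numpy as np

def affine_seed(pts_prev, pts_curr):
    """Robust 2-D affine estimate between matched keypoints of successive frames."""
    M, inliers = cv2.estimateAffinePartial2D(
        np.asarray(pts_prev, dtype=np.float32),
        np.asarray(pts_curr, dtype=np.float32),
        method=cv2.RANSAC)
    # M is a 2x3 matrix (rotation, scale, translation) usable as an ICP initial guess
    return M
```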
The role of homing in visual topological navigation
TLDR
A novel integrated indoor topological navigation framework that combines odometric motion with visual homing algorithms is presented; tests conducted in four real apartments and several typical indoor scenes show robustness to scene variation and real-time performance.
Low-Level Visual Homing
TLDR
A variant of the snapshot model for insect visual homing that operates directly on two-dimensional images of the real world; it aims to offer more biological plausibility than competing techniques because the processing applied is low-level and the information processed appears to be of the same sort processed by insects.
Local visual homing by warping of two-dimensional images
  • R. Möller
  • Computer Science
  • Robotics and Autonomous Systems
  • 2009
TLDR
This work describes how the performance of warping can be substantially improved by extending the method from one- to two-dimensional images, with only a moderate increase in the computational effort.