Distinctive Image Features from Scale-Invariant Keypoints

@article{Lowe2004DistinctiveIF,
  title={Distinctive Image Features from Scale-Invariant Keypoints},
  author={David G. Lowe},
  journal={International Journal of Computer Vision},
  year={2004},
  volume={60},
  pages={91-110}
}
  • D. Lowe
  • Published 1 November 2004
  • Computer Science
  • International Journal of Computer Vision
This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with… 
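
The matching pipeline the abstract refers to (scale- and rotation-invariant keypoints with distinctive descriptors matched between views) can be sketched with OpenCV's SIFT implementation. This is a minimal sketch, not the paper's reference implementation; the file names are placeholders, and the 0.75 threshold is a common choice for the nearest-neighbour distance-ratio test rather than a value fixed by the paper.

```python
# Minimal sketch: detect SIFT keypoints in two views and keep matches that
# pass the nearest-neighbour distance-ratio test. File names are placeholders.
import cv2

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints and 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# For each descriptor, find its two nearest neighbours in the other image and
# accept the match only if the closest is clearly better than the second closest.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} tentative matches after the ratio test")
```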

Evaluation of Clustering Configurations for Object Retrieval Using SIFT Features

Different configurations for clustering sets of keypoints according to their pose parameters (x and y location, scale, and orientation, following Lowe's approach) are presented and evaluated.
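
As a rough illustration of clustering keypoints by pose parameters (the specific configurations evaluated in the paper are not reproduced here), the sketch below bins matches by location, scale octave, and orientation, in the spirit of the Hough-style clustering Lowe uses; the bin widths are arbitrary assumptions.

```python
# Illustrative pose-binning sketch; bin widths are assumptions, not the
# configurations evaluated in the paper.
import math
from collections import defaultdict

def pose_bins(keypoints, loc_bin=32.0, scale_base=2.0, ori_bin=30.0):
    """Group (x, y, scale, orientation_deg) keypoints that share a coarse pose bin."""
    clusters = defaultdict(list)
    for x, y, scale, ori in keypoints:
        key = (int(x // loc_bin),
               int(y // loc_bin),
               round(math.log(scale, scale_base)),   # octave-like scale bin
               int((ori % 360.0) // ori_bin))
        clusters[key].append((x, y, scale, ori))
    return clusters

# The most populated bin is a candidate cluster of pose-consistent keypoints:
# largest = max(pose_bins(matched_keypoints).values(), key=len)
```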

Geometric features extraction

This chapter discusses global and local features, explains how to extract corners, edges, contours, and salient regions, which are among the features most commonly used in image analysis algorithms, and describes feature detection together with a number of efficient descriptors, including SIFT, ASIFT, and SURF.

Improved SIFT-Features Matching for Object Recognition

An improvement to the original SIFT algorithm is proposed that provides more reliable feature matching for object recognition; the main idea is to divide the features extracted from both the test image and the model object image into several sub-collections before they are matched.
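
The summary does not say which attribute defines the sub-collections, so the sketch below assumes, purely for illustration, a partition of OpenCV SIFT keypoints by coarse scale band; matching would then proceed band against band rather than over the full descriptor sets.

```python
# Assumed illustration: partition keypoints/descriptors into sub-collections
# by a coarse scale band before matching (the criterion actually used by the
# paper may differ).
import math
from collections import defaultdict

def split_by_scale_band(keypoints, descriptors):
    """Return {band: ([keypoints], [descriptor rows])} grouped by log2 of keypoint size."""
    bands = defaultdict(lambda: ([], []))
    for kp, des in zip(keypoints, descriptors):
        band = int(round(math.log2(kp.size)))   # kp.size is the OpenCV keypoint diameter
        bands[band][0].append(kp)
        bands[band][1].append(des)
    return bands

# Matching is then restricted to corresponding sub-collections of the model
# image and the test image, which shrinks each nearest-neighbour search.
```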

Incorporating Background Invariance into Feature-Based Object Recognition

  • A. Stein, M. Hebert
  • Computer Science
    2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05) - Volume 1
  • 2005
Improvements to the popular scale-invariant feature transform (SIFT) are suggested that incorporate local object boundary information, so that the resulting feature detection and descriptor-creation processes are invariant to changes in background.

A Method to Enhance Homogeneous Distribution of Matched Features for Image Matching

A coarse geometric transformation between the two images is calculated, through which the detected feature points in one image are projected onto the other image and matched to neighboring feature points of that fixed image within a predetermined spatial distance.
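
A minimal sketch of this projection-and-radius idea, under assumed details (a homography estimated from initial matches as the coarse transform, and a fixed pixel radius):

```python
# Sketch under assumed details. coarse_matches is a list of index pairs (i, j)
# between pts_moving and pts_fixed; radius is the preset spatial distance in pixels.
import cv2
import numpy as np

def radius_constrained_matches(pts_moving, pts_fixed, coarse_matches, radius=10.0):
    src = np.float32([pts_moving[i] for i, _ in coarse_matches]).reshape(-1, 1, 2)
    dst = np.float32([pts_fixed[j] for _, j in coarse_matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)    # coarse geometric transform

    projected = cv2.perspectiveTransform(
        np.float32(pts_moving).reshape(-1, 1, 2), H).reshape(-1, 2)

    fixed = np.float32(pts_fixed)
    accepted = []
    for i, p in enumerate(projected):
        dist = np.linalg.norm(fixed - p, axis=1)
        j = int(np.argmin(dist))
        if dist[j] <= radius:                # neighbor within the preset spatial distance
            accepted.append((i, j))
    return accepted
```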

Shape Signature Matching for Object Identification Invariant to Image Transformations and Occlusion

This paper introduces a novel shape matching approach for the automatic identification of real-world objects in complex scenes: each object is represented by a set of one-dimensional signals called shape signatures, and the cross-correlation metric is used to gauge the degree of similarity between objects.
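
As an assumed illustration of the general idea (the specific signatures used by the paper are not detailed in this summary), one common one-dimensional shape signature is the centroid-distance function of a contour, and cross-correlation over circular shifts gives a similarity score that tolerates a shifted starting point:

```python
# Assumed illustration: centroid-distance signature plus normalised circular
# cross-correlation; the paper's own signatures may differ.
import numpy as np

def centroid_distance_signature(contour, n=128):
    """Resample a closed contour (Nx2 array) to n samples of distance-to-centroid."""
    c = contour.mean(axis=0)
    d = np.linalg.norm(contour - c, axis=1)
    idx = np.linspace(0, len(d) - 1, n).astype(int)
    sig = d[idx]
    return (sig - sig.mean()) / (sig.std() + 1e-9)   # normalise for correlation

def max_cross_correlation(sig_a, sig_b):
    """Best cross-correlation score over all circular shifts of the second signature."""
    n = len(sig_a)
    return max(float(np.dot(sig_a, np.roll(sig_b, s))) / n for s in range(n))
```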

SMD: A Locally Stable Monotonic Change Invariant Feature Descriptor

A new feature descriptor is presented that achieves invariance to monotonic changes in the intensity of a patch by looking at the ordering of intensities between certain pixels in the patch.
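
A hedged illustration of the underlying principle (not the SMD construction itself): pairwise intensity-order comparisons are unchanged by any strictly increasing intensity transform, so a descriptor built from such comparisons is invariant to monotonic intensity changes.

```python
# Principle only, not the SMD descriptor: bits record intensity orderings
# between fixed pixel pairs, which survive any strictly monotonic change.
import numpy as np

def order_descriptor(patch, pairs):
    """One bit per pixel pair: set if the first pixel of the pair is brighter."""
    flat = patch.ravel().astype(np.float32)
    return np.array([flat[a] > flat[b] for a, b in pairs], dtype=np.uint8)

rng = np.random.default_rng(0)
pairs = rng.integers(0, 31 * 31, size=(256, 2))        # fixed random pixel pairs
patch = rng.integers(0, 256, size=(31, 31))
gamma = (patch / 255.0) ** 0.5                          # strictly monotonic intensity change
assert np.array_equal(order_descriptor(patch, pairs),
                      order_descriptor(gamma, pairs))
```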

On the Repeatability and Quality of Keypoints for Local Feature-based 3D Object Retrieval from Cluttered Scenes

An algorithm for detecting highly repeatable keypoints on 3D models and partial views of objects is presented, together with an automatic scale-selection technique for extracting multi-scale, scale-invariant features that allow objects to be matched at different, unknown scales.

Rotation invariant feature lines transform for image matching

A method that uses feature lines to achieve more robust image matching is proposed; it comprises feature-line detection, feature-vector description, and matching, and the devised feature lines transform is invariant to both rotation and scaling.
...

References

SHOWING 1-10 OF 49 REFERENCES

Object recognition from local scale-invariant features

  • D. Lowe
  • Computer Science
    Proceedings of the Seventh IEEE International Conference on Computer Vision
  • 1999
Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.

Local feature view clustering for 3D object recognition

  • D. Lowe
  • Computer Science
    Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001
  • 2001
This paper presents a method for combining multiple images of a 3D object into a single model representation that provides for recognition of 3D objects from any viewpoint, the generalization of models to non-rigid changes, and improved robustness through the combination of features acquired under a range of imaging conditions.

Invariant Features from Interest Point Groups

This work introduces a family of features which use groups of interest points to form geometrically invariant descriptors of image regions to ensure robust matching between images in which there are large changes in viewpoint, scale and illumination.

Reliable feature matching across widely separated views

  • A. Baumberg
  • Computer Science
    Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.PR00662)
  • 2000
A robust method is presented for automatically matching features in images corresponding to the same physical point on an object seen from two arbitrary viewpoints; it is optimised for a structure-from-motion application in which unreliable matches should be ignored, at the expense of reducing the number of feature matches.

Recognition Using Region Correspondences

  • R. Basri, D. Jacobs
  • Computer Science
    Proceedings of IEEE International Conference on Computer Vision
  • 1995
The new approach combines many of the advantages of the previous two approaches, while avoiding some of their pitfalls, and makes use of region information that reflects the true shape of the object.

Probabilistic Models of Appearance for 3-D Object Recognition

This work describes how to model the appearance of a 3-D object using multiple views, learn such a model from training images, and use the model for object recognition, and demonstrates that OLIVER is capable of learning to recognize complex objects in cluttered images, while acquiring models that represent those objects using relatively few views.

Object class recognition by unsupervised scale-invariant learning

The flexible nature of the model is demonstrated by excellent results over a range of datasets including geometrically constrained classes (e.g. faces, cars) and flexible objects (such as animals).

Robust Wide Baseline Stereo from Maximally Stable Extremal Regions

The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints, is studied and an efficient and practically fast detection algorithm is presented for an affinely-invariant stable subset of extremal regions, the maximally stable extremal region (MSER).
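
A minimal sketch of detecting maximally stable extremal regions with OpenCV's MSER implementation; the image path is a placeholder and the parameters are library defaults, not those used in the paper.

```python
# Minimal MSER detection sketch using OpenCV defaults.
import cv2

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)             # pixel lists and bounding boxes
print(f"{len(regions)} MSER regions detected")
```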

Phase-Based Local Features

The results show that the phase-based local feature leads to better performance under common illumination changes and 2-D rotation, while giving comparable performance under scale changes.

Recognition without Correspondence using Multidimensional Receptive Field Histograms

This article presents a technique in which the appearance of objects is represented by the joint statistics of local neighborhood operators, constituting a new class of appearance-based techniques for computer vision.
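
As an assumed illustration of the idea, a joint histogram of a few local operator responses (simple image derivatives here) stands in for the multidimensional receptive field histogram; the operators and bin count are illustrative choices only, not those of the paper.

```python
# Assumed illustration: joint statistics of local operator responses as an
# appearance representation that needs no point correspondences.
import numpy as np

def receptive_field_histogram(image, bins=16):
    dy, dx = np.gradient(image.astype(np.float32))            # simple local operators
    sample = np.stack([dx.ravel(), dy.ravel()], axis=1)
    hist, _ = np.histogramdd(sample, bins=bins)
    return hist / hist.sum()                                   # normalized joint statistics

# Two images can then be compared with, e.g., histogram intersection of their
# receptive-field histograms, without establishing any point correspondences.
```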