Object retrieval with large vocabularies and fast spatial matching

Abstract

In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees, which we show outperforms the current state of the art on an extensive ground truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora.
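The first stage of the pipeline the abstract describes, quantizing local descriptors into visual words and scoring corpus images through an inverted index before any spatial re-ranking, can be sketched as follows. This is a minimal illustrative assumption, not the authors' implementation: real systems quantize SIFT-like descriptors into vocabularies of around a million words (here visual words are plain integers), and the tf-idf weighting and length normalization are one common choice.

```python
import math
from collections import Counter, defaultdict

# Toy sketch of bag-of-visual-words retrieval with an inverted index.
# Visual words are small integers standing in for quantized descriptors;
# the database, vocabulary, and weighting are illustrative assumptions.

def build_index(db):
    """db maps image id -> list of visual-word ids.
    Returns an inverted index (word -> {image: count}) and idf weights."""
    inverted = defaultdict(dict)
    for img, words in db.items():
        for w, c in Counter(words).items():
            inverted[w][img] = c
    n = len(db)
    idf = {w: math.log(n / len(posting)) for w, posting in inverted.items()}
    return inverted, idf

def query(words, inverted, idf, db):
    """Score images against the query's visual words with tf-idf
    weighted dot products; only images sharing a word are touched."""
    scores = defaultdict(float)
    for w, qc in Counter(words).items():
        for img, c in inverted.get(w, {}).items():
            scores[img] += qc * c * idf.get(w, 0.0) ** 2
    # normalize by image length so feature-rich images are not favored
    for img in scores:
        scores[img] /= len(db[img])
    return sorted(scores.items(), key=lambda kv: -kv[1])

db = {"a": [1, 2, 3, 3], "b": [2, 4, 5], "c": [1, 3, 6]}
inverted, idf = build_index(db)
ranking = query([1, 3], inverted, idf, db)  # image "b" shares no word
```

In the paper's full system, the top-ranked short list from this stage is then re-ranked by spatial verification, which estimates a geometric transformation between the query region and each candidate and rejects matches whose features are not spatially consistent.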

DOI: 10.1109/CVPR.2007.383172

11 Figures and Tables

References (2 of 22 shown)

Randomized clustering forests for building fast and discriminative visual vocabularies

  • F. Moosmann, B. Triggs, F. Jurie
  • 2006

Enhancing RANSAC by generalized model optimization

  • O. Chum, J. Matas, Š. Obdržálek
  • 2004

Citations per Year

1,995 Citations

Semantic Scholar estimates that this publication has received between 1,815 and 2,195 citations based on the available data.
