Corpus ID: 244908992

Projective Clustering Product Quantization

@article{Krishnan2021ProjectiveCP,
  title={Projective Clustering Product Quantization},
  author={Adit Krishnan and Edo Liberty},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.02179}
}
This paper suggests the use of projective clustering based product quantization for improving nearest-neighbor and maximum-inner-product search (MIPS) algorithms. We provide anisotropic and quantized variants of projective clustering which outperform previous clustering methods used for this problem, such as ScaNN. We show that even with comparable running-time complexity, in terms of lookup-multiply-adds, projective clustering produces more quantization centers, resulting in more accurate dot product estimates.
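To make the approach concrete, here is a minimal NumPy sketch of projective k-means clustering, the building block the paper quantizes: each cluster is a low-rank affine subspace, and each point is assigned to whichever subspace reconstructs it best. This is a plain illustrative variant, not the paper's anisotropic or quantized algorithm, and all names and parameters are illustrative.

```python
import numpy as np

def projective_kmeans(X, k, r, n_iter=20, seed=0):
    """Assign each row of X to one of k rank-r affine subspaces (sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    assign = rng.integers(0, k, size=n)              # random initial assignment
    for _ in range(n_iter):
        means, bases = [], []
        for c in range(k):
            Xc = X[assign == c]
            if len(Xc) == 0:                          # re-seed empty clusters
                Xc = X[rng.integers(0, n, size=1)]
            mu = Xc.mean(axis=0)
            # top-r principal directions of the centered cluster
            _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
            means.append(mu)
            bases.append(Vt[:min(r, Vt.shape[0])])
        # squared residual of projecting every point onto every subspace
        errs = np.empty((n, k))
        for c in range(k):
            Y = X - means[c]
            proj = Y @ bases[c].T @ bases[c]
            errs[:, c] = np.sum((Y - proj) ** 2, axis=1)
        assign = errs.argmin(axis=1)
    return assign, means, bases
```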


References

Showing 1-10 of 48 references
k-means projective clustering
An extension of the k-means clustering algorithm for projective clustering in arbitrary subspaces is presented, taking into account the inherent trade-off between the dimension of a subspace and the induced clustering error.
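The trade-off the summary mentions can be seen directly with the projective_kmeans sketch above: raising the subspace rank r lowers the clustering error but grows the model by roughly k*(r+1)*d parameters (one basis plus one mean per cluster). A hypothetical measurement loop:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 32)).astype(np.float32)
for r in (1, 2, 4, 8):
    assign, means, bases = projective_kmeans(X, k=16, r=r)
    err = 0.0
    for c in range(16):
        Y = X[assign == c] - means[c]
        err += np.sum((Y - Y @ bases[c].T @ bases[c]) ** 2)
    print(f"r={r}: total squared error {err:.1f}, parameters {16 * (r + 1) * 32}")
```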
Competitive Quantization for Approximate Nearest Neighbor Search
An extensive set of experimental results and comparative evaluations show that CompQ outperforms the state of the art while retaining comparable computational complexity.
A Monte Carlo algorithm for fast projective clustering
We propose a mathematical formulation for the notion of an optimal projective cluster, starting from natural requirements on the density of points in subspaces. This allows us to develop a Monte Carlo algorithm for fast projective clustering.
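The summary does not spell out the algorithm, but a generic Monte Carlo search for a dense projective cluster can look like the RANSAC-flavored sketch below: sample a few points, fit a low-rank subspace through them, and keep the candidate capturing the most points within a residual threshold eps. This is only a guess at the flavor of such methods, not this paper's procedure.

```python
import numpy as np

def monte_carlo_subspace(X, r, eps, n_trials=200, seed=0):
    """Search for one dense rank-r projective cluster (illustrative)."""
    rng = np.random.default_rng(seed)
    best = (None, None, -1)                       # (mean, basis, inlier count)
    for _ in range(n_trials):
        S = X[rng.choice(len(X), size=r + 1, replace=False)]
        mu = S.mean(axis=0)
        _, _, Vt = np.linalg.svd(S - mu, full_matrices=False)
        B = Vt[:r]                                # candidate rank-r basis
        Y = X - mu
        resid = np.sum((Y - Y @ B.T @ B) ** 2, axis=1)
        count = int((resid < eps ** 2).sum())     # inliers near the subspace
        if count > best[2]:
            best = (mu, B, count)
    return best
```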
Optimized Product Quantization for Approximate Nearest Neighbor Search
  • T. Ge, Kaiming He, Qifa Ke, Jian Sun
  • Computer Science
    2013 IEEE Conference on Computer Vision and Pattern Recognition
  • 2013
This paper optimizes product quantization by minimizing quantization distortions with respect to the space decomposition and the quantization codebooks, and presents two novel optimization methods: a non-parametric method that alternately solves two smaller sub-problems, and a parametric method that guarantees the optimal solution if the input data follow a Gaussian distribution.
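The non-parametric method alternates the two sub-problems: with the rotation fixed, run ordinary product quantization; with the codes fixed, update the rotation by solving an orthogonal Procrustes problem. A compact sketch, assuming the dimension d is divisible by the number of subspaces M (names are illustrative):

```python
import numpy as np

def opq(X, M, ksub, n_iter=10, seed=0):
    """Alternating OPQ sketch: returns a learned orthogonal rotation R."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    R = np.linalg.qr(rng.normal(size=(d, d)))[0]     # random orthogonal init
    sub = d // M                                      # subvector length
    for _ in range(n_iter):
        Z = X @ R
        Zhat = np.empty_like(Z)
        for m in range(M):                            # PQ step: k-means per block
            block = Z[:, m * sub:(m + 1) * sub]
            C = block[rng.choice(n, ksub, replace=False)]
            for _ in range(5):
                codes = ((block[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
                for j in range(ksub):
                    pts = block[codes == j]
                    if len(pts):
                        C[j] = pts.mean(axis=0)
            Zhat[:, m * sub:(m + 1) * sub] = C[codes]
        # Procrustes step: R = argmin ||X R - Zhat||_F over orthogonal R
        U, _, Vt = np.linalg.svd(X.T @ Zhat)
        R = U @ Vt
    return R
```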
Multiscale Quantization for Fast Similarity Search
A multiscale quantization approach for fast similarity search on large, high-dimensional datasets, in which a separate scalar quantizer of the residual norm scales is learned in a stochastic gradient descent framework to minimize the overall quantization error.
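The core idea can be sketched without the SGD training the paper uses: split each residual into a direction (handled by PQ-style codebooks) and a norm (handled by a separate scalar quantizer, here a 1-D k-means). All names are illustrative:

```python
import numpy as np

def scalar_quantizer(norms, levels=16, n_iter=25):
    """1-D k-means over residual norms; returns levels and assignments."""
    q = np.quantile(norms, np.linspace(0, 1, levels))    # quantile init
    for _ in range(n_iter):
        codes = np.abs(norms[:, None] - q[None]).argmin(axis=1)
        for j in range(levels):
            sel = norms[codes == j]
            if len(sel):
                q[j] = sel.mean()
    return q, codes

# Usage sketch: given residuals left over after coarse quantization,
#   norms = np.linalg.norm(residuals, axis=1)
#   dirs  = residuals / np.maximum(norms[:, None], 1e-12)
#   q, codes = scalar_quantizer(norms)
# and reconstruct each residual as q[codes][:, None] times the PQ-decoded
# direction (pq_decode is a hypothetical placeholder for that step).
```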
Quantization based Fast Inner Product Search
Experimental results on a variety of datasets, including those arising from deep neural networks, show that the proposed approach significantly outperforms existing state-of-the-art MIPS methods.
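The mechanism shared by quantization-based MIPS methods, and the source of the "lookup-multiply-adds" counted in the abstract above, is worth seeing once: precompute the dot product of the query with every codeword in each subspace, then score each database vector with M table lookups and adds. A generic PQ sketch, not this paper's exact scheme:

```python
import numpy as np

def build_lut(query, codebooks):
    """codebooks: list of M arrays of shape (ksub, d // M)."""
    M = len(codebooks)
    sub = len(query) // M
    # lut[m, j] = <query sub-block m, codeword j of subspace m>
    return np.stack([codebooks[m] @ query[m * sub:(m + 1) * sub]
                     for m in range(M)])

def approx_dots(codes, lut):
    """codes: (n, M) PQ codes. Returns approximate <query, x> for all n."""
    return lut[np.arange(lut.shape[0]), codes].sum(axis=1)
```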
Composite Quantization for Approximate Nearest Neighbor Search
This paper presents a novel compact coding approach, composite quantization, for approximate nearest neighbor search. The idea is to use the composition of several elements selected from the
Iterative Quantization: A Procrustean Approach to Learning Binary Codes for Large-Scale Image Retrieval
This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections by proposing a simple and efficient alternating minimization algorithm, dubbed iterative quantization (ITQ), and demonstrating an application of ITQ to learning binary attributes or "classemes" on the ImageNet dataset.
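The alternating minimization is short enough to sketch: after PCA, rotate the data toward the corners of the binary hypercube, alternating a sign step (fix the rotation, update the codes) with an orthogonal Procrustes step (fix the codes, update the rotation). Assuming V holds zero-centered, PCA-projected data:

```python
import numpy as np

def itq(V, n_iter=50, seed=0):
    """ITQ sketch: returns binary codes and the learned rotation R."""
    rng = np.random.default_rng(seed)
    c = V.shape[1]
    R = np.linalg.qr(rng.normal(size=(c, c)))[0]     # random orthogonal init
    for _ in range(n_iter):
        B = np.sign(V @ R)                 # code step: binarize for fixed R
        U, _, Wt = np.linalg.svd(B.T @ V)  # rotation step (Procrustes)
        R = (U @ Wt).T
    return np.sign(V @ R), R
```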
End-To-End Supervised Product Quantization for Image Search and Retrieval
  • Benjamin Klein, Lior Wolf
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
To the authors' knowledge, this is the first work to introduce a dictionary-based representation that is inspired by Product Quantization and is learned end-to-end, thus benefiting from the supervised signal.
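The trick that makes such a dictionary-based representation trainable end-to-end is typically a soft assignment: replace the hard nearest-codeword choice with a softmax over codewords so gradients reach both the encoder and the codebook. A schematic NumPy forward pass (the paper's actual layer and loss may differ):

```python
import numpy as np

def soft_pq(block, codebook, temperature=1.0):
    """block: (n, sub) features; codebook: (ksub, sub), both learnable."""
    d2 = ((block[:, None, :] - codebook[None]) ** 2).sum(-1)
    d2 -= d2.min(axis=1, keepdims=True)        # stabilize the softmax
    w = np.exp(-d2 / temperature)
    w /= w.sum(axis=1, keepdims=True)          # soft assignment weights
    return w @ codebook                        # differentiable reconstruction
```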
Spreading vectors for similarity search
This work designs and trains a neural net whose last layer forms a fixed, parameter-free quantizer, such as pre-defined points on a hypersphere, and proposes a new regularizer derived from the Kozachenko-Leonenko differential entropy estimator to enforce uniformity, combining it with a locality-aware triplet loss.