• Corpus ID: 239016006

Low-Precision Quantization for Efficient Nearest Neighbor Search

@article{Ko2021LowPrecisionQF,
  title={Low-Precision Quantization for Efficient Nearest Neighbor Search},
  author={Anthony Ko and Iman Keivanloo and Vihan Lakshman and Eric Schkufza},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.08919}
}
Fast k-Nearest Neighbor search over real-valued vector spaces (kNN) is an important algorithmic task for information retrieval and recommendation systems. We present a method for using reduced precision to represent vectors through quantized integer values, enabling both a reduction in the memory overhead of indexing these vectors and faster distance computations at query time. While most traditional quantization techniques focus on minimizing the reconstruction error between a point and its… 
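The abstract describes representing vectors as low-precision integers so that distances can be computed in integer arithmetic. A minimal sketch of that idea, using uniform scalar quantization to 8-bit codes (an illustrative baseline, not the paper's exact method; all function names here are hypothetical):

```python
import numpy as np

def quantize(vectors, bits=8):
    """Uniform scalar quantization: map floats to integer codes in
    [0, 2^bits - 1]. Illustrative sketch, not the paper's method."""
    lo, hi = float(vectors.min()), float(vectors.max())
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    codes = np.round((vectors - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def int_l2_sq(a, b):
    """Squared L2 distance computed entirely in integer arithmetic.
    Up to the shared scale factor, this is proportional to the squared
    distance between the dequantized vectors."""
    d = a.astype(np.int32) - b.astype(np.int32)
    return int(np.dot(d, d))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 16)).astype(np.float32)
codes, lo, scale = quantize(X)

# Brute-force nearest neighbor of the first point over the integer codes.
q = codes[0]
nn = min(range(1, len(codes)), key=lambda i: int_l2_sq(q, codes[i]))
```

Because every database vector shrinks from 4 bytes per dimension to 1, the index is 4x smaller, and the inner loop uses cheap integer subtract/multiply instead of floating-point operations.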


References

SHOWING 1-10 OF 31 REFERENCES
Product Quantization for Nearest Neighbor Search
This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately.
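The decomposition described above can be sketched briefly: split each vector into m subvectors, learn a small codebook per subspace with k-means, and score queries with per-subspace lookup tables (asymmetric distance computation). A minimal illustration under those assumptions, with hypothetical function names:

```python
import numpy as np

def pq_train(X, m=4, k=16, iters=10, seed=0):
    """Train a product quantizer: run k-means independently in each of
    the m subspaces. Illustrative sketch, not a production trainer."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    ds = d // m  # dimensions per subspace (assumes m divides d)
    codebooks = []
    for j in range(m):
        sub = X[:, j * ds:(j + 1) * ds]
        cent = sub[rng.choice(n, k, replace=False)].copy()
        for _ in range(iters):
            assign = np.argmin(((sub[:, None, :] - cent[None]) ** 2).sum(-1), axis=1)
            for c in range(k):
                pts = sub[assign == c]
                if len(pts):
                    cent[c] = pts.mean(0)
        codebooks.append(cent)
    return codebooks

def pq_encode(X, codebooks):
    """Encode each vector as m small codeword indices."""
    m, ds = len(codebooks), X.shape[1] // len(codebooks)
    codes = np.empty((X.shape[0], m), dtype=np.uint8)
    for j, cent in enumerate(codebooks):
        sub = X[:, j * ds:(j + 1) * ds]
        codes[:, j] = np.argmin(((sub[:, None, :] - cent[None]) ** 2).sum(-1), axis=1)
    return codes

def pq_adc(q, codes, codebooks):
    """Asymmetric distance: build one lookup table per subspace for the
    query, then score every database code by m table lookups."""
    m, ds = len(codebooks), len(q) // len(codebooks)
    tables = [((codebooks[j] - q[j * ds:(j + 1) * ds]) ** 2).sum(1) for j in range(m)]
    return sum(tables[j][codes[:, j]] for j in range(m))

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 16)).astype(np.float32)
books = pq_train(X, m=4, k=16)
codes = pq_encode(X, books)
dists = pq_adc(X[0], codes, books)
```

With m codebooks of k centers each, the quantizer addresses k^m distinct cells while storing only m*k centroids, which is what makes the approach compact.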
Composite Quantization for Approximate Nearest Neighbor Search
This paper presents a novel compact coding approach, composite quantization, for approximate nearest neighbor search. The idea is to approximate a vector using the composition of several elements selected from the dictionaries.
Sparse composite quantization
Sparse composite quantization is developed, which constructs sparse dictionaries; the benefit is that distance evaluation between the query and a dictionary element (a sparse vector) is accelerated using efficient sparse vector operations, greatly reducing the cost of distance table computation.
Optimized Product Quantization for Approximate Nearest Neighbor Search
  • T. Ge, Kaiming He, Qifa Ke, Jian Sun
  • Mathematics, Computer Science
    2013 IEEE Conference on Computer Vision and Pattern Recognition
  • 2013
This paper optimizes product quantization by minimizing quantization distortions with respect to the space decomposition and the quantization codebooks, and presents two novel optimization methods: a non-parametric method that alternately solves two smaller sub-problems, and a parametric method that guarantees the optimal solution if the input data follows a Gaussian distribution.
FANNG: Fast Approximate Nearest Neighbour Graphs
  • Ben Harwood, T. Drummond
  • Mathematics, Computer Science
    2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2016
A new method for approximate nearest neighbour search on large datasets of high-dimensional feature vectors, such as SIFT or GIST descriptors, is presented; it is significantly more efficient than existing state-of-the-art methods.
Fast k nearest neighbor search using GPU
A CUDA implementation of the "brute force" kNN search is presented, showing a speed increase on synthetic and real data of up to one or two orders of magnitude depending on the data, with quasi-linear behavior with respect to data size over a given, practical range.
Additive Quantization for Extreme Vector Compression
A new compression scheme for high-dimensional vectors is introduced that approximates each vector as a sum of M codewords, one from each of M different codebooks; this leads to lower coding approximation error, higher accuracy of approximate nearest neighbor search on datasets of visual descriptors, and lower image classification error.
Cartesian K-Means
New models with a compositional parameterization of cluster centers are developed, so representational capacity increases super-linearly in the number of parameters, allowing one to effectively quantize data using billions or trillions of centers.
K-Means Hashing: An Affinity-Preserving Quantization Method for Learning Binary Compact Codes
  • Kaiming He, Fang Wen, Jian Sun
  • Mathematics, Computer Science
    2013 IEEE Conference on Computer Vision and Pattern Recognition
  • 2013
A novel affinity-preserving k-means algorithm is proposed which simultaneously performs k-means clustering and learns the binary indices of the quantized cells, and outperforms various state-of-the-art hashing encoding methods.
Practical and Optimal LSH for Angular Distance
This work shows the existence of a Locality-Sensitive Hashing (LSH) family for the angular distance that yields an approximate Near Neighbor Search algorithm with the asymptotically optimal running time exponent and establishes a fine-grained lower bound for the quality of any LSH family for angular distance.
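For context on what an LSH family for angular distance looks like, the classic random-hyperplane scheme (SimHash) is the simplest member: the probability that two vectors disagree on a hash bit equals their angle divided by pi. A brief sketch of that baseline (not the asymptotically optimal construction this paper proposes):

```python
import numpy as np

def simhash(X, n_bits=16, seed=0):
    """Random-hyperplane LSH for angular distance: each bit records
    which side of a random hyperplane a vector falls on. Pr[bit
    differs] = angle(x, y) / pi. Illustrative baseline only."""
    rng = np.random.default_rng(seed)
    H = rng.standard_normal((X.shape[1], n_bits))  # random hyperplane normals
    return X @ H > 0  # boolean signature per vector

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 32))
sig = simhash(X)
```

Signatures are grouped into hash-table buckets so that angularly close vectors tend to collide; candidates from colliding buckets are then re-ranked exactly.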