Low-Precision Quantization for Efficient Nearest Neighbor Search
@article{Ko2021LowPrecisionQF,
  title   = {Low-Precision Quantization for Efficient Nearest Neighbor Search},
  author  = {Anthony Ko and Iman Keivanloo and Vihan Lakshman and Eric Schkufza},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2110.08919}
}
Fast k-nearest neighbor (kNN) search over real-valued vector spaces is an important algorithmic task for information retrieval and recommendation systems. We present a method for using reduced precision to represent vectors through quantized integer values, enabling both a reduction in the memory overhead of indexing these vectors and faster distance computations at query time. While most traditional quantization techniques focus on minimizing the reconstruction error between a point and its…
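The abstract's idea of trading precision for memory and speed can be illustrated with a minimal sketch: map float vectors to 8-bit integers with a shared affine transform, then compute distances directly on the integer codes. The global min/max scaling scheme below is a common uniform-quantization choice used for illustration, not necessarily the paper's exact method.

```python
import numpy as np

def quantize(vectors, bits=8):
    """Map float vectors to unsigned integers via a shared affine transform."""
    lo, hi = vectors.min(), vectors.max()
    scale = (hi - lo) / (2**bits - 1)
    codes = np.round((vectors - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def quantized_l2(query_code, db_codes):
    """Squared L2 distance computed directly on integer codes.

    Under a shared affine transform, integer distances approximately
    preserve the ranking of the true L2 distances (up to the constant
    scale factor), so the nearest neighbor can be found without
    dequantizing."""
    diff = db_codes.astype(np.int32) - query_code.astype(np.int32)
    return (diff * diff).sum(axis=1)

rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 64)).astype(np.float32)
q = rng.standard_normal(64).astype(np.float32)

# Quantize database and query together so they share one transform.
codes, lo, scale = quantize(np.vstack([db, q]))
db_codes, q_code = codes[:-1], codes[-1]

approx_nn = int(np.argmin(quantized_l2(q_code, db_codes)))
exact_nn = int(np.argmin(((db - q) ** 2).sum(axis=1)))
```

With 8 bits the quantization error per coordinate is small relative to typical inter-point distances, so the integer-domain nearest neighbor usually matches the exact one; the memory saving over float32 is 4x.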
References
Showing 1-10 of 31 references
Product Quantization for Nearest Neighbor Search
- IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2011
This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to…
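The decomposition idea summarized above can be sketched concretely: split each vector into M subvectors, learn a small codebook per subspace, and answer queries with per-subspace lookup tables (asymmetric distance computation). The parameters below (M=4 subspaces, K=16 centroids, a few k-means iterations) are illustrative choices, not the paper's settings.

```python
import numpy as np

def kmeans(data, k, iters=10, seed=0):
    """A few Lloyd iterations; enough to illustrate codebook training."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        d = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            members = data[labels == j]
            if len(members):
                centroids[j] = members.mean(0)
    return centroids

def pq_train(X, M=4, K=16):
    """One codebook per subspace of the Cartesian product decomposition."""
    return [kmeans(s, K) for s in np.split(X, M, axis=1)]

def pq_encode(X, codebooks):
    """Encode each vector as M nearest-centroid indices."""
    sub = np.split(X, len(codebooks), axis=1)
    return np.stack([((s[:, None] - cb[None]) ** 2).sum(-1).argmin(1)
                     for s, cb in zip(sub, codebooks)], axis=1)

def pq_search(q, codes, codebooks):
    """Asymmetric distance: per-subspace query-to-centroid tables, then lookups."""
    qs = np.split(q, len(codebooks))
    tables = [((cb - s) ** 2).sum(1) for s, cb in zip(qs, codebooks)]
    return sum(t[codes[:, m]] for m, t in enumerate(tables)).argmin()

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 32))
codebooks = pq_train(X)
codes = pq_encode(X, codebooks)            # (200, 4) array of codebook indices
nn = int(pq_search(X[0], codes, codebooks))
```

Each vector is stored as M small integers instead of D floats, and a query costs M table lookups plus a sum per database point rather than a full D-dimensional distance.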
Composite Quantization for Approximate Nearest Neighbor Search
- ICML
- 2014
This paper presents a novel compact coding approach, composite quantization, for approximate nearest neighbor search. The idea is to use the composition of several elements selected from the…
Sparse composite quantization
- 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2015
Sparse composite quantization constructs sparse dictionaries so that distance evaluation between the query and a dictionary element (a sparse vector) can be accelerated with efficient sparse vector operations, substantially reducing the cost of distance table computation.
Optimized Product Quantization for Approximate Nearest Neighbor Search
- 2013 IEEE Conference on Computer Vision and Pattern Recognition
- 2013
This paper optimizes product quantization by minimizing quantization distortions with respect to the space decomposition and the quantization codebooks, and presents two novel optimization methods: a non-parametric method that alternately solves two smaller sub-problems, and a parametric method that guarantees the optimal solution if the input data follows a Gaussian distribution.
FANNG: Fast Approximate Nearest Neighbour Graphs
- 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
A new method for approximate nearest neighbour search on large datasets of high-dimensional feature vectors, such as SIFT or GIST descriptors, that is significantly more efficient than existing state-of-the-art methods.
Fast k nearest neighbor search using GPU
- 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
- 2008
A CUDA implementation of the "brute force" kNN search is presented, showing a speed increase on synthetic and real data of up to one or two orders of magnitude depending on the data, with quasi-linear behavior with respect to the data size in a given, practical range.
Additive Quantization for Extreme Vector Compression
- 2014 IEEE Conference on Computer Vision and Pattern Recognition
- 2014
A new compression scheme for high-dimensional vectors that approximates the vectors using sums of M codewords coming from M different codebooks is introduced, which leads to lower coding approximation errors, higher accuracy of approximate nearest neighbor search in the datasets of visual descriptors, and lower image classification error.
Cartesian K-Means
- 2013 IEEE Conference on Computer Vision and Pattern Recognition
- 2013
New models with a compositional parameterization of cluster centers are developed, so representational capacity increases super-linearly in the number of parameters, allowing one to effectively quantize data using billions or trillions of centers.
K-Means Hashing: An Affinity-Preserving Quantization Method for Learning Binary Compact Codes
- 2013 IEEE Conference on Computer Vision and Pattern Recognition
- 2013
A novel Affinity-Preserving K-means algorithm that simultaneously performs k-means clustering and learns the binary indices of the quantized cells, outperforming various state-of-the-art hashing encoding methods.
Practical and Optimal LSH for Angular Distance
- NIPS
- 2015
This work shows the existence of a Locality-Sensitive Hashing (LSH) family for the angular distance that yields an approximate Near Neighbor Search algorithm with the asymptotically optimal running time exponent and establishes a fine-grained lower bound for the quality of any LSH family for angular distance.
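The LSH idea for angular distance can be illustrated with the classic random hyperplane (SimHash) family, which is simpler than the asymptotically optimal cross-polytope family the paper analyzes but shows the same principle: vectors with a small angle between them hash to the same bits with high probability.

```python
import numpy as np

def hyperplane_hash(vectors, planes):
    """Hash each vector to a bit string: the sign of its projection onto
    each random hyperplane's normal. For two vectors at angle theta, each
    bit collides with probability 1 - theta/pi."""
    return (vectors @ planes.T) > 0

rng = np.random.default_rng(0)
planes = rng.standard_normal((16, 64))          # 16 random hyperplanes in R^64

v = rng.standard_normal(64)
v_close = v + 0.01 * rng.standard_normal(64)    # tiny angular perturbation
v_far = rng.standard_normal(64)                 # unrelated direction

h = lambda x: hyperplane_hash(x[None], planes)[0]
agree_close = (h(v) == h(v_close)).mean()       # near 1.0 for a small angle
agree_far = (h(v) == h(v_far)).mean()           # near 0.5 for random directions
```

Because the hash depends only on direction, scaling a vector never changes its bits, which is what makes the family suitable for angular (cosine) distance.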