Norm-Explicit Quantization: Improving Vector Quantization for Maximum Inner Product Search

@inproceedings{DAI2020NormExplicitQI,
  title={Norm-Explicit Quantization: Improving Vector Quantization for Maximum Inner Product Search},
  author={Xinyan Dai and Xiao Yan and Kelvin Kai Wing Ng and Jie Liu and James Cheng},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2020}
}
Vector quantization (VQ) techniques are widely used in similarity search for data compression, computation acceleration, etc. Originally designed for Euclidean distance, existing VQ techniques (e.g., PQ, AQ) explicitly or implicitly minimize the quantization error. In this paper, we present a new angle for analyzing the quantization error, decomposing it into a norm error and a direction error. We show that quantization errors in norm have much higher influence on inner…
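A minimal sketch of that decomposition (an illustration, not the paper's implementation; the dimension and noise scale below are arbitrary): for a vector x and its quantized approximation x_hat, the norm error measures the length mismatch and the direction error measures the mismatch on the unit sphere.

```python
import numpy as np

def norm_direction_errors(x, x_hat):
    """Split the quantization error of x_hat w.r.t. x into norm and direction parts."""
    nx, nq = np.linalg.norm(x), np.linalg.norm(x_hat)
    norm_error = abs(nx - nq)                               # length mismatch
    direction_error = np.linalg.norm(x / nx - x_hat / nq)   # mismatch on the unit sphere
    return norm_error, direction_error

x = np.random.randn(64)                  # arbitrary dimension
x_hat = x + 0.1 * np.random.randn(64)    # stand-in for a quantized approximation
print(norm_direction_errors(x, x_hat))
```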

Deep triplet residual quantization

Product Quantizer Aware Inverted Index for Scalable Nearest Neighbor Search

To address the raised question, a joint optimization of the coarse and fine quantizers is proposed by replacing the original objective of the fine quantizer with the end-to-end quantization distortion of the inverted index.

Cardinality Estimation in Inner Product Space

This article proposes a sampling-based algorithm that, in a pre-processing phase, builds trees of vectors via transformation to a Euclidean space and dimensionality reduction, and then samples vectors from the nodes that intersect with a search range on one of the trees.

AdaLSH: Adaptive LSH for Solving c-Approximate Maximum Inner Product Search Problem

A novel search method named Adaptive-LSH (AdaLSH) is proposed to solve MIPS; it gives a better probability guarantee of success than conventional algorithms and achieves lower running times on various datasets.

Linear-Time Self Attention with Codeword Histogram for Efficient Recommendation

This work proposes LISA (LInear-time Self Attention), which enjoys both the effectiveness of vanilla self-attention and the efficiency of sparse attention, and outperforms the state-of-the-art efficient attention methods in both performance and speed.

Reverse Maximum Inner Product Search: How to efficiently find users who would like to buy my item?

Simpfer, a simple, fast, and exact algorithm for reverse MIPS, is proposed, and it is theoretically demonstrated to outperform baselines employing state-of-the-art MIPS techniques.

Flashlight: Scalable Link Prediction with Effective Decoders

The Flashlight algorithm is proposed to accelerate top-scoring neighbor retrieval for HadamardMLP decoders: a sublinear algorithm that progressively applies approximate maximum inner product search (MIPS) techniques with adaptively adjusted query embeddings, improving the inference speed of link prediction.

Solving Diversity-Aware Maximum Inner Product Search Efficiently and Effectively

IP-Greedy is proposed, which incorporates new early-termination and skipping techniques into a greedy algorithm, making recommendation lists diverse while preserving high inner products between user and item vectors in the lists.

Approximate Top-k Inner Product Join with a Proximity Graph

This paper addresses the problem of top-k inner product join, which, given two sets of high-dimensional vectors and a result size k, outputs k pairs of vectors that have the largest inner product.
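A brute-force baseline makes that problem statement concrete (this is not the paper's method, whose proximity graph exists precisely to avoid the quadratic scan; set sizes below are illustrative):

```python
import heapq
import numpy as np

def topk_ip_join(A, B, k):
    """Return the k pairs (i, j, score) with the largest inner products A[i]·B[j]."""
    scores = A @ B.T                     # all pairwise inner products
    flat = [(-scores[i, j], i, j) for i in range(len(A)) for j in range(len(B))]
    return [(i, j, -s) for s, i, j in heapq.nsmallest(k, flat)]

A, B = np.random.randn(100, 16), np.random.randn(100, 16)
print(topk_ip_join(A, B, k=5))
```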

Efficient Retrieval of Matrix Factorization-Based Top-k Recommendations: A Survey of Recent Approaches

This work surveys recent advances and state-of-the-art approaches in the literature that enable fast and accurate retrieval for MF-based personalized recommendations, and includes analytical discussions of the approaches along different dimensions to give readers a more comprehensive understanding of the surveyed works.

References

Showing 1-10 of 36 references

Optimized Product Quantization for Approximate Nearest Neighbor Search

  • T. Ge, Kaiming He, Qifa Ke, Jian Sun
  • 2013 IEEE Conference on Computer Vision and Pattern Recognition, 2013
This paper optimizes product quantization by minimizing quantization distortions with respect to the space decomposition and the quantization codebooks, and presents two novel optimization methods: a non-parametric method that alternately solves two smaller sub-problems, and a parametric method that guarantees the optimal solution if the input data follows a Gaussian distribution.
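A simplified sketch of the non-parametric method (assuming scikit-learn's k-means; two subspaces and toy codebook sizes are arbitrary choices): it alternates a PQ step on the rotated data with an orthogonal Procrustes update of the rotation via SVD.

```python
import numpy as np
from sklearn.cluster import KMeans

def opq_train(X, n_sub=2, k=16, iters=5):
    d = X.shape[1]
    sub = d // n_sub
    R = np.eye(d)                                  # start from the identity rotation
    for _ in range(iters):
        Xr = X @ R
        X_hat = np.empty_like(Xr)
        for s in range(n_sub):                     # PQ step: quantize each subspace
            cols = slice(s * sub, (s + 1) * sub)
            km = KMeans(n_clusters=k, n_init=4).fit(Xr[:, cols])
            X_hat[:, cols] = km.cluster_centers_[km.labels_]
        U, _, Vt = np.linalg.svd(X.T @ X_hat)      # Procrustes step: refit the rotation
        R = U @ Vt
    return R

X = np.random.randn(500, 8)
R = opq_train(X)                                   # learned orthogonal rotation
```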

Quantization based Fast Inner Product Search

Experimental results on a variety of datasets, including those arising from deep neural networks, show that the proposed approach significantly outperforms existing state-of-the-art MIPS techniques.

Multiscale Quantization for Fast Similarity Search

A multiscale quantization approach for fast similarity search on large, high-dimensional datasets, in which a separate scalar quantizer of the residual norm scale is learned in a stochastic gradient descent framework to minimize the overall quantization error.
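A rough sketch of the scale idea (a uniform scalar quantizer stands in here for the learned, SGD-trained one in the paper; sizes are illustrative): residual directions are left for a vector quantizer while their norms are coded separately.

```python
import numpy as np

def scalar_quantize_norms(norms, n_levels=256):
    """Uniform scalar quantizer over the observed norm range."""
    lo, hi = norms.min(), norms.max()
    step = (hi - lo) / (n_levels - 1)
    codes = np.round((norms - lo) / step).astype(int)
    return codes, lo + codes * step                # codes and reconstructed scales

residuals = np.random.randn(1000, 16)
norms = np.linalg.norm(residuals, axis=1)
directions = residuals / norms[:, None]            # unit vectors, left for a VQ stage
codes, scales = scalar_quantize_norms(norms)
print(np.abs(norms - scales).max())                # worst-case scale error
```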

Revisiting Additive Quantization

It is demonstrated that the performance of AQ can be improved to surpass the state of the art by leveraging iterated local search, a stochastic local search approach known to work well for a range of NP-hard combinatorial problems.

Additive Quantization for Extreme Vector Compression

A new compression scheme for high-dimensional vectors that approximates the vectors using sums of M codewords coming from M different codebooks is introduced, which leads to lower coding approximation errors, higher accuracy of approximate nearest neighbor search in the datasets of visual descriptors, and lower image classification error.
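The additive model is compact: a vector is reconstructed as the sum of M codewords, one per codebook. The greedy encoder below is a deliberate simplification; exact encoding is NP-hard, which is why the papers above use beam search or iterated local search (sizes are illustrative).

```python
import numpy as np

def aq_encode_greedy(x, codebooks):
    """Pick one codeword per codebook greedily; codebooks: list of (K, d) arrays."""
    residual, codes = x.copy(), []
    for C in codebooks:
        idx = int(np.argmin(np.linalg.norm(residual - C, axis=1)))
        codes.append(idx)
        residual -= C[idx]
    return codes

def aq_decode(codes, codebooks):
    return sum(C[i] for C, i in zip(codebooks, codes))   # sum of M codewords

d, K, M = 16, 32, 4
codebooks = [0.5 * np.random.randn(K, d) for _ in range(M)]
x = np.random.randn(d)
print(np.linalg.norm(x - aq_decode(aq_encode_greedy(x, codebooks), codebooks)))
```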

Approximate Nearest Neighbor Search by Residual Vector Quantization

This paper introduces residual vector quantization based approaches that are appropriate for unstructured vectors; they are compared to two state-of-the-art methods, spectral hashing and product quantization, on both structured and unstructured datasets.
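The residual idea lends itself to a short sketch (assuming scikit-learn's k-means; stage count and codebook size are illustrative): each stage clusters whatever error the previous stage left behind.

```python
import numpy as np
from sklearn.cluster import KMeans

def rvq_train(X, n_stages=4, k=32):
    codebooks, residual = [], X.copy()
    for _ in range(n_stages):
        km = KMeans(n_clusters=k, n_init=4).fit(residual)
        codebooks.append(km.cluster_centers_)
        residual = residual - km.cluster_centers_[km.labels_]  # pass the leftover on
    return codebooks

X = np.random.randn(1000, 16)
codebooks = rvq_train(X)   # one codebook per residual stage
```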

Product Quantization for Nearest Neighbor Search

This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately.
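A minimal sketch of that idea (assuming scikit-learn's k-means; the subspace count and codebook size below are illustrative): split each vector into subvectors, cluster each subspace independently, and keep one code per subspace.

```python
import numpy as np
from sklearn.cluster import KMeans

def pq_train_encode(X, n_sub=4, k=16):
    d = X.shape[1]
    sub = d // n_sub
    codebooks, codes = [], []
    for s in range(n_sub):
        part = X[:, s * sub:(s + 1) * sub]         # one low-dimensional subspace
        km = KMeans(n_clusters=k, n_init=4).fit(part)
        codebooks.append(km.cluster_centers_)
        codes.append(km.labels_)                   # one sub-code per vector
    return codebooks, np.stack(codes, axis=1)

X = np.random.randn(1000, 32)
codebooks, codes = pq_train_encode(X)
print(codes.shape)                                 # (1000, 4): 4 sub-codes per vector
```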

Locally Optimized Product Quantization for Approximate Nearest Neighbor Search

A simple vector quantizer is presented that combines low distortion with fast search and applies it to approximate nearest neighbor (ANN) search in high dimensional spaces.

Tree quantization for large-scale similarity search and classification

In the experiments with diverse visual descriptors, tree quantization is shown to combine fast encoding and state-of-the-art accuracy in terms of the compression error, the retrieval performance, and the image classification error.

Composite Quantization for Approximate Nearest Neighbor Search

This paper presents a novel compact coding approach, composite quantization, for approximate nearest neighbor search. The idea is to use the composition of several elements selected from the dictionaries to approximate a vector and to represent it by a short code composed of the indices of the selected elements.