Corpus ID: 8130260

Stacked Quantizers for Compositional Vector Compression

Julieta Martinez, Holger H. Hoos, J. Little
Recently, Babenko and Lempitsky introduced Additive Quantization (AQ), a generalization of Product Quantization (PQ) where a non-independent set of codebooks is used to compress vectors into small binary codes. Unfortunately, under this scheme encoding cannot be done independently in each codebook, and optimal encoding is an NP-hard problem. In this paper, we observe that PQ and AQ are both compositional quantizers that lie on the extremes of the codebook dependence-independence assumption, and… 
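The contrast the abstract draws can be made concrete with a toy sketch of PQ's independent, per-subspace encoding: each subvector is assigned to its nearest codeword in that subspace's codebook, with no interaction between codebooks. All function names and codebook values below are illustrative, not from the paper:

```python
def nearest(codebook, sub):
    """Index of the codeword closest (squared L2) to `sub`."""
    return min(range(len(codebook)),
               key=lambda i: sum((c - s) ** 2 for c, s in zip(codebook[i], sub)))

def pq_encode(x, codebooks):
    """Encode x with one independent codebook per subspace."""
    m = len(codebooks)                 # number of subspaces
    d = len(x) // m                    # subvector dimension
    return [nearest(codebooks[j], x[j * d:(j + 1) * d]) for j in range(m)]

def pq_decode(codes, codebooks):
    """Reconstruct by concatenating the selected codewords."""
    out = []
    for j, c in enumerate(codes):
        out.extend(codebooks[j][c])
    return out

# Two subspaces with two toy codewords each.
codebooks = [
    [[0.0, 0.0], [1.0, 1.0]],          # codebook for dims 0-1
    [[0.0, 1.0], [1.0, 0.0]],          # codebook for dims 2-3
]
codes = pq_encode([0.9, 1.1, 0.1, 0.9], codebooks)
print(codes)                            # → [1, 0]
print(pq_decode(codes, codebooks))      # → [1.0, 1.0, 0.0, 1.0]
```

Because each codebook is searched independently, encoding costs only M small nearest-neighbor searches; AQ drops that independence, which is exactly what makes its encoding NP-hard.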

Citations
Beyond Product Quantization: Deep Progressive Quantization for Image Retrieval
Proposes a deep progressive quantization (DPQ) model as an alternative to PQ for large-scale image retrieval; it is trained once for all code lengths and therefore requires less computation time.
Stacked Product Quantization for Nearest Neighbor Search on Large Datasets
This paper proposes a new vector quantization method, SPQ, which combines the strengths of PQ and SQ; experiments demonstrate that SPQ generates codebooks and encodings faster than SQ while maintaining the same quantization error.
LSQ++: Lower Running Time and Higher Recall in Multi-codebook Quantization
This work benchmarks a series of MCQ baselines on an equal footing and analyzes their recall-vs-running-time performance, observing that local search quantization (LSQ) is in practice much faster than its competitors, but is not the most accurate method in all cases.
Deep Recurrent Quantization for Generating Sequential Binary Codes
This work proposes a Deep Recurrent Quantization (DRQ) architecture that generates sequential binary codes and achieves performance comparable to or better than the state of the art for image retrieval.
Autoregressive Image Generation using Residual Quantization
This study proposes the two-stage framework, which consists of Residual-Quantized VAE (RQ-VAE) and RQ-Transformer, to effectively generate high-resolution images and outperforms the existing AR models on various benchmarks of unconditional and conditional image generation.
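Residual quantization, which RQ-VAE builds on, and which is also the greedy idea behind stacked quantizers, encodes a vector stage by stage: each codebook quantizes the residual left by the previous stages, and decoding sums the selected codewords. A minimal illustrative sketch with toy codebooks and hypothetical helper names:

```python
def nearest(codebook, v):
    """Index of the codeword closest (squared L2) to `v`."""
    return min(range(len(codebook)),
               key=lambda i: sum((c - x) ** 2 for c, x in zip(codebook[i], v)))

def rq_encode(x, codebooks):
    """Assign codes sequentially: each stage quantizes the current residual."""
    residual = list(x)
    codes = []
    for cb in codebooks:               # one stage per codebook
        i = nearest(cb, residual)
        codes.append(i)
        residual = [r - c for r, c in zip(residual, cb[i])]
    return codes

def rq_decode(codes, codebooks):
    """Reconstruct as the sum of the selected codewords."""
    out = [0.0] * len(codebooks[0][0])
    for cb, i in zip(codebooks, codes):
        out = [o + c for o, c in zip(out, cb[i])]
    return out

codebooks = [
    [[0.0, 0.0], [1.0, 1.0]],          # coarse stage
    [[0.0, 0.2], [0.2, 0.0]],          # refinement stage
]
codes = rq_encode([1.2, 1.0], codebooks)
print(codes)                            # → [1, 1]
print(rq_decode(codes, codebooks))      # → [1.2, 1.0]
```

The codes form a coarse-to-fine sequence, which is what lets such schemes serve one model for several code lengths: truncating the code list still yields a usable (coarser) reconstruction.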
3D Self-Attention for Unsupervised Video Quantization
This paper makes a first attempt to combine quantization with video retrieval in a method called 3D-UVQ, which obtains high retrieval accuracy at low storage cost; experiments demonstrate that it significantly outperforms the state of the art.
Iteratively Multiple Projections Optimization for Product Quantization in Nearest Neighbor Search
A novel distortion model is proposed that jointly trains the multiple projections and the fine quantizers; the quantizer of each subspace and the partition of the training set are optimized iteratively, and experiments verify the benefit of optimizing the space decomposition and the quantizers jointly.
Polysemous Codes
Polysemous codes are introduced, which offer both the distance estimation quality of product quantization and the efficient comparison of binary codes with Hamming distance, and their design is inspired by algorithms introduced in the 90's to construct channel-optimized vector quantizers.
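The cheap binary comparison that polysemous codes preserve is plain Hamming distance: the popcount of an XOR, used as a fast first-pass filter before a finer quantization-based distance. A small illustrative sketch (all values are made up):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary codes stored as integers."""
    return bin(a ^ b).count("1")

database = [0b10110100, 0b10110101, 0b01001011]
query = 0b10110111

# Keep only candidates within a Hamming radius; the survivors would then
# be reranked with a more precise distance estimator.
candidates = [c for c in database if hamming(query, c) <= 2]
print([bin(c) for c in candidates])    # → ['0b10110100', '0b10110101']
```

The appeal is that the XOR-and-popcount comparison compiles to a handful of machine instructions, so millions of database codes can be filtered per query before any floating-point distance is computed.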
Approximate Search with Quantized Sparse Representations
This paper proposes to approximate database vectors by constrained sparse coding, where possible atom weights are restricted to belong to a finite subset, thereby allowing us to index a large collection such as the BIGANN billion-sized benchmark.

References

Optimized Product Quantization
  • T. Ge, Kaiming He, Qifa Ke, Jian Sun
  • Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2014
This paper optimizes PQ by minimizing quantization distortions w.r.t. the space decomposition and the quantization codebooks, and evaluates the optimized product quantizers in three applications: compact encoding for exhaustive ranking, inverted multi-indexing for non-exhaustive search, and compacting image representations for image retrieval.
Iterative quantization: A procrustean approach to learning binary codes
A simple and efficient alternating minimization scheme is proposed for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube.
Additive Quantization for Extreme Vector Compression
A new compression scheme for high-dimensional vectors that approximates the vectors using sums of M codewords coming from M different codebooks is introduced, which leads to lower coding approximation errors, higher accuracy of approximate nearest neighbor search in the datasets of visual descriptors, and lower image classification error.
Improving the Fisher Kernel for Large-Scale Image Classification
In an evaluation involving hundreds of thousands of training images, it is shown that classifiers learned on Flickr groups perform surprisingly well and that they can complement classifiers learned on more carefully annotated datasets.
Cartesian K-Means
New models with a compositional parameterization of cluster centers are developed, so representational capacity increases super-linearly in the number of parameters, allowing one to effectively quantize data using billions or trillions of centers.
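The super-linear capacity behind such compositional parameterizations is simple arithmetic: M codebooks of K codewords each store only M·K vectors, yet their combinations yield K^M distinct centers. A toy demonstration (sizes and values are illustrative):

```python
from itertools import product

K, M = 4, 3                            # toy codebook size and codebook count
# M codebooks of K two-dimensional codewords; values chosen so all
# concatenations are distinct.
codebooks = [[[float(j * 10 + k), float(k)] for k in range(K)]
             for j in range(M)]

# Enumerate every center: the concatenation of one codeword per codebook.
centers = {tuple(c for word in combo for c in word)
           for combo in product(*codebooks)}

print(sum(len(cb) for cb in codebooks))   # codewords stored: M*K = 12
print(len(centers))                        # representable centers: K**M = 64
```

With, say, M = 8 codebooks of K = 256 codewords, the same arithmetic gives 2048 stored codewords but 256^8 ≈ 1.8 × 10^19 addressable centers, which is the "billions or trillions" regime the summary refers to.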
Return of the Devil in the Details: Delving Deep into Convolutional Nets
It is shown that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods and result in an analogous performance boost; it is also identified that the dimensionality of the CNN output layer can be reduced significantly without adversely affecting performance.
High-dimensional signature compression for large-scale image classification
This work reports results on two large databases, ImageNet and a dataset of 1M Flickr images, showing that the storage of the signatures can be reduced by a factor of 64 to 128 with little loss in accuracy, and that integrating the decompression into the classifier learning yields an efficient and scalable training algorithm.
Aggregating local descriptors into a compact image representation
This work proposes a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation, and shows how to jointly optimize the dimension reduction and the indexing algorithm.
PiCoDes: Learning a Compact Code for Novel-Category Recognition
PiCoDes, a very compact image descriptor that nevertheless allows high performance on object category recognition, is presented, together with an alternation scheme and a convex upper bound that demonstrate excellent performance in practice.
Spectral Hashing
The problem of finding the best code for a given dataset is closely related to graph partitioning and can be shown to be NP-hard; a spectral relaxation yields solutions that are simply a subset of thresholded eigenvectors of the graph Laplacian.