Corpus ID: 8130260

Stacked Quantizers for Compositional Vector Compression

@article{Martinez2014StackedQF,
  title={Stacked Quantizers for Compositional Vector Compression},
  author={Julieta Martinez and Holger H. Hoos and J. Little},
  journal={ArXiv},
  year={2014},
  volume={abs/1411.2173}
}
Recently, Babenko and Lempitsky introduced Additive Quantization (AQ), a generalization of Product Quantization (PQ) where a non-independent set of codebooks is used to compress vectors into small binary codes. Unfortunately, under this scheme encoding cannot be done independently in each codebook, and optimal encoding is an NP-hard problem. In this paper, we observe that PQ and AQ are both compositional quantizers that lie on the extremes of the codebook dependence-independence assumption, and… 
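Since the abstract contrasts the two encoding regimes, a minimal NumPy sketch may help fix ideas: PQ encodes each subvector independently in its own codebook, while a stacked (residual) scheme encodes sequentially, each codebook quantizing what the previous stages left over. The codebooks below are random stand-ins for learned ones, and the sizes (128-D vectors, 8 codebooks of 256 centers) are illustrative assumptions, not values from the paper.

```python
# Sketch contrasting PQ encoding (independent per subspace) with
# stacked/residual encoding (sequential; each codebook quantizes the
# residual left by the previous stages). Codebooks are random stand-ins;
# in practice they are learned, e.g. with k-means.
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 128, 8, 256            # dimensionality, codebooks, centers per codebook
x = rng.normal(size=D)

# --- Product Quantization: split x into M subvectors, encode each independently.
pq_codebooks = rng.normal(size=(M, K, D // M))
pq_codes = [
    np.argmin(((sub - cb) ** 2).sum(axis=1))
    for sub, cb in zip(x.reshape(M, D // M), pq_codebooks)
]

# --- Stacked (residual) quantization: full-dimensional codebooks applied in
# sequence; codebook i encodes the residual of the first i-1 stages.
sq_codebooks = rng.normal(size=(M, K, D))
residual, sq_codes = x.copy(), []
for cb in sq_codebooks:
    idx = np.argmin(((residual - cb) ** 2).sum(axis=1))
    sq_codes.append(idx)
    residual -= cb[idx]

# Both schemes store M one-byte codes; reconstruction is the concatenation
# of selected centers (PQ) or their sum (stacked quantization).
```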

Citations

Beyond Product Quantization: Deep Progressive Quantization for Image Retrieval
TLDR
A deep progressive quantization (DPQ) model is proposed as an alternative to PQ for large-scale image retrieval; it is trained once for different code lengths and therefore requires less computation time.
Stacked Product Quantization for Nearest Neighbor Search on Large Datasets
TLDR
This paper proposes a new vector quantization method called SPQ, which combines the strengths of PQ and SQ, and demonstrates that SPQ can generate codebooks and encodings faster than SQ while maintaining the same quantization error.
Deep Recurrent Quantization for Generating Sequential Binary Codes
TLDR
This work proposes a Deep Recurrent Quantization (DRQ) architecture that generates sequential binary codes and achieves comparable or even better performance than the state of the art for image retrieval.
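A property the two summaries above rely on is that stacked/residual codes are prefix-decodable: truncating the code stack still yields a valid, coarser reconstruction, which is what "one model, many code lengths" schemes exploit. A one-function sketch, assuming the codebook/code layout of the earlier snippet:

```python
# Prefix decoding of a stacked/residual code: reconstruct from only the
# first m stages, trading accuracy for code length.
def decode_prefix(codebooks, codes, m):
    return sum(cb[c] for cb, c in zip(codebooks[:m], codes[:m]))

# Usage with the earlier sketch: decode_prefix(sq_codebooks, sq_codes, 4)
```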
Autoregressive Image Generation using Residual Quantization
TLDR
This study proposes a two-stage framework consisting of a Residual-Quantized VAE (RQ-VAE) and an RQ-Transformer to effectively generate high-resolution images; it outperforms existing AR models on various benchmarks of unconditional and conditional image generation.
3D Self-Attention for Unsupervised Video Quantization
TLDR
This paper makes a first attempt to combine a quantization method with video retrieval, proposing 3D-UVQ, which obtains high retrieval accuracy with low storage cost and is shown to significantly outperform the state of the art.
Iteratively Multiple Projections Optimization for Product Quantization in Nearest Neighbor Search
TLDR
A novel distortion model is proposed that can jointly train the multiple projections and the fine quantizers of each subspace; the partition of the training set is optimized iteratively, and the benefit is verified when the space decomposition and the quantizers are optimized jointly.
Polysemous Codes
TLDR
Polysemous codes are introduced, which offer both the distance estimation quality of product quantization and the efficient comparison of binary codes with Hamming distance; their design is inspired by algorithms introduced in the 1990s to construct channel-optimized vector quantizers.
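The mechanism this summary describes can be sketched compactly: because a polysemous code is a PQ code whose index assignment has been optimized so that Hamming distance over the code bytes roughly tracks inter-centroid distance, search can filter candidates with a cheap Hamming test and compute the exact asymmetric (lookup-table) distance only for the survivors. The sketch below assumes the optimized assignment has already been learned, and that `query_tables[m][k]` holds the squared distance from the m-th query subvector to center k; both are illustrative, not the paper's API.

```python
import numpy as np

def hamming(a, b):
    """Bitwise Hamming distance between two uint8 PQ-code arrays."""
    return np.unpackbits(np.bitwise_xor(a, b)).sum()

def adc(query_tables, codes):
    """Asymmetric distance: sum of per-codebook lookup-table entries."""
    return sum(table[c] for table, c in zip(query_tables, codes))

def search(query_tables, query_code, database_codes, threshold):
    # Cheap Hamming filter first, exact ADC re-ranking of the survivors.
    survivors = [c for c in database_codes if hamming(query_code, c) <= threshold]
    return sorted(survivors, key=lambda c: adc(query_tables, c))
```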
Approximate Search with Quantized Sparse Representations
TLDR
This paper proposes to approximate database vectors by constrained sparse coding, where possible atom weights are restricted to belong to a finite subset, thereby allowing us to index a large collection such as the BIGANN billion-sized benchmark.
Multiscale Quantization for Fast Similarity Search
TLDR
A multiscale quantization approach for fast similarity search on large, high-dimensional datasets, where a separate scalar quantizer of the residual norms is learned within a stochastic gradient descent framework to minimize the overall quantization error.
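As a rough illustration of the structure this summary describes, one can quantize a residual's direction and its norm separately, with the norms handled by a small scalar quantizer. The paper trains that quantizer by stochastic gradient descent; the stand-in below uses plain 1-D k-means, so treat it as a sketch of the structure rather than the method.

```python
import numpy as np

def fit_norm_quantizer(norms, n_levels=16, iters=20):
    # 1-D k-means over residual norms: a stand-in for the SGD-trained
    # scalar quantizer described above.
    levels = np.quantile(norms, np.linspace(0, 1, n_levels))
    for _ in range(iters):
        assign = np.argmin(np.abs(norms[:, None] - levels[None, :]), axis=1)
        for k in range(n_levels):
            if np.any(assign == k):
                levels[k] = norms[assign == k].mean()
    return levels

def encode(residual, levels):
    # Store the unit direction (to be PQ-coded) plus the nearest norm level.
    norm = np.linalg.norm(residual)
    return residual / (norm + 1e-12), int(np.argmin(np.abs(levels - norm)))
```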
...

References

SHOWING 1-10 OF 20 REFERENCES
Optimized Product Quantization
  • T. Ge, Kaiming He, Qifa Ke, Jian Sun · IEEE Transactions on Pattern Analysis and Machine Intelligence · 2014
TLDR
This paper optimizes PQ by minimizing quantization distortion w.r.t. the space decomposition and the quantization codebooks, and evaluates the optimized product quantizers in three applications: compact encoding for exhaustive ranking, inverted multi-indexing for non-exhaustive search, and compacting image representations for image retrieval.
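The alternation this summary refers to, in its non-parametric form, can be sketched in a few lines: alternately fit per-subspace k-means to the rotated data and update the rotation by solving an orthogonal Procrustes problem against the current reconstruction. Sizes and iteration counts below are illustrative, and SciPy's `kmeans2` stands in for a full PQ trainer.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def opq_nonparametric(X, M=4, K=16, iters=10, seed=0):
    n, D = X.shape
    d = D // M
    rng = np.random.default_rng(seed)
    R, _ = np.linalg.qr(rng.normal(size=(D, D)))  # random orthogonal init
    for _ in range(iters):
        Y = X @ R
        Yhat = np.empty_like(Y)
        # PQ step: quantize each subspace of the rotated data independently.
        for m in range(M):
            sub = Y[:, m * d:(m + 1) * d]
            centers, labels = kmeans2(sub, K, minit="points")
            Yhat[:, m * d:(m + 1) * d] = centers[labels]
        # Rotation step: orthogonal Procrustes between X and the reconstruction.
        U, _, Vt = np.linalg.svd(X.T @ Yhat)
        R = U @ Vt
    return R
```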
Iterative quantization: A procrustean approach to learning binary codes
TLDR
A simple and efficient alternating minimization scheme is proposed for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube.
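The alternating minimization in this summary is short enough to write out. Given data V that is assumed to be zero-centered and already PCA-projected, ITQ alternates between snapping the rotated data to the nearest binary vertex and solving an orthogonal Procrustes problem for the rotation; a minimal sketch:

```python
import numpy as np

def itq(V, iters=50, seed=0):
    n, c = V.shape
    rng = np.random.default_rng(seed)
    # Random orthogonal initialization for the rotation R.
    R, _ = np.linalg.qr(rng.normal(size=(c, c)))
    for _ in range(iters):
        B = np.sign(V @ R)                  # fix R, update binary codes
        U, _, Vt = np.linalg.svd(V.T @ B)   # fix B, update R (Procrustes)
        R = U @ Vt
    return np.sign(V @ R), R
```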
Improving the Fisher Kernel for Large-Scale Image Classification
TLDR
In an evaluation involving hundreds of thousands of training images, it is shown that classifiers learned on Flickr groups perform surprisingly well and that they can complement classifiers learned on more carefully annotated datasets.
Cartesian K-Means
TLDR
New models with a compositional parameterization of cluster centers are developed, so representational capacity increases super-linearly in the number of parameters, allowing one to effectively quantize data using billions or trillions of centers.
Return of the Devil in the Details: Delving Deep into Convolutional Nets
TLDR
It is shown that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods and result in an analogous performance boost, and it is identified that the dimensionality of the CNN output layer can be reduced significantly without an adverse effect on performance.
High-dimensional signature compression for large-scale image classification
TLDR
This work reports results on two large databases, ImageNet and a dataset of 1M Flickr images, showing that the storage of the authors' signatures can be reduced by a factor of 64 to 128 with little loss in accuracy, and that integrating the decompression into the classifier learning yields an efficient and scalable training algorithm.
Aggregating local descriptors into a compact image representation
TLDR
This work proposes a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation, and shows how to jointly optimize the dimension reduction and the indexing algorithm.
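The aggregation step this summary describes is simple to state concretely: assign each local descriptor to its nearest visual word, accumulate the residuals per word, then flatten and L2-normalize. A minimal sketch, assuming the cluster centers were trained beforehand:

```python
import numpy as np

def vlad(descriptors, centers):
    # descriptors: (n, d) local features; centers: (K, d) visual words.
    K, d = centers.shape
    dists = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    assign = dists.argmin(axis=1)           # nearest visual word per descriptor
    v = np.zeros((K, d))
    for k in range(K):
        if np.any(assign == k):
            v[k] = (descriptors[assign == k] - centers[k]).sum(axis=0)
    agg = v.ravel()                          # flatten to a K*d vector
    return agg / (np.linalg.norm(agg) + 1e-12)
```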
PiCoDes: Learning a Compact Code for Novel-Category Recognition
TLDR
PiCoDes, a very compact image descriptor that nevertheless allows high performance on object category recognition, is presented, together with an alternation scheme and a convex upper bound that demonstrate excellent performance in practice.
Visualizing and Understanding Convolutional Networks
TLDR
A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models; used in a diagnostic role, it helps find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration
TLDR
A system is presented that answers the question, “What is the fastest approximate nearest-neighbor algorithm for my data?”, along with a new algorithm that applies priority search on hierarchical k-means trees, which is found to provide the best known performance on many datasets.
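The highlighted algorithm admits a compact sketch: build a tree by recursive k-means, then search by descending toward the closest branch while parking the siblings in a priority queue, revisiting the most promising ones until a fixed budget of point checks is spent. This is a simplified illustration, not FLANN's implementation; SciPy's `kmeans2` again stands in for the tree-building clusterer.

```python
import heapq
import numpy as np
from scipy.cluster.vq import kmeans2

def build(points, ids, branching=8, leaf_size=16):
    # Recursive k-means: internal nodes hold branch centers, leaves hold ids.
    if len(ids) <= leaf_size:
        return ("leaf", ids)
    centers, labels = kmeans2(points[ids], branching, minit="points")
    kept = [b for b in range(branching) if np.any(labels == b)]
    if len(kept) < 2:                       # degenerate split: stop here
        return ("leaf", ids)
    children = [build(points, ids[labels == b], branching, leaf_size) for b in kept]
    return ("node", centers[kept], children)

def search(tree, points, q, checks=64):
    best, heap, tiebreak = (np.inf, -1), [(0.0, 0, tree)], 1
    while heap and checks > 0:
        _, _, node = heapq.heappop(heap)     # most promising unexplored branch
        if node[0] == "leaf":
            for i in node[1]:
                best = min(best, (float(np.sum((points[i] - q) ** 2)), int(i)))
                checks -= 1
        else:
            _, centers, children = node
            for d, child in zip(((centers - q) ** 2).sum(axis=1), children):
                heapq.heappush(heap, (float(d), tiebreak, child))
                tiebreak += 1
    return best  # (squared distance, index) of the best point examined

# Usage: tree = build(X, np.arange(len(X))); d2, i = search(tree, X, q)
```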
...