Stacked Quantizers for Compositional Vector Compression
@article{Martinez2014StackedQF,
  title   = {Stacked Quantizers for Compositional Vector Compression},
  author  = {Julieta Martinez and Holger H. Hoos and J. Little},
  journal = {ArXiv},
  year    = {2014},
  volume  = {abs/1411.2173}
}
Recently, Babenko and Lempitsky introduced Additive Quantization (AQ), a generalization of Product Quantization (PQ) where a non-independent set of codebooks is used to compress vectors into small binary codes. Unfortunately, under this scheme encoding cannot be done independently in each codebook, and optimal encoding is an NP-hard problem. In this paper, we observe that PQ and AQ are both compositional quantizers that lie on the extremes of the codebook dependence-independence assumption, and…
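To make the compositional idea concrete, the following is a minimal numpy sketch, not the paper's implementation, of the residual-style scheme the abstract alludes to: each codebook quantizes the residual left by the one before it, so encoding stays greedy and fast even though the codebooks are not independent. The function names and the codebook layout (a list of (K, d) arrays) are illustrative assumptions.

```python
import numpy as np

def stacked_encode(x, codebooks):
    """Greedily quantize the running residual with each codebook in turn."""
    residual = np.asarray(x, dtype=float).copy()
    codes = []
    for C in codebooks:                      # C: (K, d) array of codewords
        k = int(np.argmin(np.sum((C - residual) ** 2, axis=1)))
        codes.append(k)                      # nearest codeword in this codebook
        residual -= C[k]                     # pass the residual down the stack
    return codes

def stacked_decode(codes, codebooks):
    """The approximation is the sum of one codeword per codebook, as in AQ."""
    return sum(C[k] for C, k in zip(codebooks, codes))
```

Decoding is the same additive sum used by AQ; the schemes differ only in how codes are assigned, which is exactly the dependence-independence trade-off the abstract describes.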
34 Citations
Beyond Product Quantization: Deep Progressive Quantization for Image Retrieval
- Computer Science · IJCAI
- 2019
A deep progressive quantization (DPQ) model is proposed as an alternative to PQ for large-scale image retrieval; it is trained once for different code lengths and therefore requires less computation time.
Stacked Product Quantization for Nearest Neighbor Search on Large Datasets
- Computer Science · 2016 IEEE Trustcom/BigDataSE/ISPA
- 2016
This paper proposes a new vector quantization method called SPQ, which combines the strengths of PQ and SQ, and demonstrates that SPQ can generate codebooks and encodings faster than SQ while maintaining the same quantization error.
LSQ++: Lower Running Time and Higher Recall in Multi-codebook Quantization
- Computer Science · ECCV
- 2018
This work benchmarks a series of MCQ baselines on an equal footing and provides an analysis of their recall-vs-running-time performance, observing that local search quantization (LSQ) is in practice much faster than its competitors but not the most accurate method in all cases.
Deep Recurrent Quantization for Generating Sequential Binary Codes
- Computer Science · IJCAI
- 2019
This work proposes a Deep Recurrent Quantization (DRQ) architecture that generates sequential binary codes and achieves performance comparable to or better than the state of the art for image retrieval.
Autoregressive Image Generation using Residual Quantization
- Computer Science · ArXiv
- 2022
This study proposes a two-stage framework consisting of a Residual-Quantized VAE (RQ-VAE) and an RQ-Transformer to effectively generate high-resolution images, and outperforms existing AR models on various benchmarks of unconditional and conditional image generation.
3D Self-Attention for Unsupervised Video Quantization
- Computer Science · SIGIR
- 2020
This paper makes a first attempt to combine quantization with video retrieval in a method called 3D-UVQ, which obtains high retrieval accuracy at low storage cost and significantly outperforms the state of the art.
Iteratively Multiple Projections Optimization for Product Quantization in Nearest Neighbor Search
- Computer Science · 2017 IEEE International Conference on Big Knowledge (ICBK)
- 2017
A novel distortion model is proposed in which the multiple projections, the fine quantizers of each subspace, and the partition of the training set are optimized iteratively; experiments verify the benefit of optimizing the space decomposition and the quantizers jointly.
Polysemous Codes
- Computer Science · ECCV
- 2016
Polysemous codes are introduced, which offer both the distance estimation quality of product quantization and the efficient comparison of binary codes with the Hamming distance; their design is inspired by algorithms introduced in the 1990s to construct channel-optimized vector quantizers.
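The "efficient comparison" mentioned above is the usual XOR-plus-popcount Hamming distance between compact binary codes; a minimal illustration (the function is mine, not from the paper):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary codes stored as ints."""
    return bin(a ^ b).count("1")  # XOR marks differing bits; popcount counts them
```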
Approximate Search with Quantized Sparse Representations
- Computer Science · ECCV
- 2016
This paper proposes to approximate database vectors by constrained sparse coding, where the possible atom weights are restricted to a finite subset, allowing the indexing of a large collection such as the billion-vector BIGANN benchmark.
References
Optimized Product Quantization
- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2014
This paper optimizes PQ by minimizing quantization distortion w.r.t. the space decomposition and the quantization codebooks, and evaluates the optimized product quantizers in three applications: compact encoding for exhaustive ranking, inverted multi-indexing for non-exhaustive search, and compact image representations for image retrieval.
Iterative quantization: A procrustean approach to learning binary codes
- Computer Science · CVPR 2011
- 2011
A simple and efficient alternating minimization scheme is proposed for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube.
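A sketch of that alternating scheme, assuming numpy and illustrative names, where V holds the zero-centered, PCA-reduced data as rows:

```python
import numpy as np

def itq_rotation(V, n_iter=50, seed=0):
    """Alternately solve min ||B - V R||_F^2 over binary B and rotation R."""
    rng = np.random.default_rng(seed)
    c = V.shape[1]
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))  # random orthogonal init
    for _ in range(n_iter):
        B = np.sign(V @ R)                  # R fixed: best codes are the signs
        U, _, Wt = np.linalg.svd(V.T @ B)   # B fixed: orthogonal Procrustes step
        R = U @ Wt
    return R
```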
Additive Quantization for Extreme Vector Compression
- Computer Science · 2014 IEEE Conference on Computer Vision and Pattern Recognition
- 2014
A new compression scheme for high-dimensional vectors is introduced that approximates each vector as a sum of M codewords, one from each of M different codebooks; it leads to lower coding approximation error, higher accuracy of approximate nearest neighbor search on datasets of visual descriptors, and lower image classification error.
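In symbols (the notation is introduced here for illustration, not taken from the paper), the additive model approximates each vector as

```latex
\hat{\mathbf{x}} \;=\; \sum_{m=1}^{M} \mathbf{C}_m \mathbf{b}_m,
\qquad \mathbf{b}_m \in \{0,1\}^{K}, \quad \lVert \mathbf{b}_m \rVert_1 = 1,
```

where C_m is the m-th codebook and b_m selects one of its K codewords; jointly choosing b_1, ..., b_M to minimize the approximation error is the NP-hard encoding problem that the abstract above refers to.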
Improving the Fisher Kernel for Large-Scale Image Classification
- Computer Science · ECCV
- 2010
In an evaluation involving hundreds of thousands of training images, it is shown that classifiers learned on Flickr groups perform surprisingly well and that they can complement classifiers learned on more carefully annotated datasets.
Cartesian K-Means
- Computer Science · 2013 IEEE Conference on Computer Vision and Pattern Recognition
- 2013
New models with a compositional parameterization of cluster centers are developed, so representational capacity increases super-linearly in the number of parameters, allowing one to effectively quantize data using billions or trillions of centers.
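The super-linear capacity is just the combinatorics of composition; a quick back-of-the-envelope check with illustrative parameter values:

```python
M, K = 8, 256        # 8 codebooks with 256 sub-centers each (illustrative)
stored = M * K       # 2,048 sub-center vectors actually stored
effective = K ** M   # 256**8 = 2**64, about 1.8e19 composed centers
print(stored, effective)
```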
Return of the Devil in the Details: Delving Deep into Convolutional Nets
- Computer Science · BMVC
- 2014
It is shown that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods and result in an analogous performance boost; it is also found that the dimensionality of the CNN output layer can be reduced significantly without an adverse effect on performance.
High-dimensional signature compression for large-scale image classification
- Computer Science · CVPR 2011
- 2011
This work reports results on two large databases, ImageNet and a dataset of 1M Flickr images, showing that signature storage can be reduced by a factor of 64 to 128 with little loss in accuracy, and that integrating the decompression into classifier learning yields an efficient and scalable training algorithm.
Aggregating local descriptors into a compact image representation
- Computer Science · 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
- 2010
This work proposes a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation, and shows how to jointly optimize the dimension reduction and the indexing algorithm.
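The aggregation described above, known as VLAD, accumulates per center the residuals of the local descriptors assigned to it; a minimal numpy sketch under that reading (names are illustrative):

```python
import numpy as np

def vlad(local_desc, centers):
    """Aggregate (n, d) local descriptors against (K, d) centers into a K*d vector."""
    K, d = centers.shape
    agg = np.zeros((K, d))
    # assign each descriptor to its nearest center
    assign = np.argmin(((local_desc[:, None, :] - centers[None, :, :]) ** 2).sum(-1),
                       axis=1)
    for i, k in enumerate(assign):
        agg[k] += local_desc[i] - centers[k]   # accumulate the residual
    v = agg.ravel()
    return v / (np.linalg.norm(v) + 1e-12)     # L2-normalize the aggregate
```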
PiCoDes: Learning a Compact Code for Novel-Category Recognition
- Computer Science · NIPS
- 2011
PiCoDes, a very compact image descriptor that nevertheless allows high performance on object category recognition, is presented, along with an alternation scheme and a convex upper bound that demonstrate excellent performance in practice.
Spectral Hashing
- Computer Science · NIPS
- 2008
The problem of finding the best code for a given dataset is closely related to graph partitioning and can be shown to be NP-hard; a spectral relaxation yields a method whose solutions are simply a subset of thresholded eigenvectors of the graph Laplacian.
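On the training set, that spectral solution amounts to thresholding Laplacian eigenvectors. A toy numpy sketch under that reading; note the actual paper also derives analytical eigenfunctions for out-of-sample points, which this omits:

```python
import numpy as np

def spectral_codes(W, n_bits):
    """Toy spectral codes: threshold eigenvectors of the graph Laplacian L = D - W.

    W: (n, n) symmetric affinity matrix over the training points.
    """
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    Y = vecs[:, 1:n_bits + 1]              # skip the trivial constant eigenvector
    return (Y > 0).astype(np.uint8)        # one bit per eigenvector
```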