Corpus ID: 16202976

Adaptive Training of Random Mapping for Data Quantization

@article{Cheng2016AdaptiveTO,
  title={Adaptive Training of Random Mapping for Data Quantization},
  author={Miao Cheng and Ah Chung Tsoi},
  journal={ArXiv},
  year={2016},
  volume={abs/1606.08808}
}
Data quantization learns encodings of data under certain requirements, and offers a broad perspective on many real-world data-handling applications. Nevertheless, the encoder's results are usually limited to multivariate inputs under the random mapping, and the resulting binary codes can hardly depict the original data patterns as fully as possible. In the literature, cosine-based random quantization has attracted much attention due to its intrinsically bounded results…
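As an illustration of the cosine-based random quantization family the abstract refers to, the following is a minimal sketch of binary encoding by a random Gaussian projection followed by thresholding; this is the standard sign-of-random-projection scheme, not necessarily the paper's exact adaptive method, and all names are illustrative.

```python
import numpy as np

def random_binary_codes(X, n_bits, seed=0):
    """Encode rows of X into n_bits-bit binary codes via a random
    Gaussian projection followed by the sign function, a common
    cosine-similarity-preserving quantization scheme (illustrative
    sketch, not the paper's exact method)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_bits))  # the random mapping
    return (X @ W > 0).astype(np.uint8)            # 0/1 codes

X = np.random.default_rng(1).standard_normal((5, 16))
codes = random_binary_codes(X, n_bits=8)
print(codes.shape)  # (5, 8)
```

The Hamming distance between two such codes approximates the angle between the original vectors, which is why this family of encoders is called cosine-based.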


References

Showing 1-10 of 21 references

Learning Multi-View Neighborhood Preserving Projections

We address the problem of metric learning for multi-view data, namely the construction of embedding projections from data in different representations into a shared feature space, such that the…

Iterative quantization: A procrustean approach to learning binary codes

A simple and efficient alternating minimization scheme for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube is proposed.
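The alternating scheme described in this reference (Iterative Quantization, ITQ) can be sketched as follows, assuming the data has already been zero-centered and reduced (e.g. by PCA) to the code length; the function and parameter names are illustrative.

```python
import numpy as np

def itq_rotation(V, n_iter=50, seed=0):
    """ITQ sketch: alternately fix the binary codes B = sign(V R)
    and update the rotation R by an orthogonal Procrustes solve
    (SVD of B^T V). V is assumed zero-centered, with one column
    per code bit."""
    rng = np.random.default_rng(seed)
    # random orthogonal initialization of the rotation
    R, _ = np.linalg.qr(rng.standard_normal((V.shape[1], V.shape[1])))
    for _ in range(n_iter):
        B = np.sign(V @ R)                 # fix R, update the codes
        U, _, Vt = np.linalg.svd(B.T @ V)  # fix B, Procrustes update of R
        R = (U @ Vt).T
    return R, np.sign(V @ R)
```

Each Procrustes step minimizes the quantization error ||B - VR||_F over orthogonal rotations R, so the objective is non-increasing across iterations.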

Nonnegative class-specific entropy component analysis with adaptive step search criterion

A novel nonnegative learning method, termed nonnegative class-specific entropy component analysis, is developed in this work to exploit the informative components hidden in nonnegative patterns, and it achieves better performance than other methods.

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

Locality-sensitive binary codes from shift-invariant kernels

This paper introduces a simple distribution-free encoding scheme based on random projections, such that the expected Hamming distance between the binary codes of two vectors is related to the value of a shift-invariant kernel between the vectors.
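In the spirit of this encoding scheme (random projections for a shift-invariant kernel, as in Raginsky and Lazebnik), a hedged sketch is given below: each bit thresholds a randomly shifted cosine of a random projection, here instantiated for the Gaussian kernel; the bandwidth parameter `gamma` and all names are assumptions for illustration.

```python
import numpy as np

def rff_binary_codes(X, n_bits, gamma=1.0, seed=0):
    """Binary codes from random Fourier features of the Gaussian
    (shift-invariant) kernel: each bit applies a random threshold
    to cos(w.x + b) with Gaussian w and uniform phase b
    (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_bits))
    b = rng.uniform(0, 2 * np.pi, n_bits)   # random phase
    t = rng.uniform(-1, 1, n_bits)          # random threshold
    return (np.cos(X @ W + b) + t > 0).astype(np.uint8)
```

Because the features are distribution-free random draws, no training data is needed to build the encoder, which is the property the snippet above highlights.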

Nonnegative matrix factorization with constrained second-order optimization

Product Quantization for Nearest Neighbor Search

This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to…
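The encoding step of product quantization can be sketched as follows, assuming the per-subspace codebooks have already been trained (normally by k-means on a training set); the array shapes and names are illustrative.

```python
import numpy as np

def pq_encode(X, codebooks):
    """Product quantization sketch: split each vector into M
    subvectors and encode each subvector by the index of its
    nearest centroid in the corresponding codebook.
    `codebooks` has shape (M, K, d/M)."""
    M, K, ds = codebooks.shape
    codes = np.empty((X.shape[0], M), dtype=np.uint8)
    for m in range(M):
        sub = X[:, m * ds:(m + 1) * ds]                       # (N, ds)
        d2 = ((sub[:, None, :] - codebooks[m]) ** 2).sum(-1)  # (N, K)
        codes[:, m] = d2.argmin(1)                            # nearest centroid
    return codes
```

With M subspaces of K centroids each, a d-dimensional vector is compressed to M log2(K) bits, while distances can still be approximated from per-subspace lookup tables.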

Locality-sensitive hashing scheme based on p-stable distributions

A novel Locality-Sensitive Hashing scheme for the Approximate Nearest Neighbor Problem under the lp norm, based on p-stable distributions, that improves the running time of the earlier algorithm and yields the first known provably efficient approximate NN algorithm for the case p < 1.
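The hash family this reference describes can be sketched for p = 2, where the 2-stable distribution is Gaussian: each hash projects onto a random direction, shifts by a random offset, and buckets by a fixed width. The bucket width `w` and all names below are illustrative assumptions.

```python
import numpy as np

def pstable_hash(X, n_hashes=4, w=4.0, seed=0):
    """p-stable LSH sketch for the l2 norm (p = 2):
    h(x) = floor((a.x + b) / w), with Gaussian a and
    b uniform in [0, w). Nearby points are likely to
    fall in the same bucket."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[1], n_hashes))  # 2-stable projections
    b = rng.uniform(0, w, n_hashes)                  # random offsets
    return np.floor((X @ A + b) / w).astype(int)
```

For other p, the Gaussian draws would be replaced by samples from the corresponding p-stable distribution (e.g. Cauchy for p = 1).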

Randomized Nonlinear Component Analysis

This paper leverages randomness to design scalable new variants of nonlinear PCA and CCA and extends to key multivariate analysis tools such as spectral clustering or LDA.

Gradient-based learning applied to document recognition

This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task; convolutional neural networks are shown to outperform all other techniques.