Corpus ID: 15610215

Compressed Learning: Universal Sparse Dimensionality Reduction and Learning in the Measurement Domain

@inproceedings{Calderbank2009CompressedL,
  title={Compressed Learning: Universal Sparse Dimensionality Reduction and Learning in the Measurement Domain},
  author={Robert Calderbank},
  year={2009}
}
In this paper, we provide theoretical results to show that compressed learning, learning directly in the compressed domain, is possible. In particular, we provide tight bounds demonstrating that the linear kernel SVM classifier in the measurement domain, with high probability, has true accuracy close to the accuracy of the best linear threshold classifier in the data domain. We show that this is beneficial both from the compressed sensing and the machine learning points of view. Furthermore…
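
As a concrete illustration of this claim, here is a minimal sketch (not the paper's code; the synthetic data, sparsity level, and matrix scaling are illustrative assumptions) that trains a linear SVM directly on random Gaussian measurements of sparse data and compares it with a linear SVM trained in the data domain:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, d, m, k = 2000, 512, 64, 10   # samples, ambient dim, measurements, sparsity (illustrative)

# k-sparse data points with a linear ground-truth labeling
X = np.zeros((n, d))
for i in range(n):
    X[i, rng.choice(d, size=k, replace=False)] = rng.normal(size=k)
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 1e-12)

A = rng.normal(size=(m, d)) / np.sqrt(m)   # Gaussian measurement matrix

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svm_data = LinearSVC().fit(X_tr, y_tr)         # classifier in the data domain
svm_meas = LinearSVC().fit(X_tr @ A.T, y_tr)   # classifier learned on measurements A x

print("data-domain accuracy:       ", svm_data.score(X_te, y_te))
print("measurement-domain accuracy:", svm_meas.score(X_te @ A.T, y_te))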

Citations

Generalization Error Analysis of Quantized Compressive Learning

This paper considers the learning problem in which the projected data is further compressed by scalar quantization, called quantized compressive learning, and shows that the inner-product estimators have a deep connection with nearest-neighbor (NN) and linear classification through the variance of their debiased counterparts.
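
The estimator in question can be sketched as follows (my construction for illustration, not the paper's code; the unit-norm inputs and the quantizer step are assumptions): project two vectors with a Gaussian matrix, pass the measurements through a uniform scalar quantizer, and estimate their inner product from the quantized measurements. The paper's debiased estimators correct the extra distortion the quantizer introduces.

import numpy as np

rng = np.random.default_rng(1)
d, m, delta = 1000, 4000, 0.5   # dimension, measurements, quantizer step (illustrative)

x = rng.normal(size=d); x /= np.linalg.norm(x)
y = rng.normal(size=d); y /= np.linalg.norm(y)

A = rng.normal(size=(m, d))   # unit-variance measurements for unit-norm inputs
quantize = lambda v: delta * (np.floor(v / delta) + 0.5)   # uniform (mid-rise) scalar quantizer

est_full = (A @ x) @ (A @ y) / m                         # unquantized compressive estimate
est_quant = quantize(A @ x) @ quantize(A @ y) / m        # estimate from quantized measurements

print("true inner product:  ", x @ y)
print("compressive estimate:", est_full)
print("quantized estimate:  ", est_quant)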

2D compressed learning: support matrix machine with bilinear random projections

A 2D compressed learning paradigm that learns the support matrix machine (SMM) classifier in a compressed data domain is considered, and it is shown that the Kronecker product measurement matrices used by KCS satisfy the restricted isometry property (RIP), a property that ensures the learnability of the compressed data.
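
The bilinear random projection at the heart of this paradigm is equivalent to applying a Kronecker-structured measurement matrix to the vectorized data, which is why RIP results for Kronecker products transfer. A minimal numerical check of that identity (dimensions are illustrative):

import numpy as np

rng = np.random.default_rng(2)
p, q, m1, m2 = 16, 16, 6, 6   # data size p x q, compressed size m1 x m2

X = rng.normal(size=(p, q))
A = rng.normal(size=(m1, p)) / np.sqrt(m1)
B = rng.normal(size=(m2, q)) / np.sqrt(m2)

Y_bilinear = A @ X @ B.T   # 2D (bilinear) random projection
# Same map via the Kronecker product acting on vec(X) (column-major):
Y_kron = (np.kron(B, A) @ X.flatten(order="F")).reshape((m1, m2), order="F")

print(np.allclose(Y_bilinear, Y_kron))   # True: the two maps agree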

From Affine Rank Minimization Solution to Sparse Modeling

This work tackles the problem of feature representation in the context of sparsity and affine rank minimization, leveraging compressed sensing from the learning perspective to answer questions about sparse representation in signal processing and machine learning.

There and Back Again: A General Approach to Learning Sparse Models

If the original factors are sparse, then their projections are the sparsest solutions to the projected NMF problem; this explains why the method works for NMF and reveals an interesting new property of random projections: they can preserve the solutions of non-convex optimization problems such as NMF.

Finding needles in compressed haystacks

  • A. Calderbank, Sina Jafarpour
  • Computer Science
    2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2012
Tight bounds are provided demonstrating that the linear kernel SVM classifier in the measurement domain, with high probability, has true accuracy close to the accuracy of the best linear threshold classifier in the data domain.

Compressed classification learning with Markov chain samples

ECE 543 Project Report: Learning in Compressed Spaces

This work investigated the learning task in the compressed domain induced by a random projection, relying on properties of the measurement matrix such as preservation of relative geometry.
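
A minimal numerical check of that geometry-preservation property (dimensions are illustrative): a Gaussian random projection approximately preserves all pairwise distances among a point set, in the spirit of the Johnson-Lindenstrauss lemma.

import numpy as np

rng = np.random.default_rng(3)
n, d, m = 50, 1000, 200   # points, ambient dimension, projected dimension

X = rng.normal(size=(n, d))
A = rng.normal(size=(m, d)) / np.sqrt(m)
Z = X @ A.T   # projected point set

# All pairwise distances before and after projection
orig = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
proj = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)

mask = orig > 0
print("max relative distortion:", np.max(np.abs(proj[mask] - orig[mask]) / orig[mask]))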

Compressed Factorization: Fast and Accurate Low-Rank Factorization of Compressively-Sensed Data

This work examines the approach of first performing factorization in the compressed domain and then reconstructing the original high-dimensional factors from the recovered (compressed) factors, and establishes conditions under which this natural approach provably recovers the original factors.
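
The reconstruction step can be sketched as follows (my construction, not the paper's code; the single factor, sparsity level, and orthogonal matching pursuit as the recovery routine are assumptions): given a compressed factor w_c = Phi @ w produced by factorizing the compressed data, the original sparse factor is recovered by sparse recovery.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
d, m, k = 500, 100, 8   # ambient dim, measurements, factor sparsity (illustrative)

w = np.zeros(d)   # a k-sparse "factor" of the original data
w[rng.choice(d, size=k, replace=False)] = rng.normal(size=k)

Phi = rng.normal(size=(m, d)) / np.sqrt(m)
w_c = Phi @ w   # what a compressed-domain factorization would produce

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, w_c)
print("recovery error:", np.linalg.norm(omp.coef_ - w))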

Invertible Nonlinear Dimensionality Reduction via Joint Dictionary Learning

This paper proposes an invertible nonlinear dimensionality reduction method that jointly learns dictionaries in both the original high-dimensional data space and its low-dimensional representation space, and shows that it can outperform compressed sensing in task-driven learning problems such as data visualization.

Compressed sensing and dimensionality reduction for unsupervised learning

This thesis proposes a framework for estimating the parameters of probability density mixtures in which the training data is compressed into a fixed-size representation, and suggests the existence of theoretical guarantees for reconstructing signals belonging to models beyond the usual sparse models.
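
The fixed-size representation in this line of work is a "sketch" of the dataset built from averaged random Fourier features; a minimal sketch of its construction (sizes and the two-component mixture are illustrative assumptions), after which all parameter estimation touches only z, never the raw data:

import numpy as np

rng = np.random.default_rng(5)
n, d, m = 10000, 2, 64   # points, dimension, sketch size

# A simple two-component Gaussian mixture as training data
X = np.vstack([rng.normal(loc=-2.0, size=(n // 2, d)),
               rng.normal(loc=+2.0, size=(n // 2, d))])

W = rng.normal(size=(m, d))             # random frequencies
z = np.exp(1j * X @ W.T).mean(axis=0)   # fixed-size sketch, independent of n

print(z.shape)   # (64,): mixture parameters are fit to z instead of to X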
...

References

SHOWING 1-10 OF 36 REFERENCES

Random Projections for Manifold Learning

This work rigorously proves that with a small number M of random projections of sample points in ℝ^N belonging to an unknown K-dimensional Euclidean manifold, the intrinsic dimension (ID) of the sample set can be estimated to high accuracy.

Random Projections of Smooth Manifolds

We propose a new approach for nonadaptive dimensionality reduction of manifold-modeled data, demonstrating that a small number of random linear projections can preserve key information about…

Combinatorial Algorithms for Compressed Sensing

The results prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst-case error for the class of such signals.

Construction of a Large Class of Deterministic Sensing Matrices That Satisfy a Statistical Isometry Property

Simple criteria are provided which guarantee that a deterministic sensing matrix satisfying them acts as a near isometry on an overwhelming majority of k-sparse signals; in particular, most such signals have a unique representation in the measurement domain.

Compressed Video Sensing

This work considers video streams, reconstructing volumes from a subset of the time series; it shows that video volumes are highly compressible in a wavelet basis and concludes that compressed sensing is well suited to representing video streams.

Sparse representations for image classification: learning discriminative and reconstructive non-parametric dictionaries

A framework for learning optimal dictionaries for simultaneous sparse signal representation and robust classification is introduced, addressing for the first time the explicit incorporation of both reconstruction and discrimination terms in the non-parametric dictionary learning and sparse coding energy.

Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct the signal f to within very high accuracy from a small number of random measurements by solving a simple linear program.
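
The "simple linear program" is basis pursuit, min ||x||_1 subject to Ax = y; a minimal sketch (dimensions illustrative) that rewrites it as an LP by splitting x = u - v with u, v ≥ 0 and solves it with SciPy:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
d, m, k = 200, 60, 5   # ambient dim, measurements, sparsity (illustrative)

x_true = np.zeros(d)
x_true[rng.choice(d, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, d)) / np.sqrt(m)
y = A @ x_true

# minimize sum(u) + sum(v)  subject to  A(u - v) = y,  u, v >= 0
c = np.ones(2 * d)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:d] - res.x[d:]

print("recovery error:", np.linalg.norm(x_hat - x_true))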

A Generalized Restricted Isometry Property

A natural consequence of the existence of special measurement matrices which satisfy the so-called Restricted Isometry Property is described: if a matrix satisfies the RIP, then acute angles between sparse vectors are also approximately preserved.
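
A minimal Monte Carlo check of this consequence (my construction; dimensions illustrative): draw pairs of unit-norm sparse vectors, project them with a Gaussian matrix of a size for which RIP holds with high probability, and compare angles before and after.

import numpy as np

rng = np.random.default_rng(7)
d, m, k = 1000, 200, 10   # ambient dim, measurements, sparsity (illustrative)

def sparse_vec():
    v = np.zeros(d)
    v[rng.choice(d, size=k, replace=False)] = rng.normal(size=k)
    return v / np.linalg.norm(v)

A = rng.normal(size=(m, d)) / np.sqrt(m)
for _ in range(3):
    u, v = sparse_vec(), sparse_vec()
    angle = np.arccos(np.clip(u @ v, -1, 1))
    Au, Av = A @ u, A @ v
    angle_proj = np.arccos(np.clip(Au @ Av / (np.linalg.norm(Au) * np.linalg.norm(Av)), -1, 1))
    print(f"angle before: {angle:.3f}  after: {angle_proj:.3f}")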

A Simple Proof of the Restricted Isometry Property for Random Matrices

We give a simple technique for verifying the Restricted Isometry Property (as introduced by Candès and Tao) for the random matrices that underlie compressed sensing. Our approach has two main…
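
The concentration phenomenon at the core of such proofs is easy to observe numerically; a minimal Monte Carlo sketch (my construction; dimensions illustrative) of how tightly ||Ax||^2 concentrates around ||x||^2 over random unit-norm sparse vectors, which combined with a union bound over sparse supports yields RIP:

import numpy as np

rng = np.random.default_rng(8)
d, m, k, trials = 1000, 200, 10, 2000   # ambient dim, measurements, sparsity, samples

A = rng.normal(size=(m, d)) / np.sqrt(m)
worst = 0.0
for _ in range(trials):
    x = np.zeros(d)
    x[rng.choice(d, size=k, replace=False)] = rng.normal(size=k)
    x /= np.linalg.norm(x)
    worst = max(worst, abs(np.linalg.norm(A @ x) ** 2 - 1.0))

print("largest empirical deviation of ||Ax||^2 from 1:", worst)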