Corpus ID: 54445464

Compressive Classification (Machine Learning without learning)

@article{Schellekens2018CompressiveC,
  title={Compressive Classification (Machine Learning without learning)},
  author={Vincent Schellekens and Laurent Jacques},
  journal={ArXiv},
  year={2018},
  volume={abs/1812.01410}
}
Compressive learning is a framework in which (so far, unsupervised) learning tasks use not the entire dataset but a compressed summary (sketch) of it. We propose a compressive learning classification method and a novel sketch function for images.
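To make the sketching idea concrete, here is a minimal numpy illustration, assuming the random-Fourier-moment form of sketch used in this line of work; the names (sketch, Omega) are illustrative, not from the paper.

import numpy as np

def sketch(X, Omega):
    # Compress a dataset X (N x d) into one m-dimensional complex vector:
    # the empirical average of the random Fourier moments exp(i * Omega^T x).
    return np.exp(1j * X @ Omega).mean(axis=0)

# Toy usage: 10,000 points in d = 2 are summarized by m = 50 numbers.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 2))
Omega = rng.normal(size=(2, 50))   # one random frequency per sketch entry
z = sketch(X, Omega)               # learning then uses z, never X itself
print(z.shape)                     # (50,)

Everything downstream (classification, in this paper's case) operates on z alone, so the cost of learning no longer grows with the dataset size.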
Citations

Sketching Datasets for Large-Scale Learning (long version)
TLDR
The current state of the art in sketched learning is surveyed, including the main concepts and algorithms, their connections with established signal-processing methods, existing theoretical guarantees on both information preservation and privacy preservation, and important open problems.

References

Showing 1–10 of 20 references
Compressive Statistical Learning with Random Feature Moments
TLDR
A general framework, compressive statistical learning, for resource-efficient large-scale learning: the training collection is compressed in one pass into a low-dimensional sketch that captures the information relevant to the considered learning task.
Quantized Compressive K-Means
TLDR
The present work generalizes the CKM sketching procedure to a large class of periodic nonlinearities, including hardware-friendly implementations that compressively acquire entire datasets.
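As a rough numpy sketch of that generalization, assuming the periodic nonlinearity is a dithered one-bit square wave (the sign of a cosine); this is a toy, not the paper's exact quantizer.

import numpy as np

def quantized_sketch(X, Omega, xi):
    # One-bit variant: replace exp(i * w^T x) by sign(cos(w^T x + dither)),
    # a periodic nonlinearity that a cheap hardware front end can compute.
    return np.sign(np.cos(X @ Omega + xi)).mean(axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(5_000, 2))
Omega = rng.normal(size=(2, 50))
xi = rng.uniform(0.0, 2.0 * np.pi, size=50)  # random dither per frequency
z_q = quantized_sketch(X, Omega, xi)         # each measurement is +/-1 before averaging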
Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?
TLDR
It is formally proved that these networks with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data.
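The distance-preservation phenomenon can be checked numerically for its simplest ingredient, a single random Gaussian linear layer; a minimal demo, not the paper's full construction with nonlinearities:

import numpy as np

rng = np.random.default_rng(2)
d, m = 1_000, 200
x, y = rng.normal(size=d), rng.normal(size=d)
W = rng.normal(size=(m, d)) / np.sqrt(m)  # random Gaussian weights, no training
print(np.linalg.norm(x - y))              # distance before the embedding
print(np.linalg.norm(W @ x - W @ y))      # approximately the same after it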
Compressive K-means
TLDR
This work proposes a compressive version of K-means (CKM) that estimates cluster centers from a sketch, i.e., from a drastically compressed representation of the training dataset, and demonstrates empirically that CKM performs similarly to Lloyd-Max for a sketch size proportional to the number of centroids times the ambient dimension, independent of the size of the original dataset.
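A toy of the CKM principle, assuming a single cluster and brute-force grid search in place of the actual greedy recovery algorithm; only the 64-number sketch is used to recover the center:

import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(loc=4.2, scale=0.1, size=(10_000, 1))  # one tight cluster at 4.2
Omega = rng.normal(size=(1, 64))
z = np.exp(1j * X @ Omega).mean(axis=0)               # the dataset becomes 64 numbers

candidates = np.linspace(-10, 10, 2001)               # grid of possible centers
A = np.exp(1j * candidates[:, None] @ Omega)          # sketch of each candidate Dirac
best = candidates[np.argmin(np.linalg.norm(A - z, axis=1))]
print(best)                                           # close to 4.2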
A Hilbert Space Embedding for Distributions
We describe a technique for comparing distributions without the need for density estimation as an intermediate step. Our approach relies on mapping the distributions into a reproducing kernel Hilbert space.
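In practice this yields the maximum mean discrepancy (MMD), the RKHS distance between the two mean embeddings. A minimal numpy version with a Gaussian kernel (the biased estimator; the kernel choice and gamma are illustrative):

import numpy as np

def mmd2(X, Y, gamma=1.0):
    # Biased squared MMD: k(X,X).mean() + k(Y,Y).mean() - 2 k(X,Y).mean()
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(4)
same = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2(rng.normal(size=(200, 2)), rng.normal(loc=2.0, size=(200, 2)))
print(same, diff)  # near zero vs. clearly positive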
Sketching for large-scale learning of mixture models
TLDR
This work proposes a "compressive learning" framework where first sketch the data by computing random generalized moments of the underlying probability distribution, then estimate mixture model parameters from the sketch using an iterative algorithm analogous to greedy sparse signal recovery. Expand
Kernel Methods for Deep Learning
TLDR
A new family of positive-definite kernel functions that mimic the computation in large, multilayer neural nets is introduced; these kernels can be used in shallow architectures, such as support vector machines (SVMs), or in deep kernel-based architectures that the authors call multilayer kernel machines (MKMs).
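One concrete member of that family is the degree-1 arc-cosine kernel, which corresponds to an infinitely wide layer of ReLU units with Gaussian weights; a minimal implementation (variable names are ours):

import numpy as np

def arccos_kernel_deg1(x, y):
    # k(x, y) = ||x|| ||y|| (sin(theta) + (pi - theta) cos(theta)) / pi,
    # where theta is the angle between x and y.
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    theta = np.arccos(np.clip(x @ y / (nx * ny), -1.0, 1.0))
    return nx * ny * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi

rng = np.random.default_rng(5)
x, y = rng.normal(size=3), rng.normal(size=3)
print(arccos_kernel_deg1(x, y))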
Deep Image Prior
TLDR
It is shown that a randomly initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting.
Pattern classification and scene analysis
  • R. Duda and P. Hart
  • A Wiley-Interscience publication, 1973
TLDR
The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Intriguing Properties of Randomly Weighted Networks: Generalizing While Learning Next to Nothing
TLDR
This paper proposes to fix almost all layers of a deep convolutional neural network, allowing only a small portion of the weights to be learned, and suggests practical ways to harness this to create more robust and compact representations.