# Compressive Classification (Machine Learning without learning)

```bibtex
@article{Schellekens2018CompressiveC,
  title   = {Compressive Classification (Machine Learning without learning)},
  author  = {Vincent Schellekens and Laurent Jacques},
  journal = {ArXiv},
  year    = {2018},
  volume  = {abs/1812.01410}
}
```

Compressive learning is a framework in which (so far unsupervised) learning tasks use not the entire dataset but a compressed summary (sketch) of it. We propose a compressive learning classification method and a novel sketch function for images.
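To make the abstract's notion of a "sketch" concrete, here is a minimal, hypothetical illustration in the spirit of the compressive-learning papers referenced below: the whole dataset is collapsed, in one pass, into a fixed-size vector of averaged random Fourier moments. All names, sizes, and the choice of Gaussian frequencies are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch(X, Omega):
    """Sketch an (n, d) dataset X with random frequencies Omega of shape (d, m).

    z_k = (1/n) * sum_j exp(i <omega_k, x_j>): one pass over the data,
    O(m) memory, independent of the number of samples n.
    """
    return np.exp(1j * X @ Omega).mean(axis=0)

n, d, m = 1000, 2, 20
X = rng.normal(size=(n, d))        # toy dataset
Omega = rng.normal(size=(d, m))    # random frequencies (an assumption)
z = sketch(X, Omega)
print(z.shape)  # (20,) -- the sketch size does not depend on n
```

The point of the design is the last line: learning then operates on `z` alone, so the dataset itself can be discarded or streamed.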

#### One Citation

Sketching Datasets for Large-Scale Learning (long version)

- Mathematics, Computer Science
- ArXiv
- 2020

The current state of the art in sketched learning is surveyed, including the main concepts and algorithms, their connections with established signal-processing methods, existing theoretical guarantees on both information preservation and privacy preservation, and important open problems.

#### References

Showing 1–10 of 20 references

Compressive Statistical Learning with Random Feature Moments

- Computer Science, Mathematics
- Mathematical Statistics and Learning
- 2021

A general framework, compressive statistical learning, for resource-efficient large-scale learning: the training collection is compressed in one pass into a low-dimensional sketch that captures the information relevant to the considered learning task.

Quantized Compressive K-Means

- Computer Science, Mathematics
- IEEE Signal Processing Letters
- 2018

The present work generalizes the CKM sketching procedure to a large class of periodic nonlinearities, including hardware-friendly implementations that compressively acquire entire datasets.
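As a hedged illustration of a "periodic nonlinearity" in this quantized setting, one can replace the complex exponential of the standard random-moment sketch with a dithered square wave (the sign of a cosine), so each sample contributes only one bit per frequency. The helper name and the simple uniform dither below are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantized_sketch(X, Omega, xi):
    """One-bit periodic sketch: sign of a dithered cosine, averaged over X.

    sign(cos(.)) is a square wave with the same period as cos(.), so it is
    a (hardware-friendly) periodic nonlinearity in the sense of the paper.
    """
    return np.sign(np.cos(X @ Omega + xi)).mean(axis=0)

n, d, m = 500, 3, 16
X = rng.normal(size=(n, d))
Omega = rng.normal(size=(d, m))
xi = rng.uniform(0.0, 2.0 * np.pi, size=m)  # random dither per frequency
z = quantized_sketch(X, Omega, xi)
print(z.shape)  # (16,)
```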

Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?

- Computer Science, Mathematics
- IEEE Transactions on Signal Processing
- 2016

It is formally proved that these networks with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data.
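A quick, hedged numerical check of the distance-preservation claim: one untrained ReLU layer with Gaussian weights never expands distances (ReLU is 1-Lipschitz elementwise), and, empirically, embedded pairwise distances track input distances closely. The sizes and data below are arbitrary illustrations, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, m = 50, 10, 4000

X = rng.normal(size=(n, d))
W = rng.normal(size=(d, m)) / np.sqrt(m)  # random Gaussian layer, no training
F = np.maximum(X @ W, 0.0)                # one ReLU layer

def pdist(A):
    # condensed vector of pairwise Euclidean distances
    i, j = np.triu_indices(len(A), k=1)
    return np.linalg.norm(A[i] - A[j], axis=1)

d_in, d_out = pdist(X), pdist(F)
# ReLU is 1-Lipschitz, so the layer never expands the projected distances.
assert np.all(d_out <= pdist(X @ W) + 1e-9)
# Embedded distances correlate strongly with input distances.
r = np.corrcoef(d_in, d_out)[0, 1]
print(r > 0.8)
```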

Compressive K-means

- Computer Science, Mathematics
- 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2017

This work proposes a compressive version of K-means that estimates cluster centers from a sketch, i.e., a drastically compressed representation of the training dataset. It demonstrates empirically that CKM performs similarly to Lloyd-Max for a sketch size proportional to the number of centroids times the ambient dimension, and independent of the size of the original dataset.

A Hilbert Space Embedding for Distributions

- Mathematics, Computer Science
- ALT
- 2007

We describe a technique for comparing distributions without the need for density estimation as an intermediate step. Our approach relies on mapping the distributions into a reproducing kernel Hilbert…
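The idea in this snippet, comparing distributions via their kernel mean embeddings, is commonly instantiated as the maximum mean discrepancy (MMD). Below is a small, hedged sketch of the (biased) empirical squared MMD with a Gaussian RBF kernel; the function name, bandwidth, and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def mmd2(X, Y, gamma=0.5):
    """Biased squared MMD with RBF kernel k(a, b) = exp(-gamma * ||a - b||^2).

    Equals the squared RKHS distance between the empirical mean embeddings
    of X and Y, so it is nonnegative and needs no density estimation.
    """
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

same = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2(rng.normal(size=(200, 2)), rng.normal(loc=2.0, size=(200, 2)))
print(same < diff)  # shifted distributions give a larger discrepancy
```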

Sketching for large-scale learning of mixture models

- Computer Science, Mathematics
- 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2016

This work proposes a "compressive learning" framework in which one first sketches the data by computing random generalized moments of the underlying probability distribution, then estimates mixture model parameters from the sketch using an iterative algorithm analogous to greedy sparse signal recovery.

Kernel Methods for Deep Learning

- Computer Science
- NIPS
- 2009

A new family of positive-definite kernel functions that mimic the computation in large, multilayer neural nets is introduced; these kernels can be used in shallow architectures, such as support vector machines (SVMs), or in deep kernel-based architectures that the authors call multilayer kernel machines (MKMs).

Deep Image Prior

- Computer Science, Mathematics
- 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018

It is shown that a randomly initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting.

Pattern classification and scene analysis

- Computer Science, Mathematics
- A Wiley-Interscience publication
- 1973

The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

Intriguing Properties of Randomly Weighted Networks: Generalizing While Learning Next to Nothing

- Computer Science
- 2019 16th Conference on Computer and Robot Vision (CRV)
- 2019

This paper proposes to fix almost all layers of a deep convolutional neural network, allowing only a small portion of the weights to be learned, and suggests practical ways to harness this to create more robust and compact representations.