Corpus ID: 211082818

Compressive Learning of Generative Networks

Vincent Schellekens and Laurent Jacques
Generative networks implicitly approximate complex densities from their sampling with impressive accuracy. However, because of the enormous scale of modern datasets, this training process is often computationally expensive. We cast generative network training into the recent framework of compressive learning: we reduce the computational burden of large-scale datasets by first harshly compressing them in a single pass as a single sketch vector. We then propose a cost function, which approximates…
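The sketch described above is a single vector of empirical random generalized moments, computed in one pass over the data. A minimal NumPy sketch of the idea, with hypothetical sizes `n`, `d`, `m` and a Gaussian choice of random frequencies (the paper's actual frequency distribution and sketch size are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset: n points in d dimensions; m is the sketch size.
n, d, m = 10_000, 2, 64
X = rng.normal(size=(n, d))

# Random frequency vectors defining the generalized moments (assumed Gaussian).
Omega = rng.normal(size=(d, m))

# Single-pass sketch: empirical average of complex exponentials,
# i.e. random Fourier moments of the empirical distribution.
sketch = np.exp(1j * X @ Omega).mean(axis=0)   # shape (m,)
```

The whole dataset of `n * d` numbers is thus summarized by `m` complex values; training then works against `sketch` instead of the raw data.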


Generative Moment Matching Networks
This work formulates a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks, and uses MMD to learn to generate codes that can then be decoded to produce samples.
Sketching for large-scale learning of mixture models
This work proposes a "compressive learning" framework in which the data are first sketched by computing random generalized moments of the underlying probability distribution, and mixture-model parameters are then estimated from the sketch using an iterative algorithm analogous to greedy sparse signal recovery.
Compressed Sensing using Generative Models
This work shows how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all, and proves that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an l2/l2 recovery guarantee.
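The recovery principle in that work is to search for a latent code whose generator output matches the Gaussian measurements. A toy NumPy illustration, using a hypothetical *linear* "generator" G(z) = Wz so that the search reduces to least squares (a deep G would require gradient descent over z instead):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "generator" G(z) = W z mapping k latent dims to ambient dim n_dim.
k, n_dim, m = 5, 100, 25          # m measurements, on the order of k
W = rng.normal(size=(n_dim, k))

# Ground-truth signal lying on the generator's range.
z_true = rng.normal(size=k)
x_true = W @ z_true

# Random Gaussian measurement matrix, as in the guarantee above.
A = rng.normal(size=(m, n_dim)) / np.sqrt(m)
y = A @ x_true

# Recover by minimizing ||A G(z) - y||_2 over z.
z_hat, *_ = np.linalg.lstsq(A @ W, y, rcond=None)
x_hat = W @ z_hat
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With m = 25 measurements of a 100-dimensional signal, recovery succeeds because the signal lives on the generator's 5-dimensional range.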
Compressive Statistical Learning with Random Feature Moments
A general framework, compressive statistical learning, for resource-efficient large-scale learning: the training collection is compressed in one pass into a low-dimensional sketch that captures the information relevant to the considered learning task.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
One Network to Solve Them All — Solving Linear Inverse Problems Using Deep Projection Models
This work proposes a general framework to train a single deep neural network that solves arbitrary linear inverse problems and demonstrates superior performance over traditional methods using a wavelet sparsity prior, while achieving performance comparable to specially-trained networks on tasks including compressive sensing and pixel-wise inpainting.
Training generative neural networks via Maximum Mean Discrepancy optimization
This work considers training a deep neural network to generate samples from an unknown distribution given i.i.d. data, framing learning as an optimization problem that minimizes a two-sample test statistic, and proves bounds on the generalization error incurred by optimizing the empirical MMD.
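The two-sample statistic in question is the maximum mean discrepancy. A minimal NumPy sketch of the (biased) squared-MMD estimator with a Gaussian RBF kernel, using hypothetical sample sizes and bandwidth:

```python
import numpy as np

def mmd2_biased(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X and Y,
    using the Gaussian RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(2)
# Same distribution: statistic near zero. Shifted distribution: clearly positive.
same = mmd2_biased(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2_biased(rng.normal(size=(200, 2)), rng.normal(3.0, 1.0, size=(200, 2)))
```

Training a generator then amounts to minimizing this statistic between generated and real samples by gradient descent on the generator's parameters.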
Deep Generative Adversarial Networks for Compressed Sensing Automates MRI
A novel CS framework that leverages generative adversarial networks (GANs) to learn a (low-dimensional) manifold of diagnostic-quality MR images from historical patients, and offers reconstruction in a few milliseconds, two orders of magnitude faster than state-of-the-art CS-MRI schemes.
Wasserstein GAN
This paper is concerned with unsupervised learning: what it means to learn a probability distribution, and how to define a parametric family of densities.
Random Features for Large-Scale Kernel Machines
Two sets of random features are explored, convergence bounds on their ability to approximate various radial basis kernels are provided, and it is shown that, in large-scale classification and regression tasks, linear machine learning algorithms applied to these features outperform state-of-the-art large-scale kernel machines.
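One of the two feature sets is random Fourier features: an inner product of cosine features approximates the Gaussian kernel in expectation. A minimal NumPy illustration, with hypothetical dimensions `d` and feature count `D`:

```python
import numpy as np

rng = np.random.default_rng(3)
d, D = 3, 2000                    # input dimension, number of random features

# Random Fourier features for the Gaussian kernel k(x, y) = exp(-||x - y||^2 / 2):
# z(x) = sqrt(2/D) * cos(W x + b), with rows of W ~ N(0, I) and b ~ U[0, 2*pi].
W = rng.normal(size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)
phi = lambda x: np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact  = np.exp(-np.sum((x - y) ** 2) / 2.0)   # true Gaussian kernel value
approx = phi(x) @ phi(y)                       # random-feature estimate
```

Because `phi(x) @ phi(y)` is an unbiased estimate of the kernel, a linear model trained on the `D`-dimensional features behaves like a kernel machine at a fraction of the cost.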