Corpus ID: 18030110

Higher Order Statistical Decorrelation without Information Loss

@inproceedings{Deco1994HigherOS,
  title={Higher Order Statistical Decorrelation without Information Loss},
  author={G. Deco and W. Brauer},
  booktitle={NIPS},
  year={1994}
}
A neural network learning paradigm based on information theory is proposed as a way to perform, in an unsupervised fashion, redundancy reduction among the elements of the output layer without loss of information from the sensory input. The model performs nonlinear decorrelation up to higher orders of the cumulant tensors and yields probabilistically independent components at the output layer. This means that we do not need to assume a Gaussian distribution either at the input or at…
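To make the idea concrete, here is a minimal sketch rather than the paper's actual architecture or cost function: an exactly invertible, volume-preserving additive coupling layer stands in for the volume-conserving network, and the training signal penalizes second- and third-order cross-statistics of the outputs as a crude surrogate for driving the higher-order cross-cumulants to zero. The layer, the tiny MLP, and the loss are illustrative assumptions; PyTorch is assumed.

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Exactly invertible, volume-preserving layer: y1 = x1, y2 = x2 + m(x1).
    The Jacobian determinant is 1, so no information is lost (a stand-in for
    the paper's volume-conserving network)."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.half = dim // 2
        self.m = nn.Sequential(nn.Linear(self.half, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim - self.half))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        return torch.cat([x1, x2 + self.m(x1)], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        return torch.cat([y1, y2 - self.m(y1)], dim=1)

def cross_stat_loss(y):
    """Penalize off-diagonal second-order (covariance) and third-order
    cross-moments of the zero-mean outputs, an illustrative surrogate for
    driving the cross terms of the cumulant tensors to zero."""
    y = y - y.mean(dim=0, keepdim=True)
    n = y.shape[0]
    cov = y.T @ y / n                 # second-order cross terms
    third = (y ** 2).T @ y / n        # E[y_i^2 y_j], third-order cross terms
    off = lambda m: m - torch.diag(torch.diag(m))
    return (off(cov) ** 2).sum() + (off(third) ** 2).sum()

# Toy usage: nonlinearly dependent 2-D input, one optimization step shown.
x = torch.randn(512, 2)
x[:, 1] += 0.5 * x[:, 0] ** 2
layer = AdditiveCoupling(dim=2)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
loss = cross_stat_loss(layer(x))
loss.backward()
opt.step()
```
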
Nonparametric Data Selection for Improvement of Parametric Neural Learning: A Cumulant-Surrogate Method
A nonparametric, cumulant-based statistical approach is introduced for detecting linear and nonlinear statistical dependences in non-stationary time series and for measuring predictability; it tests the null hypothesis of statistical independence with the surrogate method.
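As a rough illustration of that idea, and not the paper's own estimator, the sketch below compares a third-order cross-cumulant of the data against its distribution under shuffle surrogates; the paper works with time series, where phase-randomized surrogates are more common, so the surrogate construction here is an assumption.

```python
import numpy as np

def cross_cumulant(x, y):
    """Third-order cross-cumulant E[(x-mx)^2 (y-my)] of two mean-adjusted samples."""
    xc, yc = x - x.mean(), y - y.mean()
    return np.mean(xc ** 2 * yc)

def surrogate_independence_test(x, y, n_surrogates=500, seed=0):
    """Test the null hypothesis that y is statistically independent of x by
    comparing the observed cross-cumulant with its distribution under
    shuffle surrogates, which destroy any dependence on x."""
    rng = np.random.default_rng(seed)
    observed = abs(cross_cumulant(x, y))
    null = np.array([abs(cross_cumulant(x, rng.permutation(y)))
                     for _ in range(n_surrogates)])
    p_value = (1 + np.sum(null >= observed)) / (1 + n_surrogates)
    return observed, p_value

# Example: a quadratic (hence nonlinear, zero-correlation) dependence is detected.
x = np.random.randn(2000)
y = x ** 2 + 0.1 * np.random.randn(2000)
print(surrogate_independence_test(x, y))
```
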
Improving Variational Autoencoders with Inverse Autoregressive Flow
In experiments with natural images, it is demonstrated that autoregressive flow leads to significant performance gains and applies well to models with high-dimensional latent spaces, such as convolutional generative models.
Integer Discrete Flows and Lossless Compression
This work introduces Integer Discrete Flow (IDF), a flow-based generative model for ordinal discrete data: a bijective integer map that can learn rich transformations on high-dimensional data, built from a flexible transformation layer called integer discrete coupling.
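A minimal sketch of an additive integer coupling of the kind the summary describes: the second half of an integer vector is shifted by the rounded output of a conditioning function, which makes the map a bijection on integers. The toy conditioning function `t` and the function names are illustrative assumptions, and the straight-through gradient trick used to train through the rounding is omitted.

```python
import numpy as np

def idf_coupling_forward(x, t):
    """Additive integer coupling: keep the first half, shift the second half
    by the rounded output of a conditioning function applied to the first."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    return np.concatenate([x1, x2 + np.round(t(x1)).astype(x.dtype)], axis=-1)

def idf_coupling_inverse(y, t):
    """Exact inverse: subtract the same rounded shift."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    return np.concatenate([y1, y2 - np.round(t(y1)).astype(y.dtype)], axis=-1)

# Round-trip check with a toy stand-in for a learned conditioning network.
t = lambda a: 3.7 * a
x = np.random.randint(0, 256, size=(4, 8)).astype(np.int64)
assert np.array_equal(idf_coupling_inverse(idf_coupling_forward(x, t), t), x)
```
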
The Reversible Residual Network: Backpropagation Without Storing Activations
The Reversible Residual Network (RevNet) is presented, a variant of ResNets in which each layer's activations can be reconstructed exactly from the next layer's, so the activations of most layers need not be stored in memory during backpropagation.
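The memory saving rests on a simple algebraic identity, sketched below with arbitrary residual functions `f` and `g` as illustrative stand-ins for the block's subnetworks: the inputs of a reversible block can be recomputed exactly from its outputs, so intermediate activations can be discarded and rebuilt during the backward pass.

```python
import numpy as np

def rev_block_forward(x1, x2, f, g):
    """Reversible residual block: y1 = x1 + f(x2), y2 = x2 + g(y1)."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def rev_block_inverse(y1, y2, f, g):
    """Reconstruct the inputs from the outputs alone."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

# Round-trip check with arbitrary nonlinear residual functions.
f = lambda a: np.tanh(a)
g = lambda a: a ** 2
x1, x2 = np.random.randn(3, 5), np.random.randn(3, 5)
x1r, x2r = rev_block_inverse(*rev_block_forward(x1, x2, f, g), f, g)
assert np.allclose(x1, x1r) and np.allclose(x2, x2r)
```
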
General Probabilistic Surface Optimization and Log Density Estimation
A novel algorithm family is proposed that generalizes many unsupervised techniques, including unnormalized and energy-based models, and allows different statistical modalities to be inferred from data samples; new PSO-based inference methods are derived as a demonstration of PSO's exceptional usability.
Improved Variational Inference with Inverse Autoregressive Flow
A new type of normalizing flow, inverse autoregressive flow (IAF), is proposed that, in contrast to earlier published flows, scales well to high-dimensional latent spaces and significantly improves upon diagonal Gaussian approximate posteriors.
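A minimal sketch of a single IAF transform, assuming a strictly lower-triangular masked linear layer as the autoregressive conditioner (the paper uses MADE-style networks and a somewhat different gating parameterization): because mu_i and sigma_i depend only on z_{<i}, the Jacobian is triangular and the log-density correction is simply the sum of log sigma.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Linear):
    """Linear layer whose weight is multiplied by a strictly lower-triangular
    mask, so output i depends only on inputs with index < i."""
    def __init__(self, dim):
        super().__init__(dim, dim)
        self.register_buffer("mask", torch.tril(torch.ones(dim, dim), diagonal=-1))

    def forward(self, x):
        return F.linear(x, self.weight * self.mask, self.bias)

class IAFStep(nn.Module):
    """One inverse autoregressive flow transform: z' = sigma * z + mu, with
    (mu, log_sigma) autoregressive in z, so log|det J| = sum(log_sigma)."""
    def __init__(self, dim):
        super().__init__()
        self.mu_net = MaskedLinear(dim)
        self.logsig_net = MaskedLinear(dim)

    def forward(self, z):
        mu, log_sigma = self.mu_net(z), self.logsig_net(z)
        z_new = torch.exp(log_sigma) * z + mu
        log_det = log_sigma.sum(dim=1)   # per-sample log-density correction
        return z_new, log_det

# Usage: one flow step on a batch of latent samples.
flow = IAFStep(dim=4)
z = torch.randn(8, 4)
z_new, log_det = flow(z)
```
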
Compression with Flows via Local Bits-Back Coding
This work introduces local bits-back coding, a new compression technique for flow models, presents efficient algorithms that instantiate the technique for many popular types of flows, and demonstrates that the algorithms closely achieve theoretical codelengths for state-of-the-art flow models on high-dimensional data.
Convex Smoothed Autoencoder-Optimal Transport model
A new generative model, inspired by the recently proposed Autoencoder-Optimal Transport (AE-OT) model, is developed that generates samples resembling the observed data and is free from mode collapse and mode mixture.

References

Supervised Factorial Learning
  • A. Redlich
  • Neural Computation, 1993
This work lends support to Barlow's argument for factorial sensory processing by demonstrating how it can solve actual pattern recognition problems; two techniques for supervised factorial learning are explored, one of which gives a novel distributed solution requiring only positive examples.
Unsupervised Learning
  • A. Papoulis
  • Neural Computation, 1, 295-311, 1989