Higher Order Statistical Decorrelation without Information Loss
@inproceedings{Deco1994HigherOS,
  title     = {Higher Order Statistical Decorrelation without Information Loss},
  author    = {G. Deco and W. Brauer},
  booktitle = {NIPS},
  year      = {1994}
}
A neural network learning paradigm based on information theory is proposed as a way to perform, in an unsupervised fashion, redundancy reduction among the elements of the output layer without loss of information from the sensory input. The model performs nonlinear decorrelation up to higher orders of the cumulant tensors and yields probabilistically independent components at the output layer. This means that no Gaussian distribution need be assumed either at the input or at …
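The scheme the abstract describes, an invertible, volume-preserving network whose outputs are pushed toward independence by driving higher-order cross-cumulants to zero, can be sketched in a few lines. The sketch below is an illustration under stated assumptions, not the authors' implementation: it uses NumPy, substitutes a single additive coupling layer (in the spirit of the later NICE-style flows that cite this paper) for the paper's symmetric volume-conserving architecture, trains by finite differences rather than the paper's analytic cumulant-expansion gradients, and all data, layer shapes, and learning-rate choices are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W, b):
    # Additive coupling: y1 = x1, y2 = x2 + tanh(x1 W + b).
    # The Jacobian is triangular with unit diagonal, so the map is
    # volume-preserving and exactly invertible (no information loss).
    d = x.shape[1] // 2
    x1, x2 = x[:, :d], x[:, d:]
    return np.concatenate([x1, x2 + np.tanh(x1 @ W + b)], axis=1)

def inverse(y, W, b):
    d = y.shape[1] // 2
    y1, y2 = y[:, :d], y[:, d:]
    return np.concatenate([y1, y2 - np.tanh(y1 @ W + b)], axis=1)

def cross_cumulant_loss(y):
    # Squared off-diagonal cumulants up to order 4 (zero-mean forms):
    #   cum(y_i, y_j)           = E[y_i y_j]
    #   cum(y_i, y_i, y_j)      = E[y_i^2 y_j]
    #   cum(y_i, y_i, y_j, y_j) = E[y_i^2 y_j^2] - E[y_i^2] E[y_j^2]
    #                             - 2 E[y_i y_j]^2
    # All of them vanish when the output components are independent,
    # with no Gaussianity assumption anywhere.
    yc = y - y.mean(axis=0)
    n = len(yc)
    c2 = yc.T @ yc / n
    c3 = (yc**2).T @ yc / n
    v = np.diag(c2)
    c4 = (yc**2).T @ (yc**2) / n - np.outer(v, v) - 2 * c2**2
    off = ~np.eye(y.shape[1], dtype=bool)
    return (c2[off]**2).sum() + (c3[off]**2).sum() + (c4[off]**2).sum()

# Toy data: independent non-Gaussian (uniform) sources mixed nonlinearly,
# so the observed components carry higher-order dependencies.
s = rng.uniform(-1.0, 1.0, size=(4096, 4))
x = s.copy()
x[:, 2:] += 0.8 * np.tanh(s[:, :2])

W = 0.01 * rng.standard_normal((2, 2))
b = np.zeros(2)

def loss(W, b):
    return cross_cumulant_loss(forward(x, W, b))

# Crude finite-difference gradient descent; the paper instead derives the
# gradients of the cumulant expansion analytically and backpropagates them.
lr, eps = 0.2, 1e-5
for _ in range(150):
    base = loss(W, b)
    gW = np.zeros_like(W)
    for i in range(2):
        for j in range(2):
            Wp = W.copy(); Wp[i, j] += eps
            gW[i, j] = (loss(Wp, b) - base) / eps
    gb = np.zeros_like(b)
    for i in range(2):
        bp = b.copy(); bp[i] += eps
        gb[i] = (loss(W, bp) - base) / eps
    W -= lr * gW
    b -= lr * gb

y = forward(x, W, b)
print("cross-cumulant loss, input :", cross_cumulant_loss(x))
print("cross-cumulant loss, output:", cross_cumulant_loss(y))
print("max reconstruction error   :", np.abs(inverse(y, W, b) - x).max())
```

Because the coupling map has unit Jacobian determinant, the output retains all input information by construction; the learning signal comes entirely from the cross-cumulant penalty, which vanishes when the output components are pairwise independent up to the fourth order.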
32 Citations
- Nonparametric Data Selection for Improvement of Parametric Neural Learning: A Cumulant-Surrogate Method (ICANN, 1996)
- Improving Variational Autoencoders with Inverse Autoregressive Flow (NIPS, 2016)
- The Reversible Residual Network: Backpropagation Without Storing Activations (NIPS, 2017)
- General Probabilistic Surface Optimization and Log Density Estimation (arXiv, 2019)
- Improved Variational Inference with Inverse Autoregressive Flow (NIPS, 2016)
- A Survey of Unsupervised Deep Domain Adaptation (ACM Trans. Intell. Syst. Technol., 2020)