Corpus ID: 18030110

Higher Order Statistical Decorrelation without Information Loss

@inproceedings{Deco1994HigherOS,
  title={Higher Order Statistical Decorrelation without Information Loss},
  author={G. Deco and W. Brauer},
  booktitle={NIPS},
  year={1994}
}
  • Mathematics, Computer Science
  • A neural network learning paradigm based on information theory is proposed as a way to perform, in an unsupervised fashion, redundancy reduction among the elements of the output layer without loss of information from the sensory input. The model developed performs nonlinear decorrelation up to higher orders of the cumulant tensors and results in probabilistically independent components of the output layer. This means we need not assume a Gaussian distribution either at the input or at the output.
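The construction the abstract describes (an invertible, volume-conserving network whose outputs are pushed toward statistical independence by nulling higher-order cross-cumulants) can be illustrated with a minimal NumPy sketch. This is a sketch under stated assumptions, not the paper's exact formulation: the unit-triangular additive map, the tanh nonlinearity, and the restriction of the loss to 2nd- and 3rd-order cross-cumulants are all illustrative choices.

```python
import numpy as np

def volume_preserving_map(x, weights):
    """Unit-triangular additive map: y_0 = x_0, y_i = x_i + tanh(x_{<i} . w_i).

    The Jacobian is lower-triangular with ones on the diagonal, so
    |det J| = 1: the map conserves volume and therefore loses no
    information, the structural property the abstract relies on.
    The tanh parameterization is an illustrative assumption.
    """
    y = x.copy()
    for i in range(1, x.shape[1]):
        y[:, i] = x[:, i] + np.tanh(x[:, :i] @ weights[i][:i])
    return y

def cross_cumulant_loss(y):
    """Sum of squared 2nd- and 3rd-order cross-cumulants between distinct
    output components; driving this toward zero decorrelates the outputs
    beyond second order (i.e., beyond the Gaussian case)."""
    yc = y - y.mean(axis=0)          # center: cumulants below use zero-mean data
    d = yc.shape[1]
    cov = yc.T @ yc / len(yc)
    loss = np.sum(np.triu(cov, k=1) ** 2)   # 2nd-order cross terms
    for i in range(d):
        for j in range(d):
            if i != j:
                # for zero-mean data, cum(y_i, y_i, y_j) = E[y_i^2 y_j]
                loss += np.mean(yc[:, i] ** 2 * yc[:, j]) ** 2
    return loss

# Toy usage: evaluate the loss on inputs with higher-order dependence.
rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 3))
x[:, 2] += 0.5 * x[:, 0] ** 2        # inject a non-Gaussian dependency
weights = [np.zeros(3)] + [0.1 * rng.standard_normal(3) for _ in range(2)]
y = volume_preserving_map(x, weights)
print(cross_cumulant_loss(x), cross_cumulant_loss(y))
```

A full implementation would minimize `cross_cumulant_loss` over the triangular weights; because the map's Jacobian determinant is identically 1, the output carries the same information as the input, so reducing redundancy cannot discard information.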
    32 Citations
    • Improving Variational Autoencoders with Inverse Autoregressive Flow
    • Learning Bijective Feature Maps for Linear ICA
    • Integer Discrete Flows and Lossless Compression
    • The Reversible Residual Network: Backpropagation Without Storing Activations
    • General Probabilistic Surface Optimization and Log Density Estimation
    • Improved Variational Inference with Inverse Autoregressive Flow
    • Compression with Flows via Local Bits-Back Coding
    • A Survey of Unsupervised Deep Domain Adaptation
