# Scoring and Classifying with Gated Auto-Encoders

```bibtex
@inproceedings{Im2015ScoringAC,
  title     = {Scoring and Classifying with Gated Auto-Encoders},
  author    = {D. Im and Graham W. Taylor},
  booktitle = {ECML/PKDD},
  year      = {2015}
}
```

#### 2 Citations

Modeling Musical Structure with Artificial Neural Networks

- Computer Science, Engineering
- ArXiv
- 2020

This thesis explores the application of ANNs to several aspects of musical structure modeling, identifies some of the challenges involved, and proposes strategies to address them; it motivates the relevance of musical transformations in structure modeling and shows how a connectionist model, the Gated Autoencoder, can be employed to learn transformations between musical fragments.

Gate-Layer Autoencoders with Application to Incomplete EEG Signal Recovery

- Computer Science
- 2019 International Joint Conference on Neural Networks (IJCNN)
- 2019

This paper proposes a new AE architecture, the Gate-Layer AE (GLAE), whose design gives it an inherent ability to recover missing variables from the available ones and to act as a concurrent multi-function approximator.

#### References

Showing 1–10 of 31 references

On autoencoder scoring

- Computer Science
- ICML
- 2013

This paper shows how an autoencoder can assign meaningful scores to data, independently of the training procedure and without reference to any probabilistic model, by interpreting it as a dynamical system, and how multiple unnormalized scores can be combined into a generative classifier.

Gated Autoencoders with Tied Input Weights

- Mathematics
- ICML
- 2013

The semantic interpretation of images is one of the core applications of deep learning. Several techniques have been recently proposed to model the relation between two images, with application to…

A Connection Between Score Matching and Denoising Autoencoders

- Mathematics, Computer Science
- Neural Computation
- 2011

A proper probabilistic model for the denoising autoencoder technique is defined, making it possible in principle to sample from such models or to rank examples by their energy; a different way to apply score matching is also suggested, one that is related to learning to denoise and does not require computing second derivatives.
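The connection this reference establishes is often summarized by the following identity (a sketch in my own notation, not taken from the snippet above, and stated under the Gaussian-corruption assumption):

```latex
% Denoising score matching: with Gaussian corruption
% q_\sigma(\tilde{x} \mid x) = \mathcal{N}(\tilde{x};\, x, \sigma^2 I),
% fit a score model \psi(\tilde{x};\theta) to the corruption score:
\mathcal{J}(\theta)
  = \mathbb{E}_{q_\sigma(\tilde{x},\,x)}
    \left\| \psi(\tilde{x};\theta)
      - \frac{\partial \log q_\sigma(\tilde{x}\mid x)}{\partial \tilde{x}}
    \right\|^2,
\qquad
\frac{\partial \log q_\sigma(\tilde{x}\mid x)}{\partial \tilde{x}}
  = \frac{x - \tilde{x}}{\sigma^2}.
```

When \(\psi\) is parameterized through an autoencoder's reconstruction, this objective reduces to an ordinary squared-error denoising criterion, which is why no second derivatives are needed.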

What regularized auto-encoders learn from the data-generating distribution

- Computer Science, Mathematics
- J. Mach. Learn. Res.
- 2014

It is shown that the auto-encoder captures the score (the derivative of the log-density with respect to the input), which contradicts previous interpretations of reconstruction error as an energy function.

Extracting and composing robust features with denoising autoencoders

- Mathematics, Computer Science
- ICML '08
- 2008

This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.
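The denoising training principle can be sketched in a few lines of NumPy. This is a minimal illustration only: the masking-noise corruption, sigmoid units, squared error, and learning rate are my own choices, not details taken from the snippet above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corrupt(X, level):
    """Masking noise: zero out a random fraction `level` of the inputs."""
    return X * (rng.random(X.shape) >= level)

def dae_step(X, W1, b1, W2, b2, level=0.3, lr=0.5):
    """One gradient step on the denoising objective: encode a *corrupted*
    input, then penalize squared reconstruction error against the *clean*
    input. Updates the parameters in place and returns the batch loss."""
    Xc = corrupt(X, level)
    H = sigmoid(Xc @ W1 + b1)      # hidden code of the corrupted input
    R = sigmoid(H @ W2 + b2)       # reconstruction
    dR = (R - X) * R * (1 - R)     # backprop through the squared error
    dH = (dR @ W2.T) * H * (1 - H)
    n = len(X)
    W2 -= lr * H.T @ dR / n
    b2 -= lr * dR.mean(axis=0)
    W1 -= lr * Xc.T @ dH / n
    b1 -= lr * dH.mean(axis=0)
    return float(np.mean((R - X) ** 2))
```

The key point is that the loss compares the reconstruction to the clean input `X`, not to the corrupted `Xc`, so the model must learn to undo the corruption rather than merely copy its input.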

Contractive Auto-Encoders: Explicit Invariance During Feature Extraction

- Mathematics, Computer Science
- ICML
- 2011

It is found empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold.
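For a sigmoid encoder the contractive penalty — the squared Frobenius norm of the encoder's Jacobian — has a simple closed form, which a short sketch can make concrete (my own minimal illustration; shapes and names are assumptions, not from the snippet above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cae_penalty(x, W, b):
    """Squared Frobenius norm of the encoder Jacobian dh/dx for a
    sigmoid encoder h = sigmoid(W @ x + b), i.e. the contractive penalty.
    Since dh_j/dx_i = h_j * (1 - h_j) * W[j, i], the norm factorizes:
    ||J||_F^2 = sum_j h_j^2 (1 - h_j)^2 * sum_i W[j, i]^2."""
    h = sigmoid(W @ x + b)
    return float(np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1)))
```

Penalizing this quantity encourages the encoder to be locally flat except along directions where the data actually varies, which is the "contraction" the entry above describes.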

The Potential Energy of an Autoencoder

- Computer Science, Medicine
- IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2015

It is shown how most common autoencoders are naturally associated with an energy function, independent of the training procedure, and that the energy landscape can be inferred analytically by integrating the reconstruction function of the autoencoder.
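For the simplest case — a tied-weight autoencoder with sigmoid hiddens and a linear decoder — integrating the vector field r(x) − x yields a closed-form energy, which can be sketched as follows (a minimal illustration of that construction; the code and names are mine, not from the snippet):

```python
import numpy as np

def softplus(z):
    return np.logaddexp(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ae_energy(x, W, b, c):
    """Energy for a tied-weight autoencoder with reconstruction
    r(x) = W.T @ sigmoid(W @ x + b) + c. Integrating r(x) - x gives,
    up to an additive constant:
    E(x) = sum_k softplus(w_k . x + b_k) + c . x - ||x||^2 / 2,
    so that grad E(x) = r(x) - x."""
    return float(np.sum(softplus(W @ x + b)) + c @ x - 0.5 * x @ x)
```

The defining property — that the gradient of this energy recovers the reconstruction residual r(x) − x — is what lets such energies serve as unnormalized scores for ranking data.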

Conditional Restricted Boltzmann Machines for Structured Output Prediction

- Computer Science, Mathematics
- UAI
- 2011

This work argues that standard Contrastive Divergence-based learning may not be suitable for training CRBMs, and proposes improved learning algorithms for two distinct types of structured output prediction problems, showing that they can work much better than Contrastive Divergence on both.

Deep Generative Stochastic Networks Trainable by Backprop

- Mathematics, Computer Science
- ICML
- 2014

Theorems that generalize recent work on the probabilistic interpretation of denoising autoencoders are provided, and an interesting justification for dependency networks and generalized pseudolikelihood is obtained along the way.

Gated Softmax Classification

- Computer Science, Mathematics
- NIPS
- 2010

A fully probabilistic model that computes class probabilities by combining an input vector multiplicatively with a vector of binary latent variables is described, and it is shown that this model can achieve classification performance competitive with (kernel) SVMs, backpropagation, and deep belief nets.
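Because the latent variables are binary, they can be marginalized out analytically, leaving a sum of softplus terms per class. A minimal sketch of that marginalized form (my own reading of the model; the array layout and function name are assumptions, and the original uses factored weights that this sketch omits):

```python
import numpy as np

def softplus(z):
    return np.logaddexp(0.0, z)

def gated_softmax_proba(x, W, b):
    """Class probabilities after marginalizing out the binary latents:
    each class y has K latent units multiplying the input, and summing
    over their 2^K states gives log p(y|x) = sum_k softplus(W[y,k] . x
    + b[y,k]), up to the normalizer over classes.
    W: (n_classes, K, n_dims), b: (n_classes, K)."""
    logits = softplus(W @ x + b).sum(axis=1)  # shape (n_classes,)
    logits -= logits.max()                    # numerical stability
    p = np.exp(logits)
    return p / p.sum()
```

The multiplicative interaction is what distinguishes this from a plain softmax: each latent unit gates a separate linear view of the input, and the product over gates is absorbed into the sum of softplus terms.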