Corpus ID: 46933259

Holographic Neural Architectures

@article{Daouda2018HolographicNA,
  title={Holographic Neural Architectures},
  author={Tariq Daouda and Jeremie Zumer and Claude Perreault and S{\'e}bastien Lemieux},
  journal={ArXiv},
  year={2018},
  volume={abs/1806.00931}
}
Representation learning is at the heart of what makes deep learning effective. In this work, we introduce a new framework for representation learning that we call "Holographic Neural Architectures" (HNAs). In the same way that an observer can experience the 3D structure of a holographed object by looking at its hologram from several angles, HNAs derive Holographic Representations from the training set. These representations can then be explored by moving along a continuous bounded single… 
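
The truncated sentence above refers to exploring representations along a single continuous, bounded dimension. As a rough illustration only (this is not the authors' HNA code; the decoder architecture, dimensions, and the coordinate name t are all hypothetical), the sketch below conditions a decoder on a fixed learned representation plus a bounded scalar coordinate and sweeps that coordinate to produce different "views":

# Hypothetical sketch, not the paper's method: decode a fixed learned
# representation while sweeping a bounded scalar coordinate t in [0, 1].
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Toy decoder conditioned on a representation z and a view coordinate t."""
    def __init__(self, latent_dim=32, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, 256),  # +1 input for the coordinate t
            nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, z, t):
        t = t.clamp(0.0, 1.0)                # keep the coordinate bounded
        return self.net(torch.cat([z, t], dim=-1))

decoder = Decoder()
z = torch.randn(1, 32)                       # stands in for a learned representation
for t in torch.linspace(0, 1, steps=5):      # sweep the bounded dimension
    x = decoder(z, t.view(1, 1))
    print(round(t.item(), 2), x.shape)

Each step of the sweep yields one "view" of the same representation, analogous to looking at a hologram from a different angle.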


References

Showing 1–10 of 27 references
Variational Lossy Autoencoder
TLDR: This paper presents a simple but principled method to learn global representations by combining the Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE, and PixelRNN/CNN, which greatly improves the generative modeling performance of VAEs.
Representation Learning: A Review and New Perspectives
TLDR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Understanding the difficulty of training deep feedforward neural networks
TLDR: The objective is to better understand why standard gradient descent from random initialization performs so poorly with deep neural networks, to shed light on recent relative successes, and to help design better algorithms in the future.
Unsupervised Feature Learning and Deep Learning: A Review and New Perspectives
TLDR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, manifold learning, and deep learning.
Auto-Encoding Variational Bayes
TLDR: Introduces a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
Rectifier Nonlinearities Improve Neural Network Acoustic Models
TLDR: This work explores the use of deep rectifier networks as acoustic models for the 300-hour Switchboard conversational speech recognition task, and analyzes hidden-layer representations to quantify how ReLU units encode inputs differently from sigmoidal units.
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization
TLDR: This work describes and analyzes an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and yields regret guarantees provably as good as those of the best proximal function that could be chosen in hindsight.
A Neural Probabilistic Language Model
TLDR: This work proposes to fight the curse of dimensionality by learning a distributed representation for words, which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences.
Reading Digits in Natural Images with Unsupervised Feature Learning
TLDR: Introduces a new benchmark dataset for research use containing over 600,000 labeled digits cropped from Street View images, and employs variants of two recently proposed unsupervised feature learning methods, finding them convincingly superior on the benchmark.
Gapped sequence alignment using artificial neural networks: application to the MHC class I system
TLDR: It is shown that prediction methods based on alignments that include insertions and deletions achieve significantly higher performance than methods trained on peptides of a single length, and that the NetMHC-4.0 method can learn the length profile of different MHC molecules.