ImageNet classification with deep convolutional neural networks
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes.
Dropout: a simple way to prevent neural networks from overfitting
Dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
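The snippet above names the technique but not its mechanism: during training, each hidden unit is randomly zeroed so units cannot co-adapt. A minimal sketch of "inverted" dropout (one common variant, not the paper's exact implementation):

```python
import numpy as np

def dropout(activations, p_drop, rng, train=True):
    """Randomly zero each unit with probability p_drop during training,
    scaling the survivors by 1/(1 - p_drop) so the expected activation
    is unchanged and no rescaling is needed at test time."""
    if not train or p_drop == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
h = np.ones(10_000)               # toy layer of 10,000 unit activations
dropped = dropout(h, p_drop=0.5, rng=rng)
# Roughly half the units are zeroed, but the mean activation stays near 1.
```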
A Fast Learning Algorithm for Deep Belief Nets
We show how to use complementary priors to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers.
Reducing the Dimensionality of Data with Neural Networks
We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions.
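The averaging step the snippet describes is straightforward; distillation then compresses the cumbersome ensemble into one small model trained on the ensemble's soft predictions. A toy sketch of just the averaging, with three hypothetical models represented by fixed softmax outputs over four classes:

```python
import numpy as np

# Three hypothetical "models", each giving a probability distribution
# over 4 classes for the same input (rows sum to 1).
preds = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.60, 0.20, 0.10, 0.10],
    [0.50, 0.30, 0.10, 0.10],
])

# Ensembling: average the predicted distributions across models.
# In distillation, these averaged "soft targets" would then be used
# to train a single smaller network.
ensemble = preds.mean(axis=0)
```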
Learning internal representations by error propagation
This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion
Rectified Linear Units Improve Restricted Boltzmann Machines
Restricted Boltzmann machines were developed using binary stochastic hidden units.
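The paper's contribution is replacing those binary hidden units with rectified linear units. The rectifier itself is a one-line nonlinearity; a minimal sketch (the paper's noisy-rectified variant adds Gaussian noise before rectification, omitted here):

```python
import numpy as np

def relu(x):
    # Rectified linear unit: passes positive input through unchanged,
    # clamps negative input to zero.
    return np.maximum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
y = relu(x)
```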
Visualizing Data using t-SNE
We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding.
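At the heart of Stochastic Neighbor Embedding and t-SNE is converting pairwise distances in the high-dimensional space into a joint probability distribution over pairs of points. A simplified sketch of that first step (using one fixed bandwidth `sigma` for all points, rather than the per-point perplexity calibration the method actually performs):

```python
import numpy as np

def joint_probabilities(X, sigma=1.0):
    """Turn pairwise squared distances into a symmetric joint
    distribution P, where nearby points get high probability."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    affinities = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(affinities, 0.0)   # a point is not its own neighbour
    P = affinities / affinities.sum()   # normalise to a joint distribution
    return (P + P.T) / 2.0              # symmetrise

X = np.random.default_rng(1).normal(size=(5, 3))
P = joint_probabilities(X)
```

t-SNE then places low-dimensional map points so that an analogous distribution over map distances (a heavy-tailed Student-t kernel, hence the "t") matches P.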
Learning representations by back-propagating errors
We describe a new learning procedure, back-propagation, for networks of neurone-like units.
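The procedure repeatedly adjusts the weights down the gradient of the error between actual and desired output. A minimal sketch on a single sigmoid unit learning logical AND (a toy case, not the paper's multi-layer experiments):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 0.0, 0.0, 1.0])   # desired outputs for AND

w = np.zeros(2)
b = 0.0
for _ in range(5000):
    y = sigmoid(X @ w + b)
    err = y - t                      # derivative of squared error w.r.t. y
    grad = err * y * (1.0 - y)       # propagated back through the sigmoid
    w -= 0.5 * (X.T @ grad)          # move weights down the error gradient
    b -= 0.5 * grad.sum()

pred = sigmoid(X @ w + b) > 0.5
```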
Dynamic Routing Between Capsules
We show that a multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net.