Publications
ImageNet classification with deep convolutional neural networks
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we …
  • Citations: 50,086
  • Highly influential citations: 7,575
Learning Multiple Layers of Features from Tiny Images
Groups at MIT and NYU have collected a dataset of millions of tiny colour images from the web. It is, in principle, an excellent dataset for unsupervised training of deep generative models, but …
  • Citations: 8,591
  • Highly influential citations: 2,852
Dropout: a simple way to prevent neural networks from overfitting
Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making …
  • Citations: 18,206
  • Highly influential citations: 1,654
Improving neural networks by preventing co-adaptation of feature detectors
When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting half of the …
  • Citations: 4,699
  • Highly influential citations: 412
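The two dropout papers above both describe the same core idea: randomly omitting units during training so feature detectors cannot co-adapt. A minimal NumPy sketch of that mechanism (the function name and the "inverted" train-time scaling are illustrative conveniences, not the papers' exact formulation, which instead halves the weights at test time):

```python
import numpy as np

def dropout(activations, p_drop=0.5, training=True, rng=None):
    """Zero each unit independently with probability p_drop during training.

    Surviving units are divided by the keep probability at train time
    ("inverted" dropout), so the test-time forward pass needs no rescaling;
    this matches the papers' weight-halving in expectation.
    """
    if not training:
        return activations
    rng = np.random.default_rng() if rng is None else rng
    keep = 1.0 - p_drop
    mask = rng.random(activations.shape) < keep  # Bernoulli(keep) mask
    return activations * mask / keep

# A batch of 4 examples with 8 hidden units each, all activations = 1.
h = np.ones((4, 8))
h_train = dropout(h, p_drop=0.5, training=True,
                  rng=np.random.default_rng(0))
h_test = dropout(h, training=False)  # identity at test time
```

With p_drop = 0.5 each surviving unit is scaled to 2.0, so the expected activation is unchanged between training and test.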
One weird trick for parallelizing convolutional neural networks
I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern …
  • Citations: 548
  • Highly influential citations: 114
Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection
We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network …
  • Citations: 906
  • Highly influential citations: 68
Convolutional Deep Belief Networks on CIFAR-10
We describe how to train a two-layer convolutional Deep Belief Network (DBN) on the 1.6 million tiny images dataset. When training a convolutional DBN, one must decide what to do with the edge pixels …
  • Citations: 319
  • Highly influential citations: 51
Transforming Auto-Encoders
The artificial neural networks that are used to recognize shapes typically use one or more layers of learned feature detectors that produce scalar outputs. By contrast, the computer vision community …
  • Citations: 517
  • Highly influential citations: 46
Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images
Deep belief nets have been successful in modeling handwritten characters, but it has proved more difficult to apply them to real images. The problem lies in the restricted Boltzmann machine (RBM) …
  • Citations: 223
  • Highly influential citations: 26
Using very deep autoencoders for content-based image retrieval
We show how to learn many layers of features on color images and we use these features to initialize deep autoencoders. We then use the autoencoders to map images to short binary codes. Using …
  • Citations: 311
  • Highly influential citations: 19
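The retrieval paper above maps each image to a short binary code so that similar images can be found by comparing codes rather than pixels. A minimal sketch of the lookup step, assuming the codes have already been produced (e.g. by thresholding an autoencoder's code layer; the function name and toy data are illustrative):

```python
import numpy as np

def hamming_search(query_code, db_codes, k=3):
    """Return indices of the k database codes closest to the query
    in Hamming distance (number of differing bits).

    query_code: 1-D array of 0/1 values.
    db_codes:   2-D array, one 0/1 code per row.
    """
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    # Stable sort so ties are broken by database order.
    return np.argsort(dists, kind="stable")[:k]

# Four toy 4-bit codes standing in for autoencoder outputs.
db = np.array([[0, 0, 0, 0],
               [1, 1, 1, 1],
               [0, 0, 1, 1],
               [0, 1, 0, 0]])
query = np.array([0, 0, 0, 1])
nearest = hamming_search(query, db, k=2)  # rows at Hamming distance 1
```

With short codes like these, the whole database scan is a single vectorized comparison; for the very large collections the paper targets, codes short enough to use as memory addresses allow near-constant-time lookup instead.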