- Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
- Journal of Machine Learning Research
- 2014

Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this…
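The snippet above describes dropout at a high level. A minimal NumPy sketch of the idea might look like the following; it uses the common "inverted dropout" scaling at training time (equivalent in expectation to the paper's test-time weight scaling), and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p_drop=0.5, train=True):
    """Inverted dropout: zero each unit with probability p_drop and
    rescale the survivors, so no rescaling is needed at test time."""
    if not train:
        return h
    mask = (rng.random(h.shape) >= p_drop).astype(h.dtype)
    return h * mask / (1.0 - p_drop)

h = np.ones((4, 8))          # a batch of hidden activations
out = dropout(h, p_drop=0.5)
# Kept units become 2.0 and dropped units 0.0, so the
# expected activation of every unit stays at 1.0.
```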

- G E Hinton, R R Salakhutdinov
- Science
- 2006

High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an…
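A minimal sketch of the encode–bottleneck–decode idea, trained by plain gradient descent on toy data (sizes and names are illustrative; the paper's networks are deep and nonlinear and rely on layer-wise pretraining, which this linear toy omits):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 200 points in 10-D that actually lie on a 2-D subspace.
Z = rng.normal(size=(200, 2))
X = Z @ rng.normal(size=(2, 10))

# Linear autoencoder with a small central (code) layer of 2 units.
W_enc = rng.normal(scale=0.1, size=(10, 2))
W_dec = rng.normal(scale=0.1, size=(2, 10))

def reconstruction_loss(X, W_enc, W_dec):
    R = X @ W_enc @ W_dec            # encode then decode
    return float(np.mean((R - X) ** 2))

loss_before = reconstruction_loss(X, W_enc, W_dec)
lr = 0.05
for _ in range(1000):
    code = X @ W_enc                 # low-dimensional code
    R = code @ W_dec                 # reconstruction
    G = 2.0 * (R - X) / X.size       # dLoss/dR
    grad_dec = code.T @ G
    grad_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
loss_after = reconstruction_loss(X, W_enc, W_dec)
# Reconstruction error drops as the 2-unit code learns the subspace.
```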

- Geoffrey E. Hinton, Simon Osindero, Yee Whye Teh
- Neural Computation
- 2006

We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected…
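The greedy, one-layer-at-a-time recipe can be sketched with a toy binary RBM trained by one-step contrastive divergence (CD-1); biases and many practical details are omitted, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden, epochs=5, lr=0.1):
    """CD-1 for a binary RBM (weights only, no biases)."""
    W = rng.normal(scale=0.01, size=(V.shape[1], n_hidden))
    for _ in range(epochs):
        ph = sigmoid(V @ W)                          # hidden probabilities
        h = (rng.random(ph.shape) < ph).astype(float)  # sampled hidden states
        v_recon = sigmoid(h @ W.T)                   # one-step reconstruction
        ph_recon = sigmoid(v_recon @ W)
        W += lr * (V.T @ ph - v_recon.T @ ph_recon) / len(V)
    return W

# Greedy stacking: train an RBM, then feed its hidden
# activations to the next RBM as if they were data.
data = (rng.random((100, 20)) < 0.5).astype(float)
layer_sizes = [20, 12, 6]
weights, V = [], data
for n_vis, n_hid in zip(layer_sizes, layer_sizes[1:]):
    W = train_rbm(V, n_hid)
    weights.append(W)
    V = sigmoid(V @ W)    # activations become the next layer's input
# The stack's weight shapes: 20 -> 12, then 12 -> 6.
```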

- Laurens van der Maaten, Geoffrey E. Hinton
- Journal of Machine Learning Research
- 2008

We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd…
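One concrete piece of the technique is the heavy-tailed Student-t kernel (one degree of freedom) used for pairwise similarities in the low-dimensional map; a small sketch, with illustrative map coordinates:

```python
import numpy as np

def q_matrix(Y):
    """Low-dimensional similarities used by t-SNE: a Student-t kernel
    with one degree of freedom, normalized over all pairs."""
    d2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    num = 1.0 / (1.0 + d2)
    np.fill_diagonal(num, 0.0)   # q_ii is defined to be zero
    return num / num.sum()

Y = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
Q = q_matrix(Y)
# Nearby map points (rows 0 and 1) receive far more probability mass
# than distant pairs, while the heavy tails keep dissimilar points
# from being crowded together.
```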

- Vinod Nair, Geoffrey E. Hinton
- ICML
- 2010

Restricted Boltzmann machines were developed using binary stochastic hidden units. These can be generalized by replacing each binary unit by an infinite number of copies that all have the same weights but have progressively more negative biases. The learning and inference rules for these "Stepped Sigmoid Units" are unchanged. They can be approximated…
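The claimed approximation chain can be checked numerically: the infinite sum of sigmoid copies with biases shifted by 0.5, 1.5, 2.5, … is close to softplus, log(1 + e^x), which in turn is well approximated by max(0, x) for inputs of large magnitude. A sketch truncating the infinite sum (function names are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def stepped_sigmoid_sum(x, n_copies=50):
    """Sum of sigmoid copies sharing weights, with biases
    -0.5, -1.5, -2.5, ... (infinite sum truncated to n_copies)."""
    return sum(sigmoid(x - i + 0.5) for i in range(1, n_copies + 1))

def softplus(x):
    return math.log1p(math.exp(x))

x = 2.0
approx = stepped_sigmoid_sum(x)   # ~2.12
exact = softplus(x)               # ~2.13
# The two agree to within a few hundredths, and both approach
# max(0, x) — the rectified linear unit — as |x| grows.
```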

- Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, Ruslan R. Salakhutdinov
- arXiv
- 2012

When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other…

- Alex Graves, Abdel-rahman Mohamed, Geoffrey E. Hinton
- 2013 IEEE International Conference on Acoustics…
- 2013

Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved…

- Geoffrey E. Hinton
- Neural Computation
- 2002

It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual "expert" models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule…
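The core combination rule, multiply the experts' distributions and renormalize, is easy to show for two toy discrete distributions (the numbers are purely illustrative; the paper's experts are latent-variable models over high-dimensional data):

```python
import numpy as np

# Two "expert" distributions over the same five discrete states.
p1 = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
p2 = np.array([0.3, 0.3, 0.2, 0.1, 0.1])

# Product of experts: multiply pointwise, then renormalize.
product = p1 * p2
poe = product / product.sum()
# poe is [0.15, 0.30, 0.40, 0.10, 0.05]: a state keeps substantial
# probability only if *every* expert assigns it substantial
# probability — each expert can veto, unlike in a mixture.
```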

- Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton
- NIPS
- 2012

We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has…

- Geoffrey E. Hinton
- Neural Networks: Tricks of the Trade
- 2012