Learning multiple layers of representation

@article{Hinton2007LearningML,
  title={Learning multiple layers of representation},
  author={Geoffrey E. Hinton},
  journal={Trends in Cognitive Sciences},
  year={2007},
  volume={11},
  pages={428--434}
}

To achieve its impressive performance in tasks such as speech perception or object recognition, the brain extracts multiple levels of representation from the sensory input. Backpropagation was the first computationally efficient model of how neural networks could learn multiple layers of representation, but it required labeled training data and it did not work well in deep networks. The limitations of backpropagation learning can now be overcome by using multilayer neural networks that contain…

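The abstract is cut off above, but the approach it points to, in Hinton's closely related work on deep belief networks, is greedy, unsupervised, layer-by-layer learning with restricted Boltzmann machines trained by contrastive divergence. The listing below is only a rough sketch under that assumption: the helper names (train_rbm, greedy_pretrain), layer sizes, learning rate, and toy data are invented for illustration and are not taken from the paper.

    # Minimal sketch (not the paper's code): greedy, layer-by-layer training of a
    # stack of restricted Boltzmann machines (RBMs) with one-step contrastive
    # divergence (CD-1). All hyperparameters and data below are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_rbm(data, n_hidden, epochs=5, lr=0.05, batch=20):
        """Train one RBM on `data` (rows = binary visible vectors) with CD-1."""
        n_visible = data.shape[1]
        W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        b_vis = np.zeros(n_visible)
        b_hid = np.zeros(n_hidden)
        for _ in range(epochs):
            for start in range(0, len(data), batch):
                v0 = data[start:start + batch]
                # Positive phase: hidden probabilities given the data.
                h0 = sigmoid(v0 @ W + b_hid)
                h0_sample = (rng.random(h0.shape) < h0).astype(float)
                # Negative phase: one step of alternating Gibbs sampling.
                v1 = sigmoid(h0_sample @ W.T + b_vis)
                h1 = sigmoid(v1 @ W + b_hid)
                # CD-1 update: data-driven minus reconstruction-driven statistics.
                W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
                b_vis += lr * (v0 - v1).mean(axis=0)
                b_hid += lr * (h0 - h1).mean(axis=0)
        return W, b_hid

    def greedy_pretrain(data, layer_sizes):
        """Stack RBMs: each layer is trained on the hidden activities of the layer below."""
        layers, x = [], data
        for n_hidden in layer_sizes:
            W, b_hid = train_rbm(x, n_hidden)
            layers.append((W, b_hid))
            x = sigmoid(x @ W + b_hid)  # representation passed up to the next layer
        return layers

    # Toy unlabeled binary data; real experiments would use images or speech features.
    toy_data = (rng.random((200, 64)) < 0.3).astype(float)
    stack = greedy_pretrain(toy_data, layer_sizes=[32, 16])
    print([W.shape for W, _ in stack])  # [(64, 32), (32, 16)]

Note that no labels are used anywhere in this sketch; each layer learns to model the activity of the layer beneath it, which is the sense in which the network learns multiple layers of representation without backpropagation.
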
    Citations

    • Learning Representations from Deep Networks Using Mode Synthesizers
    • Modeling language and cognition with deep unsupervised learning: a tutorial overview
    • Learning Deep Visual Representations
    • Vector LIDA
    • Contributions to Deep Learning Models
    • A review on advances in deep learning
    • Deep learning models of biological visual information processing
    • Learning Paired-Associate Images with an Unsupervised Deep Learning Architecture
