Learning multiple layers of representation

Geoffrey E. Hinton
Trends in Cognitive Sciences, 2007
To achieve its impressive performance in tasks such as speech perception or object recognition, the brain extracts multiple levels of representation from the sensory input. Backpropagation was the first computationally efficient model of how neural networks could learn multiple layers of representation, but it required labeled training data and it did not work well in deep networks. The limitations of backpropagation learning can now be overcome by using multilayer neural networks that contain…

