Corpus ID: 220042269

Towards Understanding Hierarchical Learning: Benefits of Neural Representations

@article{Chen2020TowardsUH,
  title={Towards Understanding Hierarchical Learning: Benefits of Neural Representations},
  author={Minshuo Chen and Yu Bai and Jason D. Lee and Tuo Zhao and Huan Wang and Caiming Xiong and Richard Socher},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.13436}
}
Deep neural networks can empirically perform efficient hierarchical learning, in which the layers learn useful representations of the data. However, how they make use of intermediate representations is not explained by recent theories that relate them to "shallow learners" such as kernels. In this work, we demonstrate that intermediate neural representations add flexibility to neural networks and can be advantageous over raw inputs. We consider a fixed, randomly initialized neural…
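The setting the abstract describes, training only a linear readout on top of a fixed, randomly initialized neural representation rather than on the raw inputs, can be sketched in NumPy. This is a toy illustration, not the paper's construction: the quadratic target, the width, and the ridge readout below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a degree-2 target, which a linear model on raw inputs cannot fit.
d, n, width = 10, 2000, 512
X = rng.standard_normal((n, d))
y = X[:, 0] * X[:, 1]

# Fixed, randomly initialized one-hidden-layer representation.
# The hidden weights W are drawn once and never trained.
W = rng.standard_normal((d, width)) / np.sqrt(d)
Phi = np.maximum(X @ W, 0.0)  # ReLU features of the inputs

def ridge_fit_mse(features, targets, lam=1e-3):
    """Train only a linear readout on the given features (ridge regression)
    and return its mean squared fit error."""
    A = features.T @ features + lam * np.eye(features.shape[1])
    w = np.linalg.solve(A, features.T @ targets)
    resid = features @ w - targets
    return float(np.mean(resid ** 2))

mse_raw = ridge_fit_mse(X, y)     # linear model on raw inputs
mse_feat = ridge_fit_mse(Phi, y)  # linear model on random neural features
print(mse_raw, mse_feat)          # the random-feature model fits much better
```

Because the target has no linear component in the raw inputs, the first fit is essentially no better than predicting zero, while the readout over random ReLU features captures part of the quadratic structure; this is the kind of advantage of neural representations over raw inputs that the paper studies.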
