Corpus ID: 10467483

Mixing Complexity and its Applications to Neural Networks

@article{Moshkovitz2017MixingCA,
  title={Mixing Complexity and its Applications to Neural Networks},
  author={M. Moshkovitz and Naftali Tishby},
  journal={ArXiv},
  year={2017},
  volume={abs/1703.00729}
}
We suggest analyzing neural networks through the prism of space constraints. We observe that most training algorithms applied in practice use bounded memory, which enables us to use a new notion, introduced in the study of space-time tradeoffs, that we call mixing complexity. This notion was devised in order to measure the (in)ability to learn using a bounded-memory algorithm. In this paper we describe how we use mixing complexity to obtain new results on what can and cannot be learned using…
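
The abstract's starting observation, that practical training algorithms use memory bounded independently of the number of training examples, is easy to make concrete. The sketch below is our own illustration, not code from the paper, and every name in it is ours: it trains a logistic-regression model by one-pass streaming SGD, so the only state carried between examples is the d-dimensional weight vector, and the memory footprint is O(d) no matter how long the example stream is.

    import math
    import random

    def sigmoid(z):
        # Numerically stable logistic function.
        if z >= 0:
            return 1.0 / (1.0 + math.exp(-z))
        ez = math.exp(z)
        return ez / (1.0 + ez)

    def bounded_memory_sgd(example_stream, d, lr=0.1):
        """One-pass SGD for logistic regression: O(d) memory, independent of stream length."""
        w = [0.0] * d  # the only state retained across examples
        for x, y in example_stream:  # labels y are in {0, 1}
            z = sum(wi * xi for wi, xi in zip(w, x))
            g = sigmoid(z) - y  # gradient of the log-loss with respect to z
            for i in range(d):
                w[i] -= lr * g * x[i]  # update in place; x is then discarded
        return w

    # Toy usage: a generator yields examples one at a time, so the full
    # dataset is never materialized in memory.
    def stream(n, d, seed=0):
        rng = random.Random(seed)
        target = [rng.uniform(-1.0, 1.0) for _ in range(d)]
        for _ in range(n):
            x = [rng.uniform(-1.0, 1.0) for _ in range(d)]
            y = 1 if sum(t * xi for t, xi in zip(target, x)) > 0 else 0
            yield x, y

    w = bounded_memory_sgd(stream(10000, 5), d=5)
    print(w)

Mixing complexity, as the abstract describes it, is the tool the paper uses to measure which learning tasks such bounded-memory algorithms can and cannot solve.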

