Corpus ID: 219981351

Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures

@article{Launay2020DirectFA,
  title={Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures},
  author={Julien Launay and Iacopo Poli and Fran{\c{c}}ois Boniface and Florent Krzakala},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.12878}
}
  • Despite being the workhorse of deep learning, the backpropagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. Furthermore, its biological plausibility is being challenged. Alternative schemes have been devised; yet, under the constraint of synaptic asymmetry, none have scaled to modern deep learning tasks and architectures. Here, we challenge this perspective, and study the applicability of Direct Feedback…
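The core idea behind Direct Feedback Alignment (DFA), as described in the abstract, is that the global output error is delivered straight to each hidden layer through a fixed random feedback matrix, rather than being propagated backwards layer by layer through the transposed forward weights. Since every layer's teaching signal then depends only on the output error, the updates can be computed in parallel. The following is a minimal NumPy sketch of that idea on a toy regression task, not the authors' implementation; the layer sizes, initialization scales, and learning rate are arbitrary illustrative choices.

```python
import numpy as np

# Toy sketch of Direct Feedback Alignment on a 2-hidden-layer tanh MLP.
# Each hidden layer receives the global output error e through a *fixed
# random* matrix B_i, instead of backprop's sequential W^T chain.

rng = np.random.default_rng(0)

# Toy regression task: fit y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)

n_in, n_h, n_out = 1, 64, 1
W1 = rng.normal(0.0, 1.0, (n_in, n_h));                  b1 = np.zeros(n_h)
W2 = rng.normal(0.0, 1.0 / np.sqrt(n_h), (n_h, n_h));    b2 = np.zeros(n_h)
W3 = rng.normal(0.0, 1.0 / np.sqrt(n_h), (n_h, n_out));  b3 = np.zeros(n_out)

# Fixed random feedback matrices: drawn once, never trained.
B1 = rng.normal(0.0, 1.0, (n_out, n_h))
B2 = rng.normal(0.0, 1.0, (n_out, n_h))

def forward(X):
    h1 = np.tanh(X @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    return h1, h2, h2 @ W3 + b3

init_loss = float(np.mean((forward(X)[2] - Y) ** 2))

lr, n = 0.02, len(X)
for _ in range(3000):
    h1, h2, y = forward(X)
    e = y - Y                          # global error (MSE gradient at output)

    # DFA: project e directly to each hidden layer via B_i.
    d2 = (e @ B2) * (1.0 - h2 ** 2)    # backprop would use e @ W3.T here
    d1 = (e @ B1) * (1.0 - h1 ** 2)    # backprop would also chain through W2

    # d1 and d2 depend only on e, so these updates need no backward sweep.
    W3 -= lr * h2.T @ e / n;  b3 -= lr * e.mean(axis=0)
    W2 -= lr * h1.T @ d2 / n; b2 -= lr * d2.mean(axis=0)
    W1 -= lr * X.T @ d1 / n;  b1 -= lr * d1.mean(axis=0)

final_loss = float(np.mean((forward(X)[2] - Y) ** 2))
```

Note that the output layer is still trained with its true gradient; only the hidden layers use the random feedback path, which is what the "synaptic asymmetry" constraint in the abstract refers to.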
