Corpus ID: 8489617

Understanding Synthetic Gradients and Decoupled Neural Interfaces

@inproceedings{Czarnecki2017UnderstandingSG,
  title={Understanding Synthetic Gradients and Decoupled Neural Interfaces},
  author={Wojciech Czarnecki and G. Swirszcz and Max Jaderberg and Simon Osindero and Oriol Vinyals and K. Kavukcuoglu},
  booktitle={ICML},
  year={2017}
}
When training neural networks, the use of Synthetic Gradients (SG) allows layers or modules to be trained without update locking - without waiting for a true error gradient to be backpropagated - resulting in Decoupled Neural Interfaces (DNIs). This ability to update parts of a neural network asynchronously and with only local information was demonstrated to work empirically in Jaderberg et al. (2016). However, there has been very little demonstration of what changes DNIs …
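
The mechanism described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the paper's code: a lower layer is updated immediately using a gradient predicted by a small synthetic-gradient module, and that module is then regressed toward the true gradient once it becomes available. The layer sizes, the purely linear SG module, and the MSE regression target are illustrative assumptions.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy data and dimensions (illustrative only).
    x = torch.randn(32, 10)
    y = torch.randn(32, 1)

    layer1 = nn.Linear(10, 20)     # lower module: updated with synthetic gradients
    layer2 = nn.Linear(20, 1)      # upper module: trained with the true task loss
    sg_module = nn.Linear(20, 20)  # predicts dL/dh from the activation h alone

    opt1 = torch.optim.Adam(layer1.parameters(), lr=1e-3)
    opt2 = torch.optim.Adam(layer2.parameters(), lr=1e-3)
    opt_sg = torch.optim.Adam(sg_module.parameters(), lr=1e-3)
    mse = nn.MSELoss()

    for step in range(200):
        # Lower module: no update locking - it uses the *predicted* gradient right away.
        h = torch.relu(layer1(x))
        synthetic_grad = sg_module(h.detach())
        opt1.zero_grad()
        h.backward(synthetic_grad.detach())   # backprop the synthetic gradient into layer1
        opt1.step()

        # Upper module: ordinary forward/backward with the task loss.
        h_in = h.detach().requires_grad_(True)
        loss = mse(layer2(h_in), y)
        opt2.zero_grad()
        loss.backward()
        opt2.step()

        # Train the SG module to regress the true gradient that has now arrived.
        true_grad = h_in.grad.detach()
        sg_loss = mse(sg_module(h.detach()), true_grad)
        opt_sg.zero_grad()
        sg_loss.backward()
        opt_sg.step()

In the original formulation the synthetic-gradient predictor can also condition on side information such as the label, and decoupled modules can run fully asynchronously; the sketch above keeps everything in a single loop for readability.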
35 Citations
  • Decoupled Neural Interfaces using Synthetic Gradients
  • Material for Decoupled Neural Interfaces using Synthetic Gradients
  • Benchmarking Decoupled Neural Interfaces with Synthetic Gradients
  • Decoupled Greedy Learning of CNNs
  • Learning Without Feedback: Fixed Random Learning Signals Allow for Feedforward Training of Deep Neural Networks
  • Learning to solve the credit assignment problem
  • Fast & Slow Learning: Incorporating Synthetic Gradients in Neural Memory Controllers
  • Deep Supervised Learning Using Local Errors
  • Decoupled Parallel Backpropagation with Convergence Guarantee

References

Showing 1-10 of 20 references
  • Decoupled Neural Interfaces using Synthetic Gradients
  • Towards Biologically Plausible Deep Learning
  • Toward an Integration of Deep Learning and Neuroscience
  • Learning in the machine: Random backpropagation and the deep learning channel
  • Direct Feedback Alignment Provides Learning in Deep Neural Networks
  • Learning in the Machine: Random Backpropagation and the Learning Channel
  • Adam: A Method for Stochastic Optimization
  • Random synaptic feedback weights support error backpropagation for deep learning
  • Understanding intermediate layers using linear classifier probes