Corpus ID: 16861557

Federated Learning of Deep Networks using Model Averaging

@article{McMahan2016FederatedLO,
  title={Federated Learning of Deep Networks using Model Averaging},
  author={H. Brendan McMahan and Eider Moore and Daniel Ramage and Blaise Ag{\"u}era y Arcas},
  journal={ArXiv},
  year={2016},
  volume={abs/1602.05629}
}
Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. [...] We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise.
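The core of the approach described in the abstract is averaging locally trained model weights on a central server. A minimal sketch of that averaging step is shown below; this is an illustration under assumptions, not the paper's implementation, and the function name `federated_average` and the example numbers are hypothetical.

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of client weight vectors.

    Each client k contributes with weight n_k / n, where n_k is the
    number of local training examples on client k and n is the total
    across participating clients (a sketch of FedAvg-style aggregation).
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    avg = [0.0] * dim
    for w, n_k in zip(client_weights, client_sizes):
        for i in range(dim):
            avg[i] += (n_k / total) * w[i]
    return avg

# Example: two clients with unbalanced data (3 vs. 1 local examples).
w_global = federated_average([[1.0, 2.0], [5.0, 6.0]], [3, 1])
# (3/4)*[1, 2] + (1/4)*[5, 6] = [2.0, 3.0]
```

Weighting by local dataset size is what lets the aggregate tolerate the unbalanced data distributions the abstract mentions: a client with few examples moves the global model proportionally less.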
271 Citations

  • Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning (401 citations; Highly Influenced)
  • Federated Learning: Strategies for Improving Communication Efficiency (1,172 citations)
  • Network Update Compression for Federated Learning
  • Fidel: Reconstructing Private Training Samples from Weight Updates in Federated Learning
  • Evaluating the Communication Efficiency in Federated Learning Algorithms (4 citations; Highly Influenced)
  • Decentralized Deep Learning with Arbitrary Communication Compression (49 citations)
  • Accelerating DNN Training in Wireless Federated Edge Learning Systems (34 citations)
  • Crowdlearning: Crowded Deep Learning with Data Privacy (4 citations)
  • Distributed generation of privacy preserving data with user customization (6 citations)
  • Clustered Federated Learning: Model-Agnostic Distributed Multi-Task Optimization under Privacy Constraints (44 citations)
