Cytoscape: a software environment for integrated models of biomolecular interaction networks.
Several case studies of Cytoscape plug-ins are surveyed, including a search for interaction pathways correlating with changes in gene expression, a study of protein complexes involved in cellular recovery from DNA damage, inference of a combined physical/functional interaction network for Halobacterium, and an interface to detailed stochastic/kinetic gene regulatory models.
Communication-Efficient Learning of Deep Networks from Decentralized Data
- H. B. McMahan, Eider Moore, D. Ramage, S. Hampson, B. A. Y. Arcas
- Computer Science, AISTATS
- 17 February 2016
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
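The "iterative model averaging" at the heart of this paper can be illustrated with a toy sketch: each client runs a few local SGD steps on its own data, and the server averages the resulting models weighted by client data size. This is an illustrative simplification (scalar linear model, synthetic data); the helper names `local_sgd` and `federated_average` are not from the paper.

```python
# Minimal sketch of iterative model averaging (FedAvg-style) for y ≈ w * x.

def local_sgd(w, data, lr=0.1, epochs=5):
    """A few epochs of SGD on one client's (x, y) pairs."""
    for x, y in data * epochs:
        grad = 2 * (w * x - y) * x   # d/dw of the squared error (w*x - y)^2
        w -= lr * grad
    return w

def federated_average(w, clients, rounds=10):
    """Each round: clients train locally; server averages, weighted by data size."""
    for _ in range(rounds):
        updates = [(local_sgd(w, data), len(data)) for data in clients]
        total = sum(n for _, n in updates)
        w = sum(wk * n for wk, n in updates) / total
    return w

# Two clients with unbalanced data sizes, both consistent with y = 3x.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(0.5, 1.5)]]
w_final = federated_average(0.0, clients)
```

Even with unbalanced clients, repeated rounds of local training plus weighted averaging drive the shared model toward the common optimum, which is the paper's core empirical observation.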
Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora
Labeled LDA is introduced, a topic model that constrains Latent Dirichlet Allocation by defining a one-to-one correspondence between LDA's latent topics and user tags that allows Labeled LDA to directly learn word-tag correspondences.
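The defining constraint of Labeled LDA — each document may only draw topics from its own observed tag set — can be shown in a deliberately tiny sketch. This omits the actual Gibbs sampling and word-distribution learning; the function name and data are illustrative only.

```python
import random

def sample_topic_assignments(doc_words, doc_tags, rng=None):
    """Assign each word a topic drawn ONLY from the document's tag set.

    In vanilla LDA the candidate set would be all K latent topics; Labeled
    LDA restricts it to the observed tags, so topics align with tags.
    """
    rng = rng or random.Random(0)
    allowed = sorted(doc_tags)
    return {w: rng.choice(allowed) for w in doc_words}

# A document tagged only "sports" can never use an off-label topic.
z = sample_topic_assignments(["goal", "match"], {"sports"})
```

Because every assignment is forced into the label set, the learned word-topic counts become word-tag counts, which is how the model attributes credit in multi-labeled corpora.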
Advances and Open Problems in Federated Learning
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
Practical Secure Aggregation for Privacy-Preserving Machine Learning
This protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner, and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network.
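The cancellation trick underlying secure aggregation can be sketched in a few lines: each pair of users agrees on a random mask that one adds and the other subtracts, so individual vectors look random but the masks vanish in the server's sum. This toy version omits the protocol's key agreement, secret sharing, and dropout recovery, and the function name is illustrative.

```python
import random

def masked_updates(vectors, seed=0):
    """Apply pairwise random masks: user i adds, user j subtracts the same mask."""
    rng = random.Random(seed)
    n = len(vectors)
    masked = [list(v) for v in vectors]
    for i in range(n):
        for j in range(i + 1, n):
            mask = [rng.uniform(-1.0, 1.0) for _ in vectors[0]]
            for k, m in enumerate(mask):
                masked[i][k] += m
                masked[j][k] -= m
    return masked

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
masked = masked_updates(updates)
# The server only ever sums the masked vectors; the pairwise masks cancel.
total = [sum(col) for col in zip(*masked)]
```

The server learns the sum (here the elementwise total of the three updates) without seeing any single user's unmasked vector, which is exactly the property needed to aggregate model updates in federated learning.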
Towards Federated Learning at Scale: System Design
A scalable production system for Federated Learning on mobile devices, built on TensorFlow, is described, including the resulting high-level design and a sketch of some of the challenges and their solutions.

Learning Differentially Private Recurrent Language Models
This work builds on recent advances in training deep networks on user-partitioned data and in privacy accounting for stochastic gradient descent, adding user-level privacy protection to the federated averaging algorithm, which makes "large step" updates from user-level data.
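The standard recipe for user-level privacy on averaged updates is to clip each user's model delta to a fixed norm bound and add Gaussian noise calibrated to that bound. The sketch below assumes that mechanism; parameter names such as `clip` and `noise_mult` are illustrative, not the paper's API.

```python
import math
import random

def clip_update(delta, clip):
    """Scale a user's update so its L2 norm is at most `clip`."""
    norm = math.sqrt(sum(d * d for d in delta))
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    return [d * scale for d in delta]

def dp_average(deltas, clip=1.0, noise_mult=1.0, rng=None):
    """Average clipped user updates and add Gaussian noise scaled to `clip`.

    Clipping bounds any single user's influence, so the added noise masks
    each individual's contribution (user-level, not example-level, privacy).
    """
    rng = rng or random.Random(0)
    n = len(deltas)
    clipped = [clip_update(d, clip) for d in deltas]
    avg = [sum(col) / n for col in zip(*clipped)]
    sigma = noise_mult * clip / n
    return [a + rng.gauss(0.0, sigma) for a in avg]
```

Because the noise is scaled to the clip bound rather than to the raw gradients, the privacy cost can be accounted per round regardless of what any individual user's data looks like.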
Federated Learning of Deep Networks using Model Averaging
This work presents a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise, and allows high-quality models to be trained in relatively few rounds of communication.
Federated Optimization: Distributed Machine Learning for On-Device Intelligence
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number…
Federated Learning for Mobile Keyboard Prediction
The federation algorithm, which enables training on a higher-quality dataset for this use case, is shown to achieve better prediction recall, and the work demonstrates the feasibility and benefit of training language models on client devices without exporting sensitive user data to servers.