Asynchronous Multi-task Learning
@article{Baytas2016AsynchronousML,
  title   = {Asynchronous Multi-task Learning},
  author  = {Inci M. Baytas and Ming Yan and Anil K. Jain and Jiayu Zhou},
  journal = {2016 IEEE 16th International Conference on Data Mining (ICDM)},
  year    = {2016},
  pages   = {11-20}
}
Many real-world machine learning applications involve several inter-related learning tasks. For example, in the healthcare domain, we need to learn a predictive model of a certain disease for many hospitals. The model for each hospital may be different because of inherent differences in the distributions of the patient populations. However, the models are also closely related because the learning tasks model the same disease. By simultaneously learning all the…
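To make the setting concrete, work in this area typically minimizes a jointly regularized objective over all task models. The following is a generic sketch of that form, not the paper's exact formulation; the loss $\ell$, penalty $\Omega$, and all symbols are illustrative:

$$\min_{W=[\mathbf{w}_1,\dots,\mathbf{w}_T]}\;\sum_{t=1}^{T}\frac{1}{n_t}\sum_{i=1}^{n_t}\ell\big(\mathbf{w}_t^\top\mathbf{x}_{t,i},\,y_{t,i}\big)\;+\;\lambda\,\Omega(W)$$

Here task $t$ (e.g., one hospital) holds $n_t$ samples $(\mathbf{x}_{t,i}, y_{t,i})$, and the coupling penalty $\Omega$ (e.g., $\|W\|_{2,1}$ for shared feature selection) is what ties the otherwise independent tasks together.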
34 Citations
Distributed Multi-Task Relationship Learning
- KDD, 2017
This paper proposes a distributed multi-task learning framework that alternately learns the predictive model for each task and the relationships between tasks in the parameter server paradigm, and a communication-efficient primal-dual distributed optimization algorithm that solves the dual problem by carefully designing local subproblems to make it decomposable.
Private Multi-Task Learning: Formulation and Applications to Federated Learning
- ArXiv, 2021
This work formalizes notions of client-level privacy for MTL via joint differential privacy (JDP), a relaxation of differential privacy for mechanism design and distributed optimization, and proposes an algorithm for mean-regularized MTL, an objective commonly used for applications in personalized federated learning, subject to JDP.
Model-Protected Multi-Task Learning
- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022
A privacy-preserving MTL framework that prevents information in each model from leaking to other models, based on a perturbation of the covariance matrix of the model matrix, with a guarantee of not underperforming single-task learning (STL) methods.
Privacy-Preserving Distributed Multi-Task Learning with Asynchronous Updates
- KDD, 2017
A privacy-preserving proximal gradient algorithm, which asynchronously updates models of the learning tasks, is introduced to solve a general class of MTL formulations.
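Since this entry and the paper itself revolve around asynchronous proximal gradient updates, here is a minimal single-process simulation of that pattern. It is a sketch only: the least-squares task losses, the l2,1 coupling penalty, the random task-arrival order standing in for true asynchrony, and all names (prox_l21, eta, lam) are illustrative choices, not the published algorithm.

```python
import numpy as np

def prox_l21(W, thresh):
    """Row-wise soft-thresholding: proximal operator of thresh * ||W||_{2,1}."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thresh / np.maximum(norms, 1e-12))
    return W * scale

rng = np.random.default_rng(0)
d, T, n = 20, 5, 50                      # features, tasks, samples per task
X = [rng.standard_normal((n, d)) for _ in range(T)]
y = [X[t] @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n) for t in range(T)]

W = np.zeros((d, T))                     # column t is task t's model
eta, lam = 1e-2, 0.1                     # step size and regularization (illustrative)
for _ in range(500):
    t = int(rng.integers(T))             # a random task "arrives" (asynchrony stand-in)
    grad = X[t].T @ (X[t] @ W[:, t] - y[t]) / n   # gradient of task t's local loss
    W[:, t] -= eta * grad                # gradient step touches only task t's column
    W = prox_l21(W, eta * lam)           # shared prox step couples tasks via ||W||_{2,1}
```

Each arrival updates only its own column, while the shared proximal step zeroes out feature rows that no task supports; real asynchronous variants must additionally handle stale gradients, which this sketch ignores.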
Distributed Primal-Dual Optimization for Online Multi-Task Learning
- AAAI, 2020
An adaptive primal-dual algorithm that captures task-specific noise in adversarial learning and carries out a projection-free update with runtime efficiency; it is well suited to decentralized, periodically connected tasks and allows energy-starved or bandwidth-constrained tasks to postpone their updates.
SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks
- ArXiv, 2021
SpreadGNN is proposed, a novel multi-task federated training framework that is the first in the literature to operate with partial labels and without a central server; its efficacy is demonstrated on a variety of non-I.I.D. distributed graph-level molecular property prediction datasets with partial labels.
Federated Multi-task Graph Learning
- ACM Trans. Intell. Syst. Technol., 2022
This work analyzes convergence behavior under adaptive stepsize selection and a non-convex assumption, and proposes a federated multi-task graph learning (FMTGL) framework to solve the problem in a privacy-preserving and scalable scheme.
Communication-Efficient and Decentralized Multi-Task Boosting while Learning the Collaboration Graph
- ArXiv, 2019
The approach trains nonlinear classifiers in a multi-task boosting manner, without exchanging personal data and with low communication costs, jointly learning the personalized models and a sparse collaboration graph through an alternating optimization procedure.
Heterogeneous Semi-Asynchronous Federated Learning in Internet of Things: A Multi-Armed Bandit Approach
- IEEE Transactions on Emerging Topics in Computational Intelligence, 2022
A novel Heterogeneous Semi-Asynchronous FL mechanism, named HSA_FL, is proposed; it uses a multi-armed bandit approach to identify the heterogeneous communication and computation capabilities of clients, along with two aggregation rules, named adaptive update and fixed adaptive.
SpreadGNN: Decentralized Multi-Task Federated Learning for Graph Neural Networks on Molecular Data
- AAAI, 2022
This work proposes SpreadGNN, a novel multi-task federated training framework that is the first in the literature capable of operating with partial labels and without a central server.
References
Showing 1-10 of 32 references
Client–Server Multitask Learning From Distributed Datasets
- IEEE Transactions on Neural Networks, 2011
The proposed algorithmic framework, based on regularization and kernel methods, uses a suitable class of “mixed effect” kernels and is illustrated through a simulated recommendation system and an experiment involving pharmacological data coming from a multicentric clinical trial.
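As a pointer to what a "mixed effect" kernel looks like, the following is a common form from the multi-task kernel literature, offered as a sketch rather than this paper's exact class: a shared component plus a task-specific component,

$$k\big((\mathbf{x}, s), (\mathbf{x}', t)\big) = k_{\text{common}}(\mathbf{x}, \mathbf{x}') + \delta_{st}\, k_{\text{task}}(\mathbf{x}, \mathbf{x}')$$

where $\delta_{st}$ is 1 when the two examples belong to the same task and 0 otherwise, so predictions mix a population-level effect with a per-task correction.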
Heterogeneous multitask learning with joint sparsity constraints
- NIPS, 2009
This paper considers the problem of learning multiple related tasks of predicting both continuous and discrete outputs from a common set of input variables that lie in a high-dimensional feature space, and formulates this problem as a combination of linear regressions and logistic regressions.
Multitask Learning
- Machine Learning, 2004
Prior work on MTL is reviewed, new evidence that MTL in backprop nets discovers task relatedness without the need for supervisory signals is presented, and new results for MTL with k-nearest neighbor and kernel regression are presented.
Clustered Multi-Task Learning Via Alternating Structure Optimization
- NIPS, 2011
The equivalence relationship between ASO and CMTL is shown, providing significant new insights into ASO as well as their inherent relationship, and the proposed convex CMTL formulation is significantly more efficient, especially for high-dimensional data.
Collaborating between Local and Global Learning for Distributed Online Multiple Tasks
- CIKM, 2015
Experiments on 4 applications show that the proposed global learning method can significantly improve local learning and is effective, since it can share knowledge among distributed tasks and obtain better models than learning them separately.
Regularized multi-task learning
- KDD, 2004
An approach to multi-task learning is presented, based on the minimization of regularization functionals similar to those, such as the one for Support Vector Machines, that have been successfully used in the past for single-task learning.
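A representative instance of such a functional, written here as a sketch with squared deviation from the task mean (the trade-off parameter $\lambda$ and the per-task losses $L_t$ are generic placeholders), regularizes every task model toward the average model:

$$\min_{\mathbf{w}_1,\dots,\mathbf{w}_T}\;\sum_{t=1}^{T} L_t(\mathbf{w}_t) \;+\; \lambda \sum_{t=1}^{T}\Big\|\mathbf{w}_t - \frac{1}{T}\sum_{s=1}^{T}\mathbf{w}_s\Big\|_2^2$$

This mean-regularized form is the same kind of objective referenced in the JDP citation above.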
Convex multi-task feature learning
- Machine Learning, 2007
It is proved that the method for learning sparse representations shared across multiple tasks is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution.
Multi-Task Feature Learning Via Efficient l2,1-Norm Minimization
- UAI, 2009
This paper proposes to accelerate the computation of the l2,1-norm regularized regression model by reformulating it as two equivalent smooth convex optimization problems, which are then solved via Nesterov's method, an optimal first-order black-box method for smooth convex optimization.
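For reference, the l2,1-norm of a model matrix $W \in \mathbb{R}^{d \times T}$ sums the Euclidean norms of its rows, which drives entire rows (features) to zero across all tasks; its proximal operator is row-wise soft-thresholding. Both are standard facts, stated here as a sketch in generic notation:

$$\|W\|_{2,1} = \sum_{i=1}^{d} \|\mathbf{w}^i\|_2, \qquad \operatorname{prox}_{\tau\|\cdot\|_{2,1}}(W)^i = \max\!\Big(0,\, 1 - \frac{\tau}{\|\mathbf{w}^i\|_2}\Big)\,\mathbf{w}^i$$

where $\mathbf{w}^i$ denotes the $i$-th row of $W$. This is the prox_l21 step used in the Python sketch above.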
Distributed optimization for multi-task learning via nuclear-norm approximation
- 2015
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Found. Trends Mach. Learn., 2011
It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
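For the generic problem $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$, the scaled-form ADMM iterations reviewed in that monograph alternate two smaller minimizations with a running dual accumulator $u$:

$$\begin{aligned}
x^{k+1} &= \arg\min_x \; f(x) + \tfrac{\rho}{2}\|Ax + Bz^k - c + u^k\|_2^2 \\
z^{k+1} &= \arg\min_z \; g(z) + \tfrac{\rho}{2}\|Ax^{k+1} + Bz - c + u^k\|_2^2 \\
u^{k+1} &= u^k + Ax^{k+1} + Bz^{k+1} - c
\end{aligned}$$

With a per-task splitting, the $x$-update decomposes across tasks and can run in parallel, which is what makes ADMM attractive for the distributed MTL settings above.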