Asynchronous Multi-task Learning

@article{Baytas2016AsynchronousML,
  title={Asynchronous Multi-task Learning},
  author={Inci M. Baytas and Ming Yan and Anil K. Jain and Jiayu Zhou},
  journal={2016 IEEE 16th International Conference on Data Mining (ICDM)},
  year={2016},
  pages={11-20}
}
Many real-world machine learning applications involve several learning tasks which are inter-related. For example, in the healthcare domain, we need to learn a predictive model of a certain disease for many hospitals. The models for each hospital may be different because of the inherent differences in the distributions of the patient populations. However, the models are also closely related because of the nature of the learning tasks modeling the same disease. By simultaneously learning all the…
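
The joint learning the abstract describes is usually formalized as a regularized multi-task objective over all task models. A generic sketch in LaTeX (the symbols W, f_t, and \Omega are illustrative, not necessarily the exact formulation in the paper):

\min_{W = [w_1, \dots, w_T]} \; \sum_{t=1}^{T} f_t(w_t) + \lambda\, \Omega(W)

where f_t is the loss of task t (e.g., one hospital's predictive model) and the coupling term \Omega(W) encourages related tasks to share information.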

Distributed Multi-Task Relationship Learning

This paper proposes a distributed multi-task learning framework that simultaneously learns predictive models for each task as well as the relationships between tasks, alternating between the two in the parameter server paradigm, and proposes a communication-efficient primal-dual distributed optimization algorithm that solves the dual problem by carefully designing local subproblems to make it decomposable.

Private Multi-Task Learning: Formulation and Applications to Federated Learning

This work formalizes notions of client-level privacy for MTL via joint differential privacy (JDP), a relaxation of differential privacy for mechanism design and distributed optimization, and proposes an algorithm for mean-regularized MTL, an objective commonly used for applications in personalized federated learning, subject to JDP.
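
The mean-regularized MTL objective mentioned above commonly takes the following form (a standard sketch; the notation is illustrative):

\min_{w_1, \dots, w_T} \; \sum_{t=1}^{T} f_t(w_t) + \frac{\lambda}{2} \sum_{t=1}^{T} \lVert w_t - \bar{w} \rVert_2^2, \qquad \bar{w} = \frac{1}{T} \sum_{t=1}^{T} w_t

which pulls each client's personalized model toward the average model while still letting it fit its own data.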

Model-Protected Multi-Task Learning

A privacy-preserving MTL framework is proposed to prevent information from each model from leaking to other models, based on a perturbation of the covariance matrix of the model matrix, and is guaranteed not to underperform compared with single-task learning (STL) methods.

Privacy-Preserving Distributed Multi-Task Learning with Asynchronous Updates

A privacy-preserving proximal gradient algorithm, which asynchronously updates models of the learning tasks, is introduced to solve a general class of MTL formulations.
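
In its basic (non-private, synchronous) form, a proximal gradient update for one task's model is a gradient step on the smooth loss followed by the proximal operator of the regularizer. A minimal Python sketch, assuming a least-squares loss and an L1 regularizer as a stand-in prox (the function names and the choice of regularizer are illustrative, not the paper's exact algorithm):

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (element-wise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient_step(w, grad_fn, step, lam):
    # Gradient step on the smooth loss, then the prox of the regularizer.
    return soft_threshold(w - step * grad_fn(w), step * lam)

# Toy example: one task with a least-squares loss.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 10)), rng.normal(size=50)
grad = lambda w: X.T @ (X @ w - y) / len(y)
w = np.zeros(10)
for _ in range(100):
    w = proximal_gradient_step(w, grad, step=0.01, lam=0.1)

In an asynchronous variant, each task applies such updates on its own schedule against the latest shared state rather than in lock-step with the other tasks.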

Distributed Primal-Dual Optimization for Online Multi-Task Learning

An adaptive primal-dual algorithm is proposed that not only captures task-specific noise in adversarial learning but also carries out a projection-free update with runtime efficiency; it is well suited to decentralized, periodically connected tasks and allows energy-starved or bandwidth-constrained tasks to postpone their updates.

SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks

SpreadGNN is proposed, a novel multi-task federated training framework that, for the first time in the literature, can operate in the presence of partial labels and the absence of a central server; its efficacy is demonstrated on a variety of non-I.I.D. distributed graph-level molecular property prediction datasets with partial labels.

Federated Multi-task Graph Learning

This work proposes a federated multi-task graph learning (FMTGL) framework that solves the problem within a privacy-preserving and scalable scheme, and analyzes its convergence behavior under adaptive step-size selection and a non-convex assumption.

Communication-Efficient and Decentralized Multi-Task Boosting while Learning the Collaboration Graph

The approach trains nonlinear classifiers in a multi-task boosting manner without exchanging personal data and with low communication costs, jointly learning the personalized models and a sparse collaboration graph through an alternating optimization procedure.

Heterogeneous Semi-Asynchronous Federated Learning in Internet of Things: A Multi-Armed Bandit Approach

A novel Heterogeneous Semi-Asynchronous FL mechanism, named HSA_FL, uses a Multi-Armed Bandit approach to identify the heterogeneous communication and computation capabilities of clients, and two aggregation rules, named adaptive update and fixed adaptive, are proposed.

SpreadGNN: Decentralized Multi-Task Federated Learning for Graph Neural Networks on Molecular Data

This work proposes SpreadGNN, a novel multi-task federated training framework that, for the first time in the literature, can operate in the presence of partial labels and the absence of a central server.

References

Showing 1-10 of 32 references

Client–Server Multitask Learning From Distributed Datasets

The proposed algorithmic framework, based on regularization and kernel methods, uses a suitable class of “mixed effect” kernels and is illustrated through a simulated recommendation system and an experiment involving pharmacological data coming from a multicentric clinical trial.

Heterogeneous multitask learning with joint sparsity constraints

This paper considers the problem of learning multiple related tasks of predicting both continuous and discrete outputs from a common set of input variables that lie in a high-dimensional feature space, and formulates this problem as a combination of linear regressions and logistic regressions.

Multitask Learning

Prior work on MTL is reviewed, new evidence is presented that MTL in backprop nets discovers task relatedness without the need for supervisory signals, and new results are given for MTL with k-nearest neighbor and kernel regression.

Clustered Multi-Task Learning Via Alternating Structure Optimization

The equivalence relationship between ASO and CMTL is shown, providing significant new insights into ASO as well as their inherent relationship, and the proposed convex CMTL formulation is significantly more efficient, especially for high-dimensional data.

Collaborating between Local and Global Learning for Distributed Online Multiple Tasks

Experiments on four applications show that the proposed global learning method significantly improves local learning and is effective, since it shares knowledge among distributed tasks and obtains better models than learning them separately.

Regularized multi-task learning

An approach to multi-task learning is presented, based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines, that have been successfully used in the past for single-task learning.

Convex multi-task feature learning

It is proved that the method for learning sparse representations shared across multiple tasks is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution.

Multi-Task Feature Learning Via Efficient l2,1-Norm Minimization

This paper proposes to accelerate the computation of the l2,1-norm regularized regression model by reformulating it as two equivalent smooth convex optimization problems, which are then solved via Nesterov's method, an optimal first-order black-box method for smooth convex optimization.
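
The core subroutine in such solvers is the proximal operator of the l2,1-norm, which is row-wise group soft-thresholding of the weight matrix. A minimal Python sketch (the function name and interface are illustrative):

import numpy as np

def prox_l21(W, tau):
    # Shrink each row of W toward zero by tau in Euclidean norm,
    # zeroing rows whose norm falls below tau.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return W * scale

Rows of W correspond to features shared across tasks, so the shrinkage selects a common subset of features for all tasks.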

Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers

It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
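
For reference, the scaled-form ADMM iterations for minimizing f(x) + g(z) subject to x = z are:

x^{k+1} = \arg\min_x \Big( f(x) + \tfrac{\rho}{2} \lVert x - z^k + u^k \rVert_2^2 \Big)
z^{k+1} = \arg\min_z \Big( g(z) + \tfrac{\rho}{2} \lVert x^{k+1} - z + u^k \rVert_2^2 \Big)
u^{k+1} = u^k + x^{k+1} - z^{k+1}

In multi-task settings the x-update typically splits across tasks or machines, which is what makes the method attractive for distributed optimization.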