Distributed Multi-Task Relationship Learning

@inproceedings{Liu2017DistributedMR,
  title={Distributed Multi-Task Relationship Learning},
  author={Sulin Liu and Sinno Jialin Pan and Qirong Ho},
  booktitle={Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
  year={2017}
}
Multi-task learning aims to learn multiple tasks jointly by exploiting their relatedness to improve the generalization performance for each task. Traditionally, to perform multi-task learning, one needs to centralize data from all the tasks onto a single machine. However, in many real-world applications, data of different tasks may be geo-distributed over different local machines. Due to the heavy communication caused by transmitting the data and the issues of data privacy and security, it is…
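To make the setting concrete, here is a minimal sketch (my own illustration, not the algorithm proposed in the paper): each machine holds the data of a single task, runs local gradient steps, and only the per-task weight vectors are exchanged and softly pulled toward their mean, so no raw data ever leaves a machine. All names and the simple mean-coupling are illustrative assumptions.

# Illustrative sketch (not the paper's algorithm): each machine keeps its own
# task's data and only sends its d-dimensional weight vector to a coordinator,
# which applies a simple "pull towards the mean" coupling across tasks.
import numpy as np

def local_gradient(X, y, w, lam=0.1):
    """Gradient of a ridge-regression loss on one task's local data."""
    n = X.shape[0]
    return X.T @ (X @ w - y) / n + lam * w

def distributed_mtl(tasks, rounds=100, lr=0.1, mu=0.5):
    """tasks: list of (X_t, y_t) pairs, each held by a different machine."""
    d = tasks[0][0].shape[1]
    W = np.zeros((len(tasks), d))          # one weight vector per task
    for _ in range(rounds):
        # Each machine updates with its own data (parallel in a real system).
        for t, (X, y) in enumerate(tasks):
            W[t] -= lr * local_gradient(X, y, W[t])
        # Coordinator couples the tasks using only the exchanged parameters.
        W_mean = W.mean(axis=0)
        W = (1 - mu * lr) * W + mu * lr * W_mean
    return W

# Tiny usage example with two synthetic, related tasks.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
tasks = []
for _ in range(2):
    X = rng.normal(size=(50, 5))
    tasks.append((X, X @ (w_true + 0.1 * rng.normal(size=5))))
W = distributed_mtl(tasks)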

Citations

Differentially Private Distributed Multi-Task Relationship Learning

A distributed multi-task relationship learning approach under differential privacy, called DRUPE, in which privacy protection is achieved by perturbing the gradients at each task node, together with a task-relationship calibration method that uses a least-squares approximation algorithm to correct the inaccurate pairwise relationships between tasks.
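As a rough illustration of the gradient-perturbation idea, a generic clip-and-add-Gaussian-noise step looks like the sketch below; the calibration shown is the standard Gaussian mechanism with an assumed clipping bound, not necessarily the exact scheme used in that paper.

# Sketch of gradient perturbation for differential privacy: clip the local
# gradient to bound its sensitivity, then add Gaussian noise scaled to the
# clipping bound. Generic Gaussian mechanism, parameter names are my own.
import numpy as np

def perturbed_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))   # bound sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise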

Distributed Primal-Dual Optimization for Online Multi-Task Learning

An adaptive primal-dual algorithm that captures task-specific noise in adversarial learning and carries out a runtime-efficient, projection-free update, making it well suited to decentralized, periodically connected tasks and allowing energy-starved or bandwidth-constrained tasks to postpone their updates.

Decentralized Multi-Task Learning Based on Extreme Learning Machines

The ELM-based MTL problem is first formulated in the centralized setting, and the DMTL-ELM algorithm, a hybrid Jacobian and Gauss-Seidel proximal multi-block alternating direction method of multipliers (ADMM), is proposed to solve it.
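The hybrid Jacobian and Gauss-Seidel proximal multi-block ADMM is considerably more involved; the sketch below is only a plain consensus-ADMM template with a ridge loss (function names and the single consensus variable are my own simplification) that shows the shape of such block-wise decentralized updates.

# Vanilla consensus ADMM for sum_t f_t(w_t) subject to w_t = z (illustrative
# only; not the hybrid Jacobian/Gauss-Seidel proximal multi-block variant).
import numpy as np

def consensus_admm(tasks, rho=1.0, iters=50, lam=0.1):
    """tasks: list of (X_t, y_t); each w_t update uses only local data."""
    d = tasks[0][0].shape[1]
    T = len(tasks)
    W = np.zeros((T, d))
    U = np.zeros((T, d))          # scaled dual variables
    z = np.zeros(d)               # consensus variable
    for _ in range(iters):
        for t, (X, y) in enumerate(tasks):
            # Local step: ridge loss plus the augmented-Lagrangian quadratic.
            A = X.T @ X / len(y) + (lam + rho) * np.eye(d)
            b = X.T @ y / len(y) + rho * (z - U[t])
            W[t] = np.linalg.solve(A, b)
        z = (W + U).mean(axis=0)  # consensus (averaging) step
        U += W - z                # dual update
    return W, z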

Communication-Efficient Coded Distributed Multi-Task Learning

A novel scheme named coded distributed multi-task learning is proposed to reduce the communication loads in both the uplink and the downlink; it utilizes a particular repetitive placement of, and computation on, the publicly shared dataset so that coded multicasting opportunities can be created at the server and the users.

A Survey on Multi-Task Learning

A survey of MTL is given that classifies different MTL algorithms into several categories, including the feature learning, low-rank, task clustering, task relation learning, and decomposition approaches, and then discusses the characteristics of each approach.

A Survey on Multi-Task Learning

  • Yu Zhang, Qiang Yang
  • Computer Science
    IEEE Transactions on Knowledge and Data Engineering
  • 2022
A survey of MTL from the perspective of algorithmic modeling, applications, and theoretical analyses, which gives a definition of MTL and classifies different MTL algorithms into five categories: the feature learning, low-rank, task clustering, task relation learning, and decomposition approaches.

Learning Task Relatedness in Multi-Task Learning for Images in Context

This work introduces Selective Sharing, a method that learns inter-task relatedness from secondary latent features during training and can automatically group tasks, allowing them to share knowledge in a mutually beneficial way.

Federated Multi-task Graph Learning

This work proposes a federated multi-task graph learning (FMTGL) framework to solve the problem within a privacy-preserving and scalable scheme, and analyzes its convergence behavior under adaptive stepsize selection and a non-convexity assumption.

Randomized Neural Networks Based Decentralized Multi-Task Learning via Hybrid Multi-Block ADMM

This work proposes the DMTL-RSF algorithm, a hybrid Jacobian and Gauss-Seidel proximal multi-block alternating direction method of multipliers (ADMM), demonstrates the convergence of the presented algorithms, and shows that they can outperform existing MTL methods.

Q-Learning Scheduler for Multi Task Learning Through the use of Histogram of Task Uncertainty

A deep Q-Learning Scheduler (QLS) is proposed that monitors the state of the tasks and the shared features using a novel histogram of task uncertainty, and through trial-and-error, learns an optimal policy for task scheduling.

References


Clustered Multi-Task Learning: A Convex Formulation

A new spectral norm is designed that encodes the a priori assumption that tasks are clustered into groups, unknown beforehand, and that tasks within a group have similar weight vectors, resulting in a new convex optimization formulation for multi-task learning.

A Convex Formulation for Learning Task Relationships in Multi-Task Learning

This paper proposes a regularization formulation for learning the relationships between tasks in multi-task learning, called MTRL, which can also describe negative task correlation and identify outlier tasks based on the same underlying principle.
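For context, the MTRL formulation couples tasks through a learned task covariance matrix; up to constants and the choice of loss, it reads roughly as follows (notation mine):

\min_{W,\,b,\,\Omega}\ \sum_{i=1}^{m} \frac{1}{n_i} \sum_{j=1}^{n_i} \ell\big(y_{ij},\, \mathbf{w}_i^{\top}\mathbf{x}_{ij} + b_i\big)
  + \frac{\lambda_1}{2}\,\operatorname{tr}\!\big(W W^{\top}\big)
  + \frac{\lambda_2}{2}\,\operatorname{tr}\!\big(W \Omega^{-1} W^{\top}\big)
\quad \text{s.t.}\quad \Omega \succeq 0,\ \operatorname{tr}(\Omega) = 1,

where the columns of W are the per-task weight vectors \mathbf{w}_i and \Omega acts as a task covariance (relationship) matrix; negative entries of \Omega are what allow negative task correlations and outlier tasks to be captured.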

Asynchronous Multi-task Learning

This paper proposes a principled MTL framework for distributed and asynchronous optimization to address these challenges, and shows that many regularized MTL formulations can benefit from this framework, including low-rank MTL for shared-subspace learning.

Distributed Multi-Task Learning with Shared Representation

We study the problem of distributed multi-task learning with shared representation, where each machine aims to learn a separate but related task in an unknown shared low-dimensional subspace, i.e.…

Multi-Task Learning via Conic Programming

This paper proposes a novel MTL algorithm that makes use of a task network, which describes the relation structure among tasks and controls the relatedness of the tasks locally, so that all pairs of related tasks are guaranteed to have similar solutions.

Convex multi-task feature learning

It is proved that the method for learning sparse representations shared across multiple tasks is equivalent to solving a convex optimization problem, for which an iterative algorithm that converges to an optimal solution is given.
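The equivalence hinges on a variational form of the trace norm; schematically, and with notation of my own, the coupled problem is:

\min_{W,\,D}\ \sum_{t=1}^{T}\sum_{i=1}^{n_t} \ell\big(y_{ti},\, \mathbf{w}_t^{\top}\mathbf{x}_{ti}\big) + \gamma\,\operatorname{tr}\!\big(W^{\top} D^{-1} W\big)
\quad \text{s.t.}\quad D \succeq 0,\ \operatorname{tr}(D) \le 1,

and minimizing over D in closed form turns the penalty into the squared trace norm \gamma\,\|W\|_{*}^{2}, a convex function of W, which is why an equivalent convex problem with a provably convergent iterative solver exists.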

Learning Multiple Tasks with Kernel Methods

The experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning particularly when there are many related tasks but few data per task.

Parallel Multi-task Learning

  • Yu Zhang
  • Computer Science
    2015 IEEE International Conference on Data Mining
  • 2015
Experiments on several benchmark datasets show that the convergence of the proposed algorithms is as fast as that of SMO-style algorithms and that the parallel design can speed up the computation.

Multitask Learning

Prior work on MTL is reviewed, new evidence is presented that MTL in backprop nets discovers task relatedness without the need for supervisory signals, and new results are given for MTL with k-nearest neighbors and kernel regression.
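A minimal sketch of the hard parameter sharing that such backprop MTL relies on (layer sizes, the use of numpy, and all variable names are my own choices, not Caruana's original setup) is:

# Hard parameter sharing: one shared hidden layer, one linear output head per
# task, and a summed loss over all tasks. Forward pass only, for illustration.
import numpy as np

rng = np.random.default_rng(0)
d, h, n_tasks = 10, 16, 3
W_shared = rng.normal(scale=0.1, size=(d, h))        # shared representation
heads = [rng.normal(scale=0.1, size=(h, 1)) for _ in range(n_tasks)]

def forward(x):
    z = np.tanh(x @ W_shared)                         # shared hidden features
    return [z @ head for head in heads]               # one prediction per task

x = rng.normal(size=(4, d))
ys = [rng.normal(size=(4, 1)) for _ in range(n_tasks)]
preds = forward(x)
total_loss = sum(np.mean((p - y) ** 2) for p, y in zip(preds, ys))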

Regularized multi-task learning

An approach to multi-task learning is presented, based on the minimization of regularization functionals similar to those, such as the one for Support Vector Machines, that have been successfully used in the past for single-task learning.
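Up to constants, the formulation I recall from this line of work expresses each task weight vector as a shared part plus a task-specific offset (notation mine):

\min_{\mathbf{w}_0,\,\{\mathbf{v}_t\}}\ \sum_{t=1}^{T}\sum_{i=1}^{n_t} \ell\big(y_{ti},\, (\mathbf{w}_0 + \mathbf{v}_t)^{\top}\mathbf{x}_{ti}\big)
  + \frac{\lambda_1}{T}\sum_{t=1}^{T}\|\mathbf{v}_t\|^2 + \lambda_2\,\|\mathbf{w}_0\|^2,

so every task predictor \mathbf{w}_t = \mathbf{w}_0 + \mathbf{v}_t shares a common component \mathbf{w}_0, and the two regularizers trade off how far individual tasks may deviate from it.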