# Multitask Learning

@article{Caruana1997MultitaskL, title={Multitask Learning}, author={Rich Caruana}, journal={Machine Learning}, year={1997}, volume={28}, pages={41-75} }

Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. […] Key result: because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
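The mechanism the abstract describes, training signals from related tasks shaping a shared representation, is commonly realized as a hard-parameter-sharing network: a shared trunk plus one head per task. The following is a minimal numpy sketch of that idea, not Caruana's original setup; the data, sizes, and learning rate are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one input batch, two related regression targets (tasks).
X = rng.normal(size=(32, 8))
y_a = X @ rng.normal(size=(8, 1))            # task A targets
y_b = X @ rng.normal(size=(8, 1))            # task B targets

# Shared hidden layer plus one linear head per task.
W_shared = rng.normal(scale=0.1, size=(8, 16))
W_a = rng.normal(scale=0.1, size=(16, 1))
W_b = rng.normal(scale=0.1, size=(16, 1))

def forward(X):
    h = np.tanh(X @ W_shared)                # shared representation
    return h, h @ W_a, h @ W_b

lr = 0.01
h, pred_a, pred_b = forward(X)
err_a, err_b = pred_a - y_a, pred_b - y_b
loss = (err_a ** 2).mean() + (err_b ** 2).mean()   # summed task losses

# Backprop: the shared weights receive gradient from BOTH tasks, which
# is the inductive-transfer mechanism the abstract describes.
g_a = 2 * err_a / len(X)
g_b = 2 * err_b / len(X)
grad_h = g_a @ W_a.T + g_b @ W_b.T
grad_shared = X.T @ (grad_h * (1 - h ** 2))  # tanh' = 1 - tanh^2
W_a -= lr * (h.T @ g_a)
W_b -= lr * (h.T @ g_b)
W_shared -= lr * grad_shared
```

Because `grad_shared` sums contributions from both task heads, each task's error signal biases the representation the other task learns from.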

## 574 Citations

Inferring latent task structure for Multitask Learning by Multiple Kernel Learning

- Computer Science, Biology · BMC Bioinformatics
- 2010

A novel approach to Multitask Learning that is capable of learning task similarity along with the classifiers and is able to identify task similarities in absence of such prior information is presented.

Multitask learning over graphs

- Computer Science · ArXiv
- 2020

The article shows how cooperation steers the network limiting point, how different cooperation rules promote different task-relatedness models, and how and when cooperation over multitask networks outperforms non-cooperative strategies.

A Survey on Multi-Task Learning

- Computer Science · ArXiv
- 2017

A survey of MTL is given that classifies MTL algorithms into several categories, including the feature learning, low-rank, task clustering, task relation learning, and decomposition approaches, and then discusses the characteristics of each approach.

Multitask learning over shared subspaces

- Computer Science, Psychology · bioRxiv
- 2020

It is proposed that the concept of a shared subspace provides a useful framework for the experimental study of human multitask and transfer learning; the roles of subspaces are highlighted, showing how they can act as a learning boost when shared and be detrimental when not.

A Regularization Approach to Learning Task Relationships in Multitask Learning

- Computer Science · ACM Trans. Knowl. Discov. Data
- 2013

A regularization approach to learning the relationships between tasks in multitask learning is proposed that can also describe negative task correlations and identify outlier tasks based on the same underlying principle.
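One common formulation of this family of regularizers couples the per-task weight vectors through a task-covariance matrix Ω via a penalty of the form tr(W Ω⁻¹ Wᵀ), where Ω then admits a closed-form update under a unit-trace constraint. The numpy sketch below is my own toy illustration of that formulation, not the paper's exact algorithm; negative off-diagonal entries of Ω would indicate negatively correlated tasks:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 5, 3                     # features, tasks
W = rng.normal(size=(d, m))     # column t = weight vector of task t

# Closed-form task covariance under the unit-trace constraint:
# Omega = (W^T W)^{1/2} / tr((W^T W)^{1/2})
A = W.T @ W
vals, vecs = np.linalg.eigh(A)
sqrtA = vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T
Omega = sqrtA / np.trace(sqrtA)

# The regularizer tr(W Omega^{-1} W^T) couples the tasks through Omega;
# the sign of each off-diagonal entry of Omega encodes whether the
# corresponding pair of tasks is positively or negatively correlated.
penalty = np.trace(W @ np.linalg.inv(Omega) @ W.T)
```

At this Ω the penalty equals (tr((WᵀW)^{1/2}))², which is what makes the alternating minimization over W and Ω tractable.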

Representation Learning via Semi-Supervised Autoencoder for Multi-task Learning

- Computer Science · 2015 IEEE International Conference on Data Mining
- 2015

This paper proposes a feature representation learning framework that combines autoencoders, an effective way to learn good representations from large amounts of unlabeled data, with model-parameter regularization methods in a unified model for multi-task learning.

Navigating the Trade-Off between Multi-Task Learning and Learning to Multitask in Deep Neural Networks

- Computer Science · ArXiv
- 2020

This work builds on previous work involving shallow networks and simple task settings, which suggested a trade-off between multi-task learning and multitasking mediated by the use of shared versus separated representations, and shows that the same tension arises in deep networks.

Efficient Multitask Feature and Relationship Learning

- Computer Science · UAI
- 2019

This paper proposes an efficient coordinate-wise minimization algorithm that has a closed-form solution for each block subproblem, and provides a nonlinear extension that achieves better generalization than existing methods.

Bioinspired Architecture Selection for Multitask Learning

- Computer Science · Front. Neuroinform.
- 2017

A new method is presented that fully designs MTL architectures by selecting both the subtasks most helpful for learning the main task and the optimal network connections.

and Adaptive Methods for Multitask Learning

- Computer Science
- 2017

The primary focus of this thesis is to scale multitask and lifelong learning to practical applications where both the tasks and the examples of the tasks arrive in an online fashion.

## References

Showing 1-10 of 99 references

Discovering Structure in Multiple Learning Tasks: The TC Algorithm

- Computer Science · ICML
- 1996

The task-clustering algorithm TC clusters learning tasks into classes of mutually related tasks, and outperforms its non-selective counterpart in situations where only a small number of tasks is relevant.

Learning Many Related Tasks at the Same Time with Backpropagation

- Computer Science · NIPS
- 1994

This work shows that a backprop net learning many related tasks at the same time can use those tasks as inductive bias for one another and thus learn better, and gives empirical evidence that multitask backprop generalizes better in real domains.

Solving Multiclass Learning Problems via Error-Correcting Output Codes

- Computer Science · J. Artif. Intell. Res.
- 1995

It is demonstrated that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
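The idea can be sketched in a few lines. Below is a toy example with four classes and a made-up 7-bit codebook whose rows are pairwise at Hamming distance 4, so a single flipped bit, i.e. one wrong binary classifier, still decodes to the correct class:

```python
import numpy as np

# Made-up codebook: one row per class, one column per binary classifier.
# Every pair of rows differs in 4 positions, so after a single bit
# error the true codeword remains strictly nearest.
code = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
])

def decode(bits):
    # In ECOC, one binary classifier predicts each bit; the class whose
    # codeword is nearest in Hamming distance wins.
    return int(np.argmin(np.abs(code - bits).sum(axis=1)))

# Corrupt one bit of class 2's codeword, as if one classifier erred.
noisy = code[2].copy()
noisy[0] ^= 1
print(decode(noisy))  # prints 2: the error is corrected
```

In practice the columns also need to be diverse so the binary classifiers make uncorrelated errors; designing such codebooks is the core of the method.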

Learning internal representations

- Computer Science · COLT '95
- 1995

It is proved that the number of examples required to ensure good generalisation from a representation learner obeys a bound that improves as the number of related tasks grows, that gradient descent can be used to train neural network representations, and experimental results are reported that provide strong qualitative support for the theoretical results.

Goal-directed clustering

- Computer Science

This paper presents DP 1, an incremental clustering algorithm that accepts a description of the expected performance task (the goal of learning) and uses that description to alter its learning bias based on a distribution of prediction tests.

Lifelong Learning: A Case Study.

- Computer Science
- 1995

This paper investigates learning in a lifelong context, where a learner faces a stream of learning tasks, and proposes and evaluates several approaches to lifelong learning that generalize consistently more accurately from scarce training data than comparable "single-task" approaches.

Explanation-based neural network learning

- Computer Science
- 1996

This chapter introduces the major learning approach studied in this book: the explanation-based neural network learning algorithm (EBNN), which approaches the meta-level learning problem by learning a theory of the domain that characterizes the relevance of individual features, their cross-dependencies, or certain invariant properties of the domain.

Learning One More Thing

- Computer Science · IJCAI
- 1995

Results on learning to recognize objects from color images demonstrate superior generalization capabilities if invariances are learned and used to bias subsequent learning.

A Bayesian/information theoretic model of bias learning

- Computer Science · COLT '96
- 1996

In this paper, the problem of learning an appropriate bias for an environment of related tasks is examined from a Bayesian perspective, and it is shown that under these circumstances a learner can use Bayesian inference to learn the true prior by sampling from the objective prior.