Maximum Roaming Multi-Task Learning

@inproceedings{Pascal2021MaximumRM,
  title={Maximum Roaming Multi-Task Learning},
  author={Lucas Pascal and Pietro Michiardi and Xavier Bost and Benoit Huet and Maria A. Zuluaga},
  booktitle={AAAI},
  year={2021}
}
Multi-task learning has gained popularity due to the advantages it provides with respect to resource usage and performance. Nonetheless, the joint optimization of parameters with respect to multiple tasks remains an active research topic. Sub-partitioning the parameters between different tasks has proven to be an efficient way to relax the optimization constraints over the shared weights, whether the partitions are disjoint or overlapping. However, one drawback of this approach is that it can weaken…
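
A minimal PyTorch sketch of the parameter-partitioning idea described in the abstract: each task owns a binary mask over a shared layer's output units, and the masks may be disjoint or overlapping. The class name PartitionedLinear, the random mask initialization, and the overlap fraction are illustrative assumptions, not the paper's actual construction.

import torch
import torch.nn as nn

class PartitionedLinear(nn.Module):
    """Shared linear layer whose output units are partitioned across tasks.

    Each task owns a binary mask over the output units; masks may be
    disjoint or overlapping. A task's forward pass only activates the
    units in its partition, which relaxes the constraint that every
    shared weight must serve every task at once.
    """

    def __init__(self, in_features, out_features, num_tasks, overlap=0.5):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Random (possibly overlapping) partitions: each unit joins a task's
        # partition with probability `overlap`; empty partitions are patched.
        masks = (torch.rand(num_tasks, out_features) < overlap).float()
        masks[masks.sum(dim=1) == 0, 0] = 1.0
        self.register_buffer("masks", masks)

    def forward(self, x, task_id):
        return self.linear(x) * self.masks[task_id]

layer = PartitionedLinear(in_features=16, out_features=32, num_tasks=3)
x = torch.randn(4, 16)
y_task0 = layer(x, task_id=0)  # only task 0's partition of units is active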

Citations

Optimization Strategies in Multi-Task Learning: Averaged or Separated Losses?

This work investigates the benefits of alternating independent gradient descent steps on the different task-specific objective functions, presents a novel way to combine this approach with state-of-the-art optimizers, and proposes a random task grouping as a trade-off between better optimization and computational efficiency.
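
A minimal PyTorch sketch of the two optimization strategies contrasted here: a single optimizer step on the averaged loss versus alternating independent steps, one per task-specific objective. The toy trunk/head model, data, and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

# Toy setup: a shared trunk with two task-specific heads (illustrative only).
trunk = nn.Linear(8, 8)
heads = nn.ModuleList([nn.Linear(8, 1) for _ in range(2)])
optimizer = torch.optim.SGD(list(trunk.parameters()) + list(heads.parameters()), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.randn(32, 8)
targets = [torch.randn(32, 1) for _ in range(2)]

# Strategy 1: one step on the averaged loss.
optimizer.zero_grad()
avg_loss = sum(loss_fn(head(trunk(x)), t) for head, t in zip(heads, targets)) / 2
avg_loss.backward()
optimizer.step()

# Strategy 2: alternating independent steps on each task-specific loss.
for head, t in zip(heads, targets):
    optimizer.zero_grad()
    loss_fn(head(trunk(x)), t).backward()
    optimizer.step()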

Multi-Task Learning with Deep Neural Networks: A Survey

An overview of multi-task learning methods for deep neural networks is given, with the aim of summarizing both the well-established and most recent directions within the field.

Multi-task deep learning for glaucoma detection from color fundus images

This work designs and trains a novel multi-task deep learning model that leverages the similarities between related eye-fundus tasks and measurements used in glaucoma diagnosis; the model outperforms other multi-task learning models, and its performance is on par with that of trained experts.

Deep Safe Multi-Task Learning

This paper proposes a Deep Safe Multi-Task Learning (DSMTL) model with two learning strategies: individual learning and joint learning, and theoretically studies the safeness of both learning strategies in the DSMTL model to show that the proposed methods can achieve some versions of safe multi-task learning.

References

SHOWING 1-10 OF 40 REFERENCES

Multi-Task Learning as Multi-Objective Optimization

This paper proposes an upper bound for the multi-objective loss, shows that it can be optimized efficiently, and proves that optimizing this upper bound yields a Pareto optimal solution under realistic assumptions.
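
A small numerical sketch of the min-norm idea behind the multi-objective formulation: for two tasks, the convex combination of the task gradients with minimum norm has a closed form and serves as a common descent direction; the paper's Frank-Wolfe-based solver generalizes this to many tasks. The function name and the example gradients below are illustrative.

import numpy as np

def min_norm_coefficient(g1, g2):
    """Solve min_a ||a*g1 + (1-a)*g2||^2 for a in [0, 1] in closed form."""
    diff = g1 - g2
    denom = float(np.dot(diff, diff))
    if denom == 0.0:
        return 0.5  # identical gradients: any combination works
    a = float(np.dot(g2 - g1, g2)) / denom
    return min(max(a, 0.0), 1.0)

# Two made-up task gradients with respect to the shared parameters.
g1 = np.array([1.0, 0.5, -0.2])
g2 = np.array([-0.4, 0.8, 0.3])
a = min_norm_coefficient(g1, g2)
common_descent_direction = a * g1 + (1 - a) * g2
print(a, common_descent_direction)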

Many Task Learning With Task Routing

This paper introduces Many Task Learning (MaTL) as a special case of MTL where more than 20 tasks are performed by a single model and applies a conditional feature-wise transformation over the convolutional activations that enables a model to successfully perform a large number of tasks.

Attentive Single-Tasking of Multiple Tasks

In this work we address task interference in universal networks by considering that a network is trained on multiple tasks, but performs one task at a time, an approach we refer to as “single-tasking of multiple tasks”.

End-To-End Multi-Task Learning With Attention

The proposed Multi-Task Attention Network (MTAN) consists of a single shared network containing a global feature pool, together with a soft-attention module for each task, which allows learning of task-specific feature-level attention.
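
A minimal PyTorch sketch of the per-task soft-attention idea: a small learned block produces an attention map in [0, 1] that gates a shared feature map for its task, with one such block per task on top of a shared backbone. The layer sizes and the 1x1-convolution attention block are illustrative assumptions, not MTAN's exact architecture.

import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """Per-task soft attention that gates a shared feature map."""

    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),  # attention values in [0, 1]
        )

    def forward(self, shared_feats):
        return shared_feats * self.attn(shared_feats)

backbone = nn.Conv2d(3, 32, kernel_size=3, padding=1)                 # shared features
attn_modules = nn.ModuleList([TaskAttention(32) for _ in range(2)])   # one per task
x = torch.randn(2, 3, 64, 64)
shared = backbone(x)
task_feats = [m(shared) for m in attn_modules]  # task-specific gated features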

MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning

This paper argues about the importance of considering task interactions at multiple scales when distilling task information in a multi-task learning setup, and proposes a novel architecture, namely MTI-Net, that builds upon this finding in three ways.

Learning to Multitask

The proposed L2MT framework uses a layerwise graph neural network to learn task embeddings for all the tasks in a multitask problem and learns an estimation function that estimates the relative test error; experiments on benchmark datasets show its effectiveness.

Multitask Learning

Prior work on MTL is reviewed, new evidence is presented that MTL in backprop nets discovers task relatedness without the need for supervisory signals, and new results for MTL with k-nearest neighbor and kernel regression are presented.

Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics

A principled approach to multi-task deep learning is proposed which weighs multiple loss functions by considering the homoscedastic uncertainty of each task, allowing us to simultaneously learn various quantities with different units or scales in both classification and regression settings.
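
A minimal PyTorch sketch of homoscedastic-uncertainty weighting: each task i gets a learnable log-variance s_i, and the combined loss is sum_i 0.5 * exp(-s_i) * L_i + 0.5 * s_i (the regression form; the classification form differs by constant factors). The class name and the made-up task losses are illustrative.

import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Combine task losses with learnable homoscedastic-uncertainty weights."""

    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))  # s_i = log(sigma_i^2)

    def forward(self, task_losses):
        total = 0.0
        for s, loss in zip(self.log_vars, task_losses):
            # High uncertainty (large s) down-weights the task but pays a log penalty.
            total = total + 0.5 * torch.exp(-s) * loss + 0.5 * s
        return total

weighting = UncertaintyWeighting(num_tasks=2)
task_losses = [torch.tensor(1.3), torch.tensor(0.4)]  # made-up per-task losses
combined = weighting(task_losses)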

Stochastic Filter Groups for Multi-Task CNNs: Learning Specialist and Generalist Convolution Kernels

This paper proposes "stochastic filter groups" (SFG), a mechanism to assign convolution kernels in each layer to "specialist" and "generalist" groups, which are specific to and shared across different tasks, respectively.

Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights

This work learns binary masks that “piggyback” on an existing network, i.e., are applied to the unmodified weights of that network, to provide good performance on a new task, and shows performance comparable to dedicated fine-tuned networks for a variety of classification tasks.
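
A minimal PyTorch sketch of the weight-masking idea: the pretrained layer's weights stay frozen, a real-valued mask is trained, the mask is thresholded to {0, 1} in the forward pass, and gradients reach the real-valued mask through a straight-through estimator. The class name, the mask initialization, and the threshold are illustrative assumptions rather than the paper's exact settings.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PiggybackLinear(nn.Module):
    """Learn a binary mask over the frozen weights of a pretrained linear layer."""

    def __init__(self, pretrained: nn.Linear, threshold=0.005):
        super().__init__()
        self.weight = pretrained.weight.detach()  # frozen backbone weights
        self.bias = pretrained.bias.detach() if pretrained.bias is not None else None
        self.real_mask = nn.Parameter(torch.full_like(self.weight, 0.01))  # trainable
        self.threshold = threshold

    def forward(self, x):
        hard = (self.real_mask > self.threshold).float()
        # Straight-through estimator: forward uses the hard {0, 1} mask,
        # backward treats the thresholding as the identity function.
        mask = hard + self.real_mask - self.real_mask.detach()
        return F.linear(x, self.weight * mask, self.bias)

backbone_layer = nn.Linear(16, 8)       # stands in for a pretrained layer
masked_layer = PiggybackLinear(backbone_layer)
out = masked_layer(torch.randn(4, 16))  # new-task forward pass with masked weights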