Scalable Pareto Front Approximation for Deep Multi-Objective Learning

@inproceedings{ruchte2021scalable,
  title={Scalable Pareto Front Approximation for Deep Multi-Objective Learning},
  author={Michael Ruchte and Josif Grabocka},
  booktitle={2021 IEEE International Conference on Data Mining (ICDM)},
  year={2021}
}
Multi-objective optimization is important for various Deep Learning applications; however, no prior multi-objective method suits very deep networks. Existing approaches either require training a new network for every solution on the Pareto front or add a considerable overhead to the number of parameters by introducing hyper-networks conditioned on modifiable preferences. In this paper, we present a novel method that contextualizes the network directly on the preferences by adding them to the… 
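The abstract above describes conditioning a single network on a preference vector rather than training one network per trade-off. A minimal numpy sketch of that idea, under the assumption that the preference is sampled from a Dirichlet distribution, appended to the input features, and used to linearly scalarize the per-objective losses (function names here are illustrative, not the paper's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def preference_conditioned_batch(x, n_objectives=2):
    """Sample a preference vector and append it to every input row,
    so one network can be conditioned on the desired trade-off."""
    pref = rng.dirichlet(np.ones(n_objectives))           # e.g. [0.3, 0.7]
    x_cond = np.hstack([x, np.tile(pref, (x.shape[0], 1))])
    return x_cond, pref

def scalarized_loss(losses, pref):
    """Linear scalarization: weight each per-objective loss by the preference."""
    return float(np.dot(pref, losses))

x = rng.normal(size=(4, 3))                               # toy batch of 4 inputs
x_cond, pref = preference_conditioned_batch(x)
loss = scalarized_loss(np.array([1.0, 2.0]), pref)        # two toy objective losses
```

At inference time the same trained network traces out the Pareto front simply by varying the appended preference vector, with no extra parameters beyond the slightly wider input layer.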
A Multi-objective / Multi-task Learning Framework Induced by Pareto Stationarity
A novel and generic framework is developed to discover a Pareto-optimal (PO) solution under multiple forms of preferences, allowing a generic MOO/MTL problem to express a preference that is then solved to satisfy both the preference and Pareto optimality.
Multi-objective ranking with directions of preferences
This paper applies several MOO methods with directions of preferences (MOO-PD), such as Exact Pareto Optimal search, to learning to rank (LTR), and proposes a novel model evaluation metric, the maximum weighted loss, which may benefit many application use cases in practice.
Multi-task problems are not multi-objective
This work shows that MTL problems do not exhibit the characteristics of MOO problems, and that a single model can perform just as well as independent models optimized for each objective, rendering MOO inapplicable.
Towards Fairness-Aware Multi-Objective Optimization
This paper starts with a discussion of user preferences in multi-objective optimization, then explores its relationship to fairness in machine learning, and further elaborates the importance of fairness in traditional multi-objective optimization, data-driven optimization, and federated optimization.


Multi-Task Learning with User Preferences: Gradient Descent with Controlled Ascent in Pareto Optimization
This work develops the first gradient-based multi-objective MTL algorithm that combines multiple gradient descent with carefully controlled ascent to traverse the Pareto front in a principled manner, which also makes it robust to initialization.
Learning the Pareto Front with Hypernetworks
The problem of learning the entire Pareto front, with the capability of selecting a desired operating point on the front after training, is tackled; this Pareto Front Learning (PFL) opens the door to new applications where models are selected based on preferences that are only available at run time.
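The hypernetwork approach summarized above amortizes the whole front into one auxiliary network that maps a preference vector to the weights of the target network. A minimal numpy sketch, assuming the hypernetwork is a single linear map and the target network a single linear layer (both are illustrative simplifications):

```python
import numpy as np

rng = np.random.default_rng(0)

IN, OUT = 3, 2                      # target network: one linear layer
N_WEIGHTS = IN * OUT + OUT          # its weights plus biases

# Hypernetwork: a linear map from a 2-d preference vector to all target weights.
H = rng.normal(scale=0.1, size=(N_WEIGHTS, 2))

def target_forward(x, pref):
    """Generate the target network's weights from the preference, then run it."""
    theta = H @ pref                          # hypernetwork output
    W = theta[: IN * OUT].reshape(OUT, IN)    # unpack weight matrix
    b = theta[IN * OUT:]                      # unpack bias
    return x @ W.T + b

x = rng.normal(size=(4, IN))
y_a = target_forward(x, np.array([1.0, 0.0]))   # one end of the front
y_b = target_forward(x, np.array([0.0, 1.0]))   # the other end
```

Only the hypernetwork is trained; after training, sweeping the preference input produces a different target model per trade-off, which is the "select an operating point at run time" capability the summary refers to.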
Efficient Continuous Pareto Exploration in Multi-Task Learning
This work proposes a sample-based sparse linear system to which standard Hessian-free solvers from machine learning can be applied; it reveals the primary directions in local Pareto sets for trade-off balancing, efficiently finds more solutions with different trade-offs, and scales well to tasks with millions of parameters.
Multi-objective Reinforcement Learning through Continuous Pareto Manifold Approximation
This paper proposes a reinforcement learning policy gradient approach to learn a continuous approximation of the Pareto frontier in multi-objective Markov Decision Problems (MOMDPs) by optimizing the parameters of a function defining a manifold in the policy parameter space so that the corresponding image in the objective space gets as close as possible to the true Pareto frontier.
Efficient Multi-Objective Neural Architecture Search via Lamarckian Evolution
This work proposes LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the entire Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method.
Controllable Pareto Multi-Task Learning
This work proposes a novel controllable Pareto multi-task learning framework that enables the system to switch trade-offs among different tasks in real time with a single model.
A Generalized Algorithm for Multi-Objective Reinforcement Learning and Policy Adaptation
A generalized version of the Bellman equation is proposed to learn a single parametric representation for optimal policies over the space of all possible preferences in MORL, with the goal of enabling few-shot adaptation to new tasks.
GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks
A gradient normalization (GradNorm) algorithm that automatically balances training in deep multitask models by dynamically tuning gradient magnitudes is presented, showing that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting across multiple tasks.
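The GradNorm idea summarized above can be sketched in a few lines: tasks whose weighted gradient norms are larger than a common target get their loss weights reduced, and vice versa. A simplified single-step numpy illustration, assuming scalar gradient norms per task are already measured (the update rule here is a coarse sign-based approximation of the paper's, for illustration only):

```python
import numpy as np

def gradnorm_step(w, grad_norms, loss_ratios, alpha=1.5, lr=0.1):
    """One simplified GradNorm update: push each task's weighted gradient
    norm toward the mean norm, scaled by that task's relative inverse
    training rate r_i."""
    g = w * grad_norms                        # weighted gradient norms
    r = loss_ratios / loss_ratios.mean()      # relative inverse training rates
    target = g.mean() * r ** alpha            # desired norm per task
    grad_w = np.sign(g - target)              # subgradient of |g - target| w.r.t. w
    w = np.clip(w - lr * grad_w * grad_norms, 1e-3, None)
    return w * len(w) / w.sum()               # renormalize: weights sum to n_tasks

w = np.ones(2)
w = gradnorm_step(w, grad_norms=np.array([4.0, 1.0]),
                  loss_ratios=np.array([1.0, 1.0]))
```

With equal training rates, the task whose raw gradients are 4x larger ends up with the smaller loss weight, which is exactly the balancing behavior the summary describes.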
Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
A principled approach to multi-task deep learning is proposed which weighs multiple loss functions by considering the homoscedastic uncertainty of each task, allowing us to simultaneously learn various quantities with different units or scales in both classification and regression settings.
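The homoscedastic-uncertainty weighting above reduces to a compact formula: each task loss L_i is scaled by exp(-s_i), where s_i is a learned log-variance, plus a regularizing +s_i term that stops the model from inflating all variances. A minimal numpy sketch of that combined loss (in practice the s_i are trainable parameters updated by backprop):

```python
import numpy as np

def uncertainty_weighted_loss(losses, log_vars):
    """Kendall et al.-style weighting: total = sum_i exp(-s_i) * L_i + s_i,
    where s_i = log(sigma_i^2) is a learned homoscedastic log-variance."""
    losses, log_vars = np.asarray(losses), np.asarray(log_vars)
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))

# Two tasks with equal raw losses: the noisier task (larger s_i)
# contributes with a smaller effective weight.
total = uncertainty_weighted_loss([1.0, 1.0], [0.0, 1.0])
```

Because the weights are functions of learned variances rather than hand-tuned scales, tasks with different units or loss magnitudes can be combined without manual rebalancing.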
Multi-objective reinforcement learning using sets of pareto dominating policies
A novel temporal-difference learning algorithm is presented that integrates the Pareto dominance relation into a reinforcement learning approach and outperforms current state-of-the-art MORL algorithms with respect to the hypervolume of the obtained policies.