# Survey on Multi-Output Learning

@article{Xu2019SurveyOM,
title={Survey on Multi-Output Learning},
author={Donna Xu and Yaxin Shi and Ivor Wai-Hung Tsang and Y. Ong and Chen Gong and Xiaobo Shen},
journal={IEEE Transactions on Neural Networks and Learning Systems},
year={2019},
volume={31},
pages={2409-2429}
}
• Published 2 January 2019
• Computer Science
• IEEE Transactions on Neural Networks and Learning Systems
The aim of multi-output learning is to simultaneously predict multiple outputs given an input. It is an important learning problem for decision-making since making decisions in the real world often involves multiple complex factors and criteria. In recent times, an increasing number of research studies have focused on ways to predict multiple outputs at once. Such efforts have transpired in different forms according to the particular multi-output learning problem under study. Classic cases of…
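As a toy illustration of the setting the abstract describes (not taken from the survey itself; the data and variable names below are synthetic), the simplest instance of multi-output learning is multi-target linear regression, where a single input vector is mapped to a vector of outputs and all targets are fitted jointly:

```python
import numpy as np

# Multi-output regression sketch: each input x (5 features) is mapped
# to 3 outputs at once. Data are synthetic for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))             # 100 samples, 5 input features
W_true = rng.normal(size=(5, 3))          # ground-truth map to 3 targets
Y = X @ W_true + 0.01 * rng.normal(size=(100, 3))

# Fit all three targets jointly with one ordinary least-squares solve.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

Y_pred = X @ W_hat                        # predicts all 3 outputs per input
print(Y_pred.shape)                       # (100, 3)
```

Richer variants surveyed in the paper (multi-label classification, structured prediction, label ranking) differ mainly in the output space and in how dependencies between the outputs are modeled.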
## 107 Citations

• ArXiv, 2019 (Computer Science): This paper introduces an algorithm that uses the problem transformation method for multi-output prediction while simultaneously learning the dependencies between target variables in a sparse and interpretable manner.
• ICML, 2020 (Computer Science, Mathematics): It is shown that the self-bounding Lipschitz condition gives rise to optimistic bounds for multi-output learning, which are minimax optimal up to logarithmic factors.
• ArXiv, 2022 (Computer Science): A formal definition for XML (extreme multi-label learning) from the perspective of supervised learning is clarified, and possible research directions in XML, such as new evaluation metrics, the tail label problem, and weakly supervised XML, are proposed.
• IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022 (Computer Science): There has been a lack of systematic studies that focus explicitly on analyzing the emerging trends and new challenges of multi-label learning in the era of big data, and it is imperative to call for a comprehensive survey to fulfil this mission.
• Applied Intelligence, 2021 (Computer Science): A genetic-algorithm-based semi-supervised technique for multi-target regression (MTR) is proposed that predicts new targets from a very small number of labelled examples by incorporating a GA with MTR-SAFER.
• AISTATS, 2022 (Computer Science): This work proposes a safe active learning approach for multi-output Gaussian process regression that queries the most informative data or output, taking the relatedness between the regressors and safety constraints into account, and shows improved convergence compared to its competitors.
• AAAI, 2019 (Computer Science): A first attempt towards feature manipulation for MDC (multi-dimensional classification) is proposed which enriches the original feature space with kNN-augmented features, and results clearly show that the classification performance of existing MDC approaches can be significantly improved by incorporating kNN-augmented features.
• 2020 25th International Conference on Pattern Recognition (ICPR), 2021 (Computer Science): A first attempt towards adapting instance-based techniques for MDC is investigated, and a new approach named MD-kNN is proposed, which identifies an unseen instance's nearest neighbors and obtains its corresponding kNN counting statistics for each class space.
• IEEE Transactions on Neural Networks and Learning Systems, 2022 (Computer Science): A first attempt toward adapting maximum margin techniques for the MDC problem is made, and a novel approach named M3MDC is proposed, which maximizes the margins between each pair of class labels with respect to each individual class variable while modeling relationships across class variables via covariance regularization.

## References

Showing 1-10 of 314 references.

• IEEE Transactions on Knowledge and Data Engineering, 2014 (Computer Science): This paper aims to provide a timely review on this area with emphasis on state-of-the-art multi-label learning algorithms, with relevant analyses and discussions.
• Prior work on MTL (multi-task learning) is reviewed, new evidence is presented that MTL in backprop nets discovers task relatedness without the need for supervisory signals, and new results for MTL with k-nearest neighbor and kernel regression are presented.
• J. Mach. Learn. Res., 2005 (Computer Science): This paper proposes to appropriately generalize the well-known notion of a separation margin, derives a corresponding maximum-margin formulation, and presents a cutting plane algorithm that solves the optimization problem in polynomial time for a large class of problems.
• NIPS, 2017 (Computer Science): This paper replaces classifier chains with recurrent neural networks, a sequence-to-sequence prediction approach that has recently been successfully applied to sequential prediction tasks in many domains; it compares different ways of ordering the label set and gives some recommendations on suitable ordering strategies.
• ICML, 2016 (Computer Science): This paper proposes to make use of the underlying structure of binary classification by learning to partition the labels into a Markov Blanket Chain and then applying a novel deep architecture that exploits the partition.
• CIKM, 2018 (Computer Science, Environmental Science): This study introduces a novel online and dynamically weighted stacked ensemble for multi-label classification, called GOOWE-ML, that utilizes spatial modeling to assign optimal weights to its component classifiers.
• Machine Learning, 2012 (Computer Science): This paper proposes a new experimental framework for learning and evaluating on multi-label data streams, uses it to study the performance of various methods, and develops a multi-label Hoeffding tree with multi-label classifiers at the leaves.
• ICML, 2017 (Computer Science): This paper studies gradient boosted decision trees (GBDT) when the output space is high-dimensional and sparse, and proposes a new GBDT variant, GBDT-SPARSE, to resolve this problem by employing L0 regularization.
• ICML, 2016 (Computer Science): This paper proposes a new multi-label classification method based on Conditional Bernoulli Mixtures that captures label dependencies, and derives an efficient prediction procedure based on dynamic programming, thus avoiding the cost of examining an exponential number of potential label subsets.