Survey on Multi-Output Learning

@article{Xu2019SurveyOM,
  title={Survey on Multi-Output Learning},
  author={Donna Xu and Yaxin Shi and Ivor Wai-Hung Tsang and Y. Ong and Chen Gong and Xiaobo Shen},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2019},
  volume={31},
  pages={2409-2429}
}
  • Donna Xu, Yaxin Shi, Ivor Wai-Hung Tsang, Y. Ong, Chen Gong, Xiaobo Shen
  • Published 2 January 2019
  • Computer Science
  • IEEE Transactions on Neural Networks and Learning Systems
The aim of multi-output learning is to simultaneously predict multiple outputs given an input. It is an important learning problem for decision-making since making decisions in the real world often involves multiple complex factors and criteria. In recent times, an increasing number of research studies have focused on ways to predict multiple outputs at once. Such efforts have transpired in different forms according to the particular multi-output learning problem under study. Classic cases of… 
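
In the notation commonly used for this setting (assumed here, not taken verbatim from the paper), the task is to learn a single predictor over a product of output spaces:

  f : \mathcal{X} \to \mathcal{Y}_1 \times \cdots \times \mathcal{Y}_m, \qquad f(\mathbf{x}) = (y_1, \ldots, y_m),

where each \mathcal{Y}_j may be binary (multi-label classification), categorical (multi-dimensional classification), or real-valued (multi-target regression).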

Citations

Component-Wise Boosting of Targets for Multi-Output Prediction

This paper introduces an algorithm that uses the problem transformation method for multi-output prediction, while simultaneously learning the dependencies between target variables in a sparse and interpretable manner.
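
As a rough illustration of that idea (and not the authors' component-wise boosting algorithm), a problem-transformation pipeline with sparse target dependencies might look as follows; the two-stage structure and the Lasso-based dependency step are assumptions made for this sketch.

  import numpy as np
  from sklearn.ensemble import GradientBoostingRegressor
  from sklearn.linear_model import Lasso

  def fit_multi_output(X, Y, alpha=0.1):
      # Stage 1 (problem transformation): one booster per target, inputs only.
      base = [GradientBoostingRegressor().fit(X, Y[:, j]) for j in range(Y.shape[1])]
      P = np.column_stack([m.predict(X) for m in base])
      # Stage 2 (sparse dependencies): each target's residual is regressed on the
      # other targets' scores with an L1 penalty, so only a few dependencies
      # survive and the coefficients stay interpretable.
      dep = [Lasso(alpha=alpha).fit(np.delete(P, j, axis=1), Y[:, j] - P[:, j])
             for j in range(Y.shape[1])]
      return base, dep

  def predict_multi_output(base, dep, X):
      P = np.column_stack([m.predict(X) for m in base])
      return np.column_stack([P[:, j] + d.predict(np.delete(P, j, axis=1))
                              for j, d in enumerate(dep)])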

Optimistic bounds for multi-output prediction

It is shown that the self-bounding Lipschitz condition gives rise to optimistic bounds for multi-output learning, which are minimax optimal up to logarithmic factors.

Deep tree-ensembles for multi-output prediction

A Survey on Extreme Multi-label Learning

A formal definition of XML from the perspective of supervised learning is clarified, and possible research directions in XML, such as new evaluation metrics, the tail label problem, and weakly supervised XML, are proposed.

The Emerging Trends of Multi-Label Learning

There has been a lack of systematic studies that focus explicitly on analyzing the emerging trends and new challenges of multi-label learning in the era of big data, so it is imperative to call for a comprehensive survey to fulfil this mission.

Feature selection for semi-supervised multi-target regression using genetic algorithm

A genetic-algorithm-based semi-supervised technique for multi-target regression problems is proposed, which predicts new targets using a very small number of labelled examples by incorporating the GA with MTR-SAFER.

Safe Active Learning for Multi-Output Gaussian Processes

This work proposes a safe active learning approach for multi-output Gaussian process regression that queries the most informative data point or output, taking the relatedness between the regressors and the safety constraints into account, and shows improved convergence compared to its competitors.
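
A minimal sketch of such a selection rule, assuming independent per-output GPs, a total-variance informativeness score, and a probabilistic safety filter (all simplifications relative to the cited approach, which models the relatedness between outputs explicitly):

  import numpy as np
  from scipy.stats import norm

  def select_safe_query(X_pool, output_gps, safety_gp, threshold=0.0, min_safe_prob=0.95):
      # output_gps: fitted regressors with predict(X, return_std=True), one per output
      # safety_gp:  a regressor of the same kind fitted on the safety signal
      info = np.zeros(len(X_pool))
      for gp in output_gps:                      # informativeness = total predictive std
          _, std = gp.predict(X_pool, return_std=True)
          info += std
      mu_s, std_s = safety_gp.predict(X_pool, return_std=True)
      p_safe = 1.0 - norm.cdf(threshold, loc=mu_s, scale=std_s)
      info[p_safe < min_safe_prob] = -np.inf     # never query candidates deemed unsafe
      return int(np.argmax(info))                # index of the next point to evaluate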

Multi-Dimensional Classification via kNN Feature Augmentation

A first attempt towards feature manipulation for MDC is proposed, which enriches the original feature space with kNN-augmented features; results clearly show that the classification performance of existing MDC approaches can be significantly improved by incorporating the kNN-augmented features.
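
A minimal sketch of kNN feature augmentation for MDC, assuming numeric feature matrices and one block of neighbour-count features per class space (details differ from the cited approach):

  import numpy as np
  from sklearn.neighbors import NearestNeighbors

  def knn_augment(X_train, Y_train, X, k=5):
      # Append, for every instance, the counts of each class value observed
      # among its k nearest training neighbours, one block per class space.
      nn = NearestNeighbors(n_neighbors=k).fit(X_train)
      _, idx = nn.kneighbors(X)                  # (n_samples, k) neighbour indices
      extra = []
      for j in range(Y_train.shape[1]):          # one class space at a time
          values = np.unique(Y_train[:, j])
          counts = np.stack([(Y_train[idx, j] == v).sum(axis=1) for v in values], axis=1)
          extra.append(counts)
      return np.hstack([X] + extra)              # enriched feature matrix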

Md-knn: An Instance-based Approach for Multi-Dimensional Classification

A first attempt towards adapting instance-based techniques for MDC is investigated, and a new approach named Md-knn is proposed, which identifies an unseen instance's nearest neighbors and obtains the corresponding kNN counting statistics for each class space.
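
The same counting statistics can also be used directly for instance-based prediction; the sketch below simply takes the most frequent neighbour class per class space, which is a simplification of how Md-knn actually combines the statistics:

  import numpy as np
  from sklearn.neighbors import NearestNeighbors

  def md_knn_predict(X_train, Y_train, X, k=5):
      nn = NearestNeighbors(n_neighbors=k).fit(X_train)
      _, idx = nn.kneighbors(X)
      preds = []
      for j in range(Y_train.shape[1]):          # one class space at a time
          values = np.unique(Y_train[:, j])
          counts = np.stack([(Y_train[idx, j] == v).sum(axis=1) for v in values], axis=1)
          preds.append(values[counts.argmax(axis=1)])   # most frequent neighbour class
      return np.column_stack(preds)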

Maximum Margin Multi-Dimensional Classification

A first attempt toward adapting maximum margin techniques for the MDC problem is made, and a novel approach named M3MDC is proposed, which maximizes the margins between each pair of class labels with respect to each individual class variable while modeling the relationship across class variables via covariance regularization.
...

References

Showing 1-10 of 314 references

A Review on Multi-Label Learning Algorithms

This paper aims to provide a timely review of this area, with emphasis on state-of-the-art multi-label learning algorithms and relevant analyses and discussions.

Remarks on multi-output Gaussian process regression

Multitask Learning

Prior work on MTL is reviewed, new evidence is presented that MTL in backprop nets discovers task relatedness without the need for supervisory signals, and new results are given for MTL with k-nearest neighbors and kernel regression.
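
A generic hard-parameter-sharing network in the spirit of MTL with backprop nets (a sketch of the architecture idea, not the original experimental setup):

  import torch
  import torch.nn as nn

  class SharedTrunkMTL(nn.Module):
      """Hard parameter sharing: one trunk trained by all tasks, one head per task."""
      def __init__(self, in_dim, hidden_dim, task_dims):
          super().__init__()
          self.trunk = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
          self.heads = nn.ModuleList(nn.Linear(hidden_dim, d) for d in task_dims)

      def forward(self, x):
          h = self.trunk(x)                      # shared representation shaped by every task
          return [head(h) for head in self.heads]

  # Training minimizes the sum of per-task losses, so related tasks act as an
  # inductive bias on the shared trunk.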

Large Margin Methods for Structured and Interdependent Output Variables

This paper proposes to appropriately generalize the well-known notion of a separation margin and derive a corresponding maximum-margin formulation and presents a cutting plane algorithm that solves the optimization problem in polynomial time for a large class of problems.
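
The generalized separation margin leads, in its margin-rescaling form, to the well-known constrained problem (slack variables \xi_i, joint feature map \Psi, structured loss \Delta):

  \min_{\mathbf{w},\,\boldsymbol{\xi}\ge 0}\ \tfrac{1}{2}\lVert\mathbf{w}\rVert^{2} + \frac{C}{n}\sum_{i=1}^{n}\xi_{i}
  \quad\text{s.t.}\quad
  \big\langle \mathbf{w},\,\Psi(\mathbf{x}_{i},\mathbf{y}_{i}) - \Psi(\mathbf{x}_{i},\mathbf{y}) \big\rangle \;\ge\; \Delta(\mathbf{y}_{i},\mathbf{y}) - \xi_{i}
  \qquad \forall i,\ \forall\mathbf{y}\in\mathcal{Y}\setminus\{\mathbf{y}_{i}\}.

The cutting-plane algorithm copes with the exponentially many constraints by repeatedly adding only the most violated one per example.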

Maximizing Subset Accuracy with Recurrent Neural Networks in Multi-label Classification

This paper replaces classifier chains with recurrent neural networks, a sequence-to-sequence prediction approach that has recently been successfully applied to sequential prediction tasks in many domains, compares different ways of ordering the label set, and gives recommendations on suitable ordering strategies.
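
Both classifier chains and their recurrent replacement rest on the chain-rule factorization over an ordered label sequence (notation assumed):

  P(\mathbf{y}\mid\mathbf{x}) \;=\; \prod_{k=1}^{K} P\big(y_{k}\mid y_{1},\ldots,y_{k-1},\,\mathbf{x}\big),

so maximizing this joint probability directly targets subset (exact-match) accuracy, and the ordering of the labels inside the product is exactly what the paper's comparison varies.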

ADIOS: Architectures Deep In Output Space

This paper proposes to make use of the underlying structure of binary classification by learning to partition the labels into a Markov Blanket Chain and then applying a novel deep architecture that exploits the partition.

A Novel Online Stacked Ensemble for Multi-Label Stream Classification

This study introduces a novel online and dynamically-weighted stacked ensemble for multi-label classification, called GOOWE-ML, that utilizes spatial modeling to assign optimal weights to its component classifiers.

Scalable and efficient multi-label classification for evolving data streams

This paper proposes a new experimental framework for learning and evaluating on multi-label data streams, uses it to study the performance of various methods, and develops a multi-label Hoeffding tree with multi-label classifiers at the leaves.

Gradient Boosted Decision Trees for High Dimensional Sparse Output

This paper studies gradient boosted decision trees (GBDT) when the output space is high-dimensional and sparse, and proposes a new GBDT variant, GBDT-SPARSE, to resolve this problem by employing L0 regularization.
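
The core trick is to keep each example's predicted score vector sparse; a toy version of such an L0-style projection (a hypothetical helper, not the GBDT-SPARSE implementation) is:

  import numpy as np

  def project_top_q(scores, q):
      """Keep the q largest-magnitude label scores and zero the rest,
      an L0-constrained projection that keeps per-example outputs sparse."""
      out = np.zeros_like(scores)
      keep = np.argsort(np.abs(scores))[-q:]
      out[keep] = scores[keep]
      return out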

Conditional Bernoulli Mixtures for Multi-label Classification

This paper proposes a new multi-label classification method based on Conditional Bernoulli Mixtures that captures label dependencies and derives an efficient prediction procedure based on dynamic programming, thus avoiding the cost of examining an exponential number of potential label subsets.
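
Schematically, a conditional Bernoulli mixture factors the label distribution through mixture components within which labels are conditionally independent (symbols assumed):

  p(\mathbf{y}\mid\mathbf{x}) \;=\; \sum_{k=1}^{K} \pi_{k}(\mathbf{x}) \prod_{l=1}^{L} \mathrm{Bern}\!\big(y_{l}\,;\,\mu_{kl}(\mathbf{x})\big),

so label dependencies are carried by the input-dependent mixing weights \pi_k(\mathbf{x}) while each component remains tractable.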
...