Multi-target Regression via Random Linear Target Combinations

@inproceedings{Tsoumakas2014MultitargetRV,
  title={Multi-target Regression via Random Linear Target Combinations},
  author={Grigorios Tsoumakas and Eleftherios Spyromitros Xioufis and Aikaterini Vrekou and Ioannis P. Vlahavas},
  booktitle={ECML/PKDD},
  year={2014}
}
Multi-target regression is concerned with the simultaneous prediction of multiple continuous target variables based on the same set of input variables. It arises in several interesting industrial and environmental application domains, such as ecological modelling and energy forecasting. This paper presents an ensemble method for multi-target regression that constructs new target variables via random linear combinations of existing targets. We discuss the connection of our approach with multi… 
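
Below is a minimal sketch of the core idea, random linear combinations of the original targets, using NumPy and scikit-learn. The uniform coefficient sampling, the decision-tree base learner, and the least-squares decoding step are illustrative assumptions rather than the paper's exact protocol (target normalisation, for instance, is omitted).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def fit_rlc(X, Y, n_combinations=100, base=DecisionTreeRegressor):
    """Train one single-target model per random linear combination of targets."""
    n_targets = Y.shape[1]
    # Random coefficient matrix A: row i defines the derived target A[i] @ y.
    A = rng.uniform(size=(n_combinations, n_targets))
    Z = Y @ A.T                                   # derived targets, shape (n, k)
    models = [base().fit(X, Z[:, i]) for i in range(n_combinations)]
    return models, A

def predict_rlc(models, A, X):
    """Predict the derived targets, then recover the originals by least squares."""
    Z_hat = np.column_stack([m.predict(X) for m in models])   # (n, k)
    # With k >= number of targets, A @ y ≈ z_hat is overdetermined per instance.
    Y_hat, *_ = np.linalg.lstsq(A, Z_hat.T, rcond=None)
    return Y_hat.T
```

Averaging information across many random combinations is what gives the approach its ensemble character; both accuracy and cost grow with the number of combinations.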

An Empirical Comparison on Multi-Target Regression Learning

TLDR
It is indicated that single-target learning is a competitive baseline for multi-target regression learning on multi-target domains; among the multi-target methods, MTS performs the best, followed by RLTC and then MORF.

Multi-target regression via input space expansion: treating targets as inputs

TLDR
This paper introduces two new methods for multi-target regression, called stacked single-target and ensemble of regressor chains, by adapting two popular multi-label classification methods of this family, and highlights an inherent problem of these methods—a discrepancy of the values of the additional input variables between training and prediction.
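
As a rough illustration of the "targets as inputs" idea, the sketch below trains a first stage of single-target models, appends their predictions to the input space, and fits a second stage on the augmented inputs. Ridge regression and in-sample first-stage predictions are simplifying assumptions; the paper's procedure differs in its details.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_sst(X, Y, base=Ridge):
    first = [base().fit(X, Y[:, j]) for j in range(Y.shape[1])]
    # Augment the input space with estimates of the targets.
    meta = np.column_stack([m.predict(X) for m in first])
    X_aug = np.hstack([X, meta])
    second = [base().fit(X_aug, Y[:, j]) for j in range(Y.shape[1])]
    return first, second

def predict_sst(first, second, X):
    meta = np.column_stack([m.predict(X) for m in first])
    X_aug = np.hstack([X, meta])
    return np.column_stack([m.predict(X_aug) for m in second])
```

Producing the first-stage estimates with internal cross-validation instead of in-sample predictions is one common way to reduce the training/prediction discrepancy highlighted above.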

Ensembles for multi-target regression with random output selections

TLDR
The proposed ensemble extension can yield better predictive performance, reduce learning time, or both, without a considerable change in model size, and gives the best results when used with extremely randomized PCTs.
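
A bare-bones version of the random-output-selection idea is sketched below: each ensemble member models only a random subset of the targets, and a target's prediction is averaged over the members that include it. Scikit-learn's multi-output DecisionTreeRegressor is used as a stand-in for PCTs, and the subset size is an arbitrary choice.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def fit_ros_ensemble(X, Y, n_members=50, subset_size=2):
    members = []
    for _ in range(n_members):
        targets = rng.choice(Y.shape[1], size=subset_size, replace=False)
        members.append((DecisionTreeRegressor().fit(X, Y[:, targets]), targets))
    return members

def predict_ros_ensemble(members, X, n_targets):
    sums = np.zeros((X.shape[0], n_targets))
    counts = np.zeros(n_targets)
    for model, targets in members:
        sums[:, targets] += model.predict(X).reshape(X.shape[0], -1)
        counts[targets] += 1
    # Assumes every target was selected by at least one member.
    return sums / counts
```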

Conditionally Decorrelated Multi-Target Regression

TLDR
A novel MTR framework, termed Conditionally Decorrelated Multi-Target Regression (CDMTR), is proposed, which learns from the MTR data following three elementary steps: clustering analysis, conditional target decorrelation, and induction of multi-target regression models.

Multi-Target Regression Rules With Random Output Selections

TLDR
The results show that FIRE-ROS can improve the predictive performance of the FIRE method and that it performs on par with state-of-the-art (non-interpretable) MTR methods.

Towards meta-learning for multi-target regression problems

TLDR
A meta-learning system to recommend the best predictive method for a given multi-target regression problem, achieving a balanced accuracy above 70% with a Random Forest meta-model, which statistically outperformed the meta-learning baselines.

Rotation Forest for multi-target regression

TLDR
The Rotation Forest ensemble method, previously proposed for single-label classification and single-target regression, is adapted to MTR tasks and outperforms other popular ensembles, such as Bagging and Random Forest.
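
The sketch below condenses the Rotation Forest idea in a multi-target setting: features are split into random groups, each group is rotated by its PCA loadings, and a multi-output tree is grown on the rotated data. The group count and the use of full-sample PCA (rather than PCA on class/instance subsamples, as in the original method) are simplifications for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def fit_rotation_forest_mtr(X, Y, n_trees=20, n_groups=3):
    forest = []
    for _ in range(n_trees):
        groups = np.array_split(rng.permutation(X.shape[1]), n_groups)
        # Assemble a block-diagonal rotation from per-group PCA loadings.
        R = np.zeros((X.shape[1], X.shape[1]))
        for g in groups:
            R[np.ix_(g, g)] = PCA().fit(X[:, g]).components_.T
        forest.append((R, DecisionTreeRegressor().fit(X @ R, Y)))
    return forest

def predict_rotation_forest_mtr(forest, X):
    # Average the multi-output predictions of the rotated trees.
    return np.mean([tree.predict(X @ R) for R, tree in forest], axis=0)
```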

Multi-target regression via output space quantization

TLDR
The proposed method, called MRQ, is based on the idea of quantizing the output space in order to transform the multiple continuous targets into one or more discrete ones and can be flexibly parameterized to control the trade-off between prediction accuracy and computational efficiency.
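
A bare-bones version of the quantization idea: discretize each target into equal-width bins, learn one classifier per target over the bin indices, and map predicted bins back to their centres. KBinsDiscretizer, the per-target random forests, and the bin count are stand-ins chosen for illustration, not the MRQ configuration.

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.ensemble import RandomForestClassifier

def fit_quantized_mtr(X, Y, n_bins=8):
    disc = KBinsDiscretizer(n_bins=n_bins, encode="ordinal", strategy="uniform")
    bins = disc.fit_transform(Y)                  # (n, q) ordinal bin indices
    clfs = [RandomForestClassifier().fit(X, bins[:, j]) for j in range(Y.shape[1])]
    return disc, clfs

def predict_quantized_mtr(disc, clfs, X):
    bins_hat = np.column_stack([c.predict(X) for c in clfs])
    # inverse_transform maps each predicted bin index to its bin centre; coarser
    # bins mean a cheaper discrete learning problem but lower output resolution.
    return disc.inverse_transform(bins_hat)
```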

Ensembles of Extremely Randomized Trees for Multi-target Regression

TLDR
This work considers Extra-Tree ensembles (the overall top performer in the DREAM4 and DREAM5 challenges for gene network reconstruction) and proposes to use predictive clustering trees (PCTs), a generalization of decision trees for predicting structured outputs, including multiple continuous variables.
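
For a quick, self-contained illustration of ensembles of extremely randomized trees over several continuous targets, scikit-learn's ExtraTreesRegressor is used below on synthetic data as a convenient stand-in for the PCT-based ensembles discussed above; it predicts all targets jointly.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor

# Synthetic multi-target data purely for demonstration.
X, Y = make_regression(n_samples=500, n_features=10, n_targets=3, random_state=0)
model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X, Y)
print(model.predict(X[:2]).shape)   # (2, 3): one ensemble, three targets jointly
```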
...

References

Showing 1-10 of 38 references

Multi-Label Classification Methods for Multi-Target Regression

TLDR
Two new multi-target regression algorithms are introduced, multi-target stacking (MTS) and ensemble of regressor chains (ERC), inspired by two popular multi-label classification approaches that are based on a single-target decomposition of the multi-target problem and the idea of treating the other prediction targets as additional input variables that augment the input space.

Multi-target regression with rule ensembles

TLDR
The FIRE algorithm for solving multi-target regression problems is introduced; it employs the rule ensembles approach, produces models that are significantly more concise than random forests, and can also create compact rule sets that are smaller than a single regression tree yet still comparable in accuracy.

Rule Ensembles for Multi-target Regression

TLDR
A new system for learning rule ensembles for multi-target regression problems that is significantly more concise than random forests, and it is also possible to create very small rule sets that are still comparable in accuracy to single regression trees.

Empirical Asymmetric Selective Transfer in Multi-objective Decision Trees

TLDR
Empirical Asymmetric Selective Transfer (EAST) is proposed, a generally applicable algorithm that approximates the best subset of the other targets that, when combined with the main target in a multi-target model, results in the most accurate model for the main target.

Drawing Parallels between Multi-Label Classification and Multi-Target Regression

TLDR
This talk will emphasize this tight relationship between multi-label classification and multi-target regression by discussing their similarities, while also highlighting their differences, and further discuss techniques that can inherently handle both tasks.

Ensembles of Multi-Objective Decision Trees

TLDR
This paper considers two ensemble learning techniques, bagging and random forests, and applies them to multi-objective decision trees (MODTs), which are decision trees that predict multiple target attributes at once, and concludes that ensembles of MODTs yield better predictive performance than single MODTs and are equally good as, or better than, ensembles of single-objective decision trees.

Stepwise Induction of Multi-target Model Trees

TLDR
Experiments show that multi-target model trees are much smaller than the corresponding sets of single-target model trees and are induced much faster, while achieving comparable accuracies.

Learning Classification Rules for Multiple Target Attributes

TLDR
This work proposes a new rule learning algorithm, which (unlike existing rule learning approaches) generalizes to multiple-target prediction, and empirically evaluates the new method, showing that rule sets predicting several targets yield accuracy comparable to the respective collections of single-target rule sets.

A Dirty Model for Multi-task Learning

We consider multi-task learning in the setting of multiple linear regression, where some relevant features could be shared across the tasks. Recent research has studied the use of l1/lq norm