Corpus ID: 122685952

Author: Henrik Linusson

The Random Forests ensemble predictor has proven to be well suited for solving a multitude of different prediction problems. In this thesis, we propose an extension to the Random Forest framework …
Multi-objective optimization of ensemble of regression trees using genetic algorithms
  • Qian Wan, R. Pal
  • Mathematics, Computer Science
  • 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP)
  • 2014
The proposed methodology outperforms regular multivariate random forests in terms of correlation coefficients between predicted and experimental sensitivities, and generating the Pareto-optimal front is shown to provide a choice of ensembles for different optimization objectives.
Applying Multi-Output Random Forest Models to Electricity Price Forecast
Predicting electricity prices is a very important issue in modern society, because the associated decision process under uncertainty requires accurate forecasts for the economic agents involved.
Canonical Correlation Forests
This work introduces canonical correlation forests (CCFs), a new decision tree ensemble method for classification that outperforms axis-aligned random forests, other state-of-the-art tree ensemble methods, and all of the 179 popular classifiers considered in a recent extensive survey.
Machine Learning for Beam Based Mobility Optimization in NR
One option for enabling mobility between 5G nodes is to use a set of area-fixed reference beams in the downlink direction from each node. To save power, these reference beams should be turned on only …
Learning an object tracker with a random forest and simulated measurements
This paper considers random forest regression and applies it to an object tracking problem using bearing-range measurements; the performance of the random forest tracker is compared to a Kalman smoother and a particle filter.
Machine Learning-Aided Security Constrained Optimal Power Flow
Though many approaches have been proposed in recent decades to solve the full AC optimal power flow (OPF) problem, efficiently finding the solution still remains challenging due to its highly nonlinear nature.
A learning-augmented approach for AC optimal power flow
Due to the high nonlinearity of AC optimal power flow (OPF), numerous efforts have been made in recent decades to find efficient solution methods. Machine learning (ML) has proven to significantly …
Confidence-Weighted Local Expression Predictions for Occlusion Handling in Expression Recognition and Action Unit Detection
This work proposes to train random forests upon spatially-constrained random local subspaces of the face to form a categorical expression-driven high-level representation that is combined to describe categorical facial expressions as well as action units (AUs).
A walk through randomness for face analysis in unconstrained environments. (Etude des méthodes aléatoires pour l'analyse de visage en environnement non contraint)
This thesis improves on the very recent Neural Decision Forests framework with both a simplified training procedure and a new greedy evaluation procedure that dramatically improves evaluation runtime, with applications to online learning; it also applies deep convolutional neural network features to facial expression recognition as well as facial feature point alignment.
Application of advanced learning methods for detecting network configuration in a smart water distribution system
An IoT-based model with deep learning and automated test scenarios is extended to improve the quality of service in water distribution systems, showing the effective application and comparison of learning techniques on experimental data in this domain.


Concurrent Learning of Large-Scale Random Forests
The random forest algorithm belongs to the class of ensemble learning methods that are embarrassingly parallel, i.e., the learning task can be straightforwardly divided into subtasks that can be solved …
Multivariate random forests
The genesis of, and motivation for, the random forest paradigm as an outgrowth from earlier tree‐structured techniques is outlined, and an illustrative example from ecology is provided that showcases the improved fit and enhanced interpretation afforded by the random forest framework.
Random Forests
  • L. Breiman
  • Mathematics, Computer Science
  • Machine Learning
  • 2001
Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.
Classification and Regression Trees
This chapter discusses tree classification in the context of medicine, where right-sized trees and honest estimates are considered, and Bayes rules and partitions are used as guides to optimal pruning.
The Random Subspace Method for Constructing Decision Forests
  • T. Ho
  • Mathematics, Computer Science
  • IEEE Trans. Pattern Anal. Mach. Intell.
  • 1998
A method to construct a decision tree based classifier is proposed that maintains highest accuracy on training data and improves on generalization accuracy as it grows in complexity.
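The core device of the random subspace method — giving each base learner only a randomly drawn subset of the feature dimensions — can be sketched in a few lines of stdlib Python (the function names and parameters here are illustrative assumptions, not from Ho's paper):

```python
import random

def random_subspaces(n_features, subspace_dim, n_learners, seed=0):
    """Draw one random feature-index subset per base learner; each
    learner is later trained only on its own subspace."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subspace_dim))
            for _ in range(n_learners)]

def project(row, feature_idx):
    """Restrict one data row to the chosen subspace."""
    return [row[i] for i in feature_idx]

subspaces = random_subspaces(n_features=8, subspace_dim=3, n_learners=5)
row = list(range(8))  # toy data row whose value at index i is i
projected = [project(row, s) for s in subspaces]
```

Predictions of the per-subspace learners are then combined, e.g. by voting, exactly as in other ensemble methods.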
Learning Multiple Tasks with Boosted Decision Trees
This work addresses the problem of multi-task learning with no label correspondence among tasks by modifying MT-Adaboost to combine multi-task decision trees as weak learners and revising the information gain rule for learning decision trees in the multi-task setting.
Bagging predictors
Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
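Bagging as summarized above — bootstrap resampling plus aggregation of the base predictors — can be illustrated with a minimal stdlib-Python sketch using a 1-D regression stump as the base learner (all names and the toy data are illustrative assumptions, not Breiman's code):

```python
import random

def fit_stump(xs, ys):
    """Fit a 1-D regression stump: pick the split threshold whose
    left/right means minimize the squared error."""
    best = None
    pairs = sorted(zip(xs, ys))
    for i in range(1, len(pairs)):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for x, y in pairs if x <= t]
        right = [y for x, y in pairs if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    if best is None:  # degenerate bootstrap sample: every x identical
        m = sum(ys) / len(ys)
        return lambda x: m
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def bagged_predictor(xs, ys, n_estimators=25, seed=0):
    """Bagging: fit each stump on a bootstrap resample, average predictions."""
    rng = random.Random(seed)
    n = len(xs)
    stumps = []
    for _ in range(n_estimators):
        idx = [rng.randrange(n) for _ in range(n)]  # sample with replacement
        stumps.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: sum(s(x) for s in stumps) / len(stumps)

# Toy step function: y = 0 below x = 5, y = 1 above.
xs = list(range(10))
ys = [0.0] * 5 + [1.0] * 5
model = bagged_predictor(xs, ys)
```

Averaging over bootstrap resamples smooths the hard step of any single stump, which is the variance-reduction effect the abstract reports.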
Classification Trees for Multiple Binary Responses
Multiple binary responses arise from many applications for which an array of health-related symptoms are of primary interest. These symptoms are usually correlated. I generalize the …
Boosting Multi-Task Weak Learners with Applications to Textual and Social Data
A novel multi-task learning algorithm called MT-Adaboost is proposed: it extends the Adaboost algorithm to the multi-task setting, using as multi-task weak classifier a multi-task decision stump, which allows learning different dependencies between tasks for different regions of the learning space.
Multi-Label Classification: An Overview
The task of multi-label classification is introduced, the sparse related literature is organized into a structured presentation, and comparative experimental results for several multi-label classification methods are presented.
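One of the problem-transformation approaches covered in that overview is binary relevance: the multi-label dataset is split into one binary classification dataset per label. A minimal sketch (the data layout here is my own assumption, not from the paper):

```python
def binary_relevance(rows, labels):
    """Build one binary dataset per label: an example is positive for a
    label exactly when that label appears in its label set."""
    return {label: [(x, 1 if label in y else 0) for x, y in rows]
            for label in labels}

# Each row: (feature vector, set of labels attached to the example).
rows = [([1.0, 2.0], {"a", "b"}),
        ([3.0, 4.0], {"b"})]
datasets = binary_relevance(rows, ["a", "b"])
```

A separate binary classifier is then trained on each of these datasets, and a new instance's predicted label set is the union of the labels whose classifier fires.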