OK3: Méthode d’arbres à sortie noyau pour la prédiction de sorties structurées et l’apprentissage de noyau
@inproceedings{Geurts2006OK3MD,
  title={OK3: M{\'e}thode d’arbres {\`a} sortie noyau pour la pr{\'e}diction de sorties structur{\'e}es et l’apprentissage de noyau},
  author={Pierre Geurts and Louis Wehenkel and Florence d'Alch{\'e}-Buc},
  year={2006}
}
In this article, we propose an extension of tree-based methods to the prediction of structured outputs. This extension is based on the use of a kernel over the outputs, which allows these methods to build a tree under the sole condition that a kernel can be defined on the output space. The algorithm, called OK3 (for output kernel trees), generalizes classification and regression trees as well as tree-based ensemble methods. It inherits several…
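Since the abstract only sketches the idea, here is a minimal Python sketch (not the authors' code) of the kernel trick that makes OK3 possible: the variance of a set of outputs in the feature space induced by the output kernel, and hence the variance reduction used to score candidate splits, can be computed from Gram-matrix entries alone. All function names are illustrative.

```python
import numpy as np

def output_variance(K, idx):
    # Variance of the outputs indexed by `idx` in the feature space
    # induced by the output kernel, from Gram entries alone:
    # var = mean_i k(y_i, y_i) - mean_{i,j} k(y_i, y_j)
    sub = K[np.ix_(idx, idx)]
    n = len(idx)
    return np.trace(sub) / n - sub.sum() / n**2

def variance_reduction(K, idx, left, right):
    # Split score: decrease in output variance when the samples in
    # `idx` are partitioned into `left` and `right` children.
    n = len(idx)
    return (output_variance(K, idx)
            - len(left) / n * output_variance(K, left)
            - len(right) / n * output_variance(K, right))

# Toy check: with a linear kernel on vector outputs this reduces to
# the usual multi-output regression-tree variance reduction.
rng = np.random.default_rng(0)
Y = rng.normal(size=(6, 3))   # six outputs, here plain vectors
K = Y @ Y.T                   # output Gram matrix
print(variance_reduction(K, list(range(6)), [0, 1, 2], [3, 4, 5]))
```

Nothing in the split search ever touches an explicit output representation, which is why a kernel on the output space is the only requirement.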
One Citation
Contribution de l'apprentissage par simulation à l'auto-adaptation des systèmes de production
- 2015
To remain efficient and competitive, production systems must be able to adapt in order to cope with changes such as evolving customer demand. They are…
References (showing 1-10 of 23)
A general regression technique for learning transductions
- Computer Science, ICML
- 2005
A novel and conceptually cleaner formulation of kernel dependency estimation provides a simple framework for estimating the regression coefficients, and an efficient algorithm for computing the pre-image from the regression coefficients extends the applicability of kernel dependency estimation to output sequences.
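The summary above is terse, so a hedged sketch may help. It assumes kernel ridge regression for the coefficients and the training outputs as the candidate set for the pre-image; those choices and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def kde_fit(Kx, lam=1e-2):
    # Kernel ridge coefficients: c(x) = (Kx + lam*I)^-1 kx(x),
    # so precompute the inverse factor once.
    n = Kx.shape[0]
    return np.linalg.solve(Kx + lam * np.eye(n), np.eye(n))

def kde_predict(C, kx_new, Ky):
    # Regressed point in output feature space: sum_i c_i * psi(y_i).
    c = C @ kx_new
    # Pre-image over candidates j (here the training outputs):
    # ||psi(y_j) - sum_i c_i psi(y_i)||^2 = Ky[j,j] - 2*(Ky@c)[j] + const
    dists = np.diag(Ky) - 2.0 * (Ky @ c)
    return int(np.argmin(dists))  # index of the best candidate output

# Toy usage with linear kernels (illustrative only).
rng = np.random.default_rng(1)
X, Y = rng.normal(size=(8, 4)), rng.normal(size=(8, 3))
Kx, Ky = X @ X.T, Y @ Y.T
C = kde_fit(Kx)
print(kde_predict(C, Kx[:, 0], Ky))  # candidate for training input 0
```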
Ranking with Predictive Clustering Trees
- Computer Science, ECML
- 2002
This work proposes predictive clustering trees for ranking; unlike existing ranking approaches, which are instance-based, this approach also allows for an explanation of the predicted rankings.
Protein network inference from multiple genomic data: a supervised approach
- Biology, Computer Science, ISMB/ECCB
- 2004
A new method is presented for inferring protein networks from multiple types of genomic data, based on a variant of kernel canonical correlation analysis; it is shown to outperform unsupervised protein network inference methods.
Extremely randomized trees
- Computer Science, Mathematics, Machine Learning
- 2006
A new tree-based ensemble method for supervised classification and regression problems that strongly randomizes both attribute and cut-point choice when splitting a tree node and, in the extreme case, builds totally randomized trees whose structures are independent of the output values of the learning sample.
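A minimal sketch of the strongly randomized split selection described above, assuming a regression setting scored by variance reduction; `k` stands in for the method's attribute-sample-size parameter, and all names are illustrative.

```python
import numpy as np

def random_split(X, y, k, rng):
    # Extra-Trees-style node split: for each of `k` randomly chosen
    # attributes, draw ONE uniform cut-point between the attribute's
    # min and max, then keep the split with the best variance reduction.
    n, d = X.shape
    best = None
    for a in rng.choice(d, size=min(k, d), replace=False):
        lo, hi = X[:, a].min(), X[:, a].max()
        if lo == hi:
            continue
        t = rng.uniform(lo, hi)
        mask = X[:, a] < t
        if mask.all() or not mask.any():
            continue
        score = (y.var()
                 - mask.mean() * y[mask].var()
                 - (~mask).mean() * y[~mask].var())
        if best is None or score > best[0]:
            best = (score, a, t)
    return best  # (variance_reduction, attribute, cut_point) or None

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
print(random_split(X, y, k=3, rng=rng))
```

Drawing a single random cut-point per attribute, instead of optimizing over all cut-points, is what makes the trees cheap to grow and weakly correlated with one another.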
Large Margin Methods for Structured and Interdependent Output Variables
- Computer Science, J. Mach. Learn. Res.
- 2005
This paper generalizes the well-known notion of a separation margin, derives a corresponding maximum-margin formulation, and presents a cutting-plane algorithm that solves the optimization problem in polynomial time for a large class of problems.
Kernel k-means: spectral clustering and normalized cuts
- Computer Science, KDD
- 2004
The generality of the weighted kernel k-means objective function is shown, and the spectral clustering objective of normalized cut is derived as a special case, leading to a novel weighted kernel k-means algorithm that monotonically decreases the normalized cut.
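The centroid distances in kernel k-means follow directly from the Gram matrix, which is the trick the summary alludes to. Below is a minimal unweighted sketch (the paper's algorithm is the weighted variant); names are illustrative.

```python
import numpy as np

def kernel_kmeans(K, k, n_iter=20, seed=0):
    # Unweighted kernel k-means on a precomputed Gram matrix K.
    # Squared distance from point i to the centroid of cluster c:
    # K[i,i] - 2*mean_{j in c} K[i,j] + mean_{j,l in c} K[j,l]
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                dist[:, c] = np.inf   # empty cluster: never assigned
                continue
            within = K[np.ix_(idx, idx)].mean()
            dist[:, c] = np.diag(K) - 2 * K[:, idx].mean(axis=1) + within
        new = dist.argmin(axis=1)
        if (new == labels).all():
            break
        labels = new
    return labels

X = np.random.default_rng(1).normal(size=(30, 2))
print(kernel_kmeans(X @ X.T, k=3))   # linear kernel = ordinary k-means
```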
Tree-Structured Methods for Longitudinal Data
- Mathematics
- 1992
The thrust of tree techniques is the extraction of meaningful subgroups characterized by common covariate values and homogeneous outcome. For longitudinal data, this homogeneity can pertain…
Learning structured prediction models: a large margin approach
- Computer Science, ICML
- 2005
This work considers large-margin estimation in a broad range of prediction models where inference involves solving combinatorial optimization problems (for example, weighted graph cuts or matchings), relying on the expressive power of convex optimization to compactly capture inference or solution optimality in structured prediction models.
Bagging Predictors
- Computer Science, Machine Learning
- 1996
Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
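A compact sketch of the bagging procedure this reference evaluates: fit each base model on a bootstrap resample of the training set and average the predictions. scikit-learn's DecisionTreeRegressor stands in for the base learner; function names are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bagged_trees(X, y, n_estimators=25, seed=0):
    # Bagging: each tree sees a bootstrap sample (drawn with
    # replacement, same size as the training set).
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(len(X), size=len(X))   # bootstrap resample
        models.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
    return models

def predict(models, X):
    # Aggregate by averaging (majority vote would be used for
    # classification).
    return np.mean([m.predict(X) for m in models], axis=0)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = X[:, 0] + rng.normal(scale=0.1, size=200)
models = bagged_trees(X, y)
print(predict(models, X[:5]))
```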