• Corpus ID: 59158841

Stochastic Gradient Trees

@article{Gouk2019StochasticGT,
  title={Stochastic Gradient Trees},
  author={Henry Gouk and Bernhard Pfahringer and Eibe Frank},
  journal={ArXiv},
  year={2019},
  volume={abs/1901.07777}
}
We present an algorithm for learning decision trees using stochastic gradient information as the source of supervision. In contrast to previous approaches to gradient-based tree learning, our method operates in the incremental learning setting rather than the batch learning setting, and does not make use of soft splits or require the construction of a new tree for every update. We demonstrate how one can apply these decision trees to different problems by changing only the loss function, using… 
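The core idea in the abstract, leaves that learn directly from stochastic gradient information and can be repurposed by changing only the loss function, can be sketched as follows. This is a minimal illustration assuming a squared-error loss, not the paper's exact algorithm; the names `GradientLeaf` and `split_gain` are illustrative.

```python
class GradientLeaf:
    """Single leaf that learns from a stream of gradient information.

    Sketch only: the leaf accumulates first- and second-order derivatives
    of the loss, evaluated at its current base prediction, and predicts
    via a Newton step on those statistics. Swapping the loss (and hence
    the two derivative formulas below) adapts it to a different problem.
    """

    def __init__(self):
        self.base = 0.0      # prediction the gradients are evaluated at
        self.grad_sum = 0.0  # sum of dL/dpred over observed examples
        self.hess_sum = 0.0  # sum of d2L/dpred2 over observed examples

    def learn_one(self, y):
        # Squared-error loss L(p, y) = (p - y)^2 / 2 at p = self.base:
        # first derivative is p - y, second derivative is 1.
        self.grad_sum += self.base - y
        self.hess_sum += 1.0

    def predict(self):
        # Newton step -G/H applied to the base prediction.
        if self.hess_sum == 0.0:
            return self.base
        return self.base - self.grad_sum / self.hess_sum


def split_gain(g_left, h_left, g_right, h_right):
    """Second-order gain of keeping two child statistics separate rather
    than pooled; gradient-based tree learners use criteria of this shape
    to choose splits from accumulated gradient statistics."""
    def score(g, h):
        return g * g / h if h > 0.0 else 0.0
    pooled = score(g_left + g_right, h_left + h_right)
    return 0.5 * (score(g_left, h_left) + score(g_right, h_right) - pooled)
```

Streaming the targets 1, 2, 3 through a fresh leaf makes `predict()` return their mean, and a candidate split that separates negative-gradient examples from positive-gradient ones receives a positive `split_gain`.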


Learning Accurate Decision Trees with Bandit Feedback via Quantized Gradient Descent
TLDR
This work proposes a reformulation of the tree learning problem that provides better-conditioned gradients, leverages successful deep network learning techniques such as overparameterization and straight-through estimators, and allows the solution to be deployed in disparate tree learning settings such as supervised batch learning and online bandit-feedback-based learning.
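The straight-through estimator mentioned in this summary is a generic trick for backpropagating through hard, non-differentiable decisions such as a quantized split. The sketch below shows the general pattern, not the cited paper's specific scheme; function names and the clipping interval are illustrative.

```python
def hard_split(x):
    """Forward pass: a quantized (hard) routing decision, e.g. which
    side of a split an example is sent to."""
    return 1.0 if x > 0.0 else 0.0


def hard_split_grad(x, upstream_grad):
    """Backward pass with a straight-through estimator: the true
    gradient of the step function is zero almost everywhere, so we
    pretend the quantizer was the identity and pass the upstream
    gradient through, clipped to a region near the threshold where
    that surrogate is reasonable (interval chosen for illustration)."""
    return upstream_grad if -1.0 <= x <= 1.0 else 0.0
```

In the forward direction examples are routed hard, so the learned model is a genuine tree; only the gradient computation uses the smooth surrogate.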
Boosted Rule Learning for Multi-Label Classification using Stochastic Gradient Descent
  • Computer Science
  • 2021
TLDR
This work proposes and develops an extension of the original algorithm that facilitates the modification of already-learned rules, and investigates whether these concepts also work in a stochastic gradient descent boosted rule learner for multi-label classification, potentially yielding better overall models with less complex rules than were required previously.
Using dynamical quantization to perform split attempts in online tree regressors
Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic Regression
TLDR
This work proposes Probabilistic Gradient Boosting Machines (PGBM), a method to create probabilistic predictions with a single ensemble of decision trees in a computationally efficient manner and empirically demonstrates the advantages of PGBM compared to existing state-of-the-art methods.
Online Multi-target regression trees with stacked leaf models
TLDR
This work proposes a novel online learning strategy for multi-target regression in data streams, called Stacked Single-target Hoeffding Tree (SST-HT), which extends an existing online decision tree learning algorithm to explore inter-target dependencies while making predictions.
Dynamic Model Tree for Interpretable Data Stream Learning
TLDR
The novel framework, called Dynamic Model Tree, satisfies desirable consistency and minimality properties and can drastically reduce the number of splits required by existing incremental decision trees, making it a powerful online learning framework that contributes to more lightweight and interpretable machine learning in data streams.
Explaining deep reinforcement learning decisions in complex multiagent settings: towards enabling automation in air traffic flow management
TLDR
The paper presents how major challenges of scalability and complexity are addressed, and provides results from evaluation tests that show the abilities of models to provide high-quality solutions and high-fidelity explanations.

References

SHOWING 1-10 OF 41 REFERENCES
Adaptive Learning from Evolving Data Streams
TLDR
A method for developing algorithms that adaptively learn from data streams that drift over time, based on placing change detectors and estimator modules at the right points and choosing implementations with theoretical guarantees, so that those guarantees extend to the resulting adaptive learning algorithm.
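One simple way such a change detector can work is to compare the error statistics of a reference window against a recent window. The sketch below is illustrative only; it is not the adaptive windowing (ADWIN) technique associated with this line of work, and the class name, window size, and threshold are assumptions.

```python
class SimpleDriftDetector:
    """Illustrative two-window change detector (not ADWIN): keeps a
    reference window and a sliding recent window of a monitored signal
    (e.g. per-example error) and signals drift when their means differ
    by more than a fixed threshold."""

    def __init__(self, window=50, threshold=0.3):
        self.window = window
        self.threshold = threshold
        self.reference = []  # frozen once full
        self.recent = []     # sliding window of the latest values

    def add(self, x):
        """Feed one observation; return True if drift is signalled."""
        if len(self.reference) < self.window:
            self.reference.append(x)
            return False
        self.recent.append(x)
        if len(self.recent) > self.window:
            self.recent.pop(0)
        if len(self.recent) < self.window:
            return False
        ref_mean = sum(self.reference) / len(self.reference)
        rec_mean = sum(self.recent) / len(self.recent)
        return abs(rec_mean - ref_mean) > self.threshold
```

A real adaptive learner would reset or rebuild its model whenever the detector fires, which is how guarantees on the detector carry over to the learner.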
Beyond Trees: Adopting MITI to Learn Rules and Ensemble Classifiers for Multi-Instance Data
TLDR
This paper revisits the basic algorithm of MITI and considers the effect of parameter settings on classification accuracy using several benchmark datasets, and presents randomized algorithm variants that enable the algorithm to generate ensemble classifiers.
Speeding Up and Boosting Diverse Density Learning
TLDR
This paper revisits the idea of performing multi-instance classification based on a point-and-scaling concept, searching for the point in instance space with the highest diverse density, and uses simple variants of existing algorithms to find diverse density maxima more efficiently.
Extremely Fast Decision Tree
TLDR
Hoeffding Anytime Tree produces the asymptotic batch tree in the limit, is naturally resilient to concept drift, and can be used as a higher-accuracy replacement for Hoeffding Tree in most scenarios, at a small additional computational cost.
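Hoeffding-style trees decide when enough stream examples have been seen to commit to a split using the Hoeffding bound: split once the observed gain advantage of the best attribute over the runner-up exceeds the bound. A sketch of that standard test (parameter names are illustrative):

```python
import math


def hoeffding_bound(value_range, delta, n):
    """With probability at least 1 - delta, the sample mean of n
    observations of a variable with range `value_range` is within this
    epsilon of the true mean (Hoeffding's inequality)."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))


def should_split(best_gain, second_gain, value_range, delta, n):
    """Split when the best attribute's observed gain beats the
    runner-up's by more than the bound, so the choice is, with high
    probability, the same one a batch learner would make."""
    return best_gain - second_gain > hoeffding_bound(value_range, delta, n)
```

As n grows the bound shrinks toward zero, so even near-ties are eventually resolved; many implementations add a tie-breaking threshold for exactly that case.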
Multi-instance tree learning
TLDR
A novel algorithm for decision tree learning in the multi-instance setting, as originally defined by Dietterich et al., is introduced, and it is shown that the resulting system outperforms existing multi-instance decision tree learners.
Greedy function approximation: A gradient boosting machine.
TLDR
A general gradient descent boosting paradigm is developed for additive expansions based on any fitting criterion, and specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification.
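For the least-squares case named in this summary, the gradient boosting paradigm reduces to repeatedly fitting a small learner to the current residuals, which are the negative gradients of the squared loss. A minimal sketch with one-dimensional regression stumps (all names and hyperparameters here are illustrative, not Friedman's reference implementation):

```python
def fit_stump(xs, targets):
    """Least-squares regression stump on 1-D inputs: picks the threshold
    minimizing squared error with one constant prediction per side."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, targets) if x <= t]
        right = [r for x, r in zip(xs, targets) if x > t]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean


def gradient_boost(xs, ys, rounds=50, lr=0.5):
    """Least-squares gradient boosting: each round fits a stump to the
    residuals (the negative gradient of the squared loss) and adds a
    shrunken copy of it to the additive model."""
    base = sum(ys) / len(ys)
    stumps = []
    preds = [base] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        h = fit_stump(xs, residuals)
        stumps.append(h)
        preds = [p + lr * h(x) for p, x in zip(preds, xs)]
    return lambda x: base + sum(lr * h(x) for h in stumps)
```

Changing the residual computation to the negative gradient of another loss (absolute deviation, Huber, logistic) yields the other algorithms the paper derives, which is the sense in which the paradigm is general.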
Accurate Ensembles for Data Streams: Combining Restricted Hoeffding Trees using Stacking
TLDR
The success of simple methods for classification shows that it is often not necessary to model complex attribute interactions to obtain good classification accuracy on practical problems, so an ensemble of Hoeffding trees that are each limited to a small subset of attributes is proposed.
Deep Neural Decision Trees
TLDR
This work presents Deep Neural Decision Trees (DNDT) -- tree models realised by neural networks, which can be easily implemented in NN toolkits, and trained with gradient descent rather than greedy splitting.
Deep Forest: Towards An Alternative to Deep Neural Networks
In this paper, we propose gcForest, a decision tree ensemble approach with performance highly competitive to deep neural networks in a broad range of tasks. In contrast to deep neural networks which…
Online Multi-target regression trees with stacked leaf models
TLDR
This work proposes a novel online learning strategy for multi-target regression in data streams, called Stacked Single-target Hoeffding Tree (SST-HT), which extends an existing online decision tree learning algorithm to explore inter-target dependencies while making predictions.