Neural Random Forests
@article{Biau2016NeuralRF, title={Neural Random Forests}, author={G{\'e}rard Biau and Erwan Scornet and Johannes Welbl}, journal={Sankhya A}, year={2016}, volume={81}, pages={347--386} }
Given an ensemble of randomized regression trees, it is possible to restructure them as a collection of multilayered neural networks with particular connection weights. Following this principle, we reformulate the random forest method of Breiman (2001) into a neural network setting, and in turn propose two new hybrid procedures that we call neural random forests. Both predictors exploit prior knowledge of regression trees for their architecture, have fewer parameters to tune than standard…
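The construction can be sketched concretely (a minimal illustration assuming scikit-learn's tree internals, not the authors' code): the first hidden layer evaluates every split of a fitted tree, the second hidden layer identifies the leaf the input falls in, and the output layer reads off that leaf's value.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def _subtree_contains(t, node, target):
    """True if `target` lies in the subtree rooted at `node`."""
    if node == -1:
        return False
    if node == target:
        return True
    return (_subtree_contains(t, t.children_left[node], target)
            or _subtree_contains(t, t.children_right[node], target))

def tree_to_network(tree: DecisionTreeRegressor):
    t = tree.tree_
    inner = np.flatnonzero(t.children_left != -1)    # internal node ids
    leaves = np.flatnonzero(t.children_left == -1)   # leaf node ids
    pos = {n: i for i, n in enumerate(inner)}

    # Hidden layer 1: unit i outputs +1 if x[feature_i] > threshold_i, else -1.
    W1 = np.zeros((t.n_features, len(inner)))
    b1 = np.zeros(len(inner))
    for n in inner:
        W1[t.feature[n], pos[n]] = 1.0
        b1[pos[n]] = -t.threshold[n]

    # Hidden layer 2: one unit per leaf, wired to the splits on its
    # root-to-leaf path (+1 for a right turn, -1 for a left turn); the bias
    # makes the unit positive only when every path condition holds.
    W2 = np.zeros((len(inner), len(leaves)))
    b2 = np.zeros(len(leaves))
    for j, leaf in enumerate(leaves):
        node, depth = 0, 0
        while node != leaf:
            right = _subtree_contains(t, t.children_right[node], leaf)
            W2[pos[node], j] = 1.0 if right else -1.0
            node = t.children_right[node] if right else t.children_left[node]
            depth += 1
        b2[j] = -(depth - 0.5)

    return W1, b1, W2, b2, t.value[leaves, 0, 0]     # leaf values for output layer

def predict(X, W1, b1, W2, b2, leaf_values):
    h1 = np.where(X @ W1 + b1 > 0, 1.0, -1.0)        # split indicators
    h2 = (h1 @ W2 + b2 > 0).astype(float)            # exactly one active leaf unit
    return h2 @ leaf_values
```

The paper's hybrid procedures then replace the hard threshold units with smooth activations so that the weights, initialized from the trees as above, can be fine-tuned by gradient descent.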
88 Citations
Adaptive Bayesian Reticulum
- Computer Science
- 2019
A probabilistic construct is proposed that exploits the idea of a node's unexplained potential in order to decide where to expand further, mimicking the standard tree construction in a Neural Network setting, alongside a modified gradient ascent that first locally optimizes an expanded node before a global optimization.
Sparse Projection Oblique Randomer Forests
- Computer Science, J. Mach. Learn. Res.
- 2020
This work introduces yet another decision forest, called "Sparse Projection Oblique Randomer Forests" (SPORF), which typically yields improved performance over existing decision forests while maintaining computational efficiency, scalability, and interpretability.
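The kind of candidate split SPORF searches over can be sketched as follows (a hedged illustration, not the reference implementation; the `density` parameter and ±1 coefficients are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_projections(n_features, n_candidates, density=0.1):
    """Each column mixes a few features with random +/-1 coefficients."""
    A = np.zeros((n_features, n_candidates))
    for j in range(n_candidates):
        nnz = max(1, rng.binomial(n_features, density))
        idx = rng.choice(n_features, size=nnz, replace=False)
        A[idx, j] = rng.choice([-1.0, 1.0], size=nnz)
    return A

# At a node, the projected data X @ A is then searched for the best
# single-coordinate threshold, exactly as an axis-aligned tree would.
X = rng.normal(size=(100, 20))
Z = X @ sparse_projections(20, n_candidates=5)
```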
Generalized Linear Splitting Rules in Decision Forests
- Computer Science
- 2018
This work establishes a generalized decision forest framework called Randomer Forests (RerFs), which encompasses RFs and many previously proposed decision forest algorithms as particular instantiations, proposes a default instantiation, and provides theoretical and experimental evidence motivating its use.
Gradient Boosted Decision Tree Neural Network
- Computer Science, ArXiv
- 2019
The final model, Hammock, is surprisingly simple: a fully connected two-layer neural network whose input is quantized and one-hot encoded; it can achieve performance similar to that of Gradient Boosted Decision Trees.
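A minimal sketch of the Hammock recipe using standard scikit-learn components (the bin count and hidden-layer width are illustrative assumptions, not the paper's settings):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.neural_network import MLPClassifier

# Quantize each feature into bins, one-hot encode the bin ids, and fit a
# small fully connected network on top of the binary indicators.
hammock = make_pipeline(
    KBinsDiscretizer(n_bins=32, encode="onehot-dense", strategy="quantile"),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
)
# hammock.fit(X_train, y_train): the one-hot layer plays the role of the
# axis-aligned thresholds that a boosted tree ensemble would learn.
```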
Deep Neural Network Initialization With Decision Trees
- Computer Science, IEEE Transactions on Neural Networks and Learning Systems
- 2019
Combining the user-friendly features of decision tree models with the flexibility and scalability of deep neural networks, DJINN is an attractive algorithm for training predictive models on a wide range of complex data sets.
Neural Random Forest Imitation
- Computer Science, ArXiv
- 2019
A new method for generating data from a random forest and learning a neural network that imitates it, without any additional training data; the resulting networks are very efficient and learn the decision boundaries of the random forest.
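The imitation idea can be sketched as plain distillation (an assumption-laden simplification: uniform input sampling and hard labels, whereas the paper's data-generation scheme is more refined):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

def imitate(forest: RandomForestClassifier, n_features, lo, hi, n_samples=50_000):
    """Train a student network on synthetic inputs labeled by the forest."""
    rng = np.random.default_rng(0)
    X_synth = rng.uniform(lo, hi, size=(n_samples, n_features))
    y_soft = forest.predict_proba(X_synth)           # teacher's soft labels
    student = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=300)
    student.fit(X_synth, y_soft.argmax(axis=1))      # hard labels for simplicity
    return student
```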
Randomization as Regularization: A Degrees of Freedom Explanation for Random Forest Success
- Computer Science, J. Mach. Learn. Res.
- 2020
It is demonstrated that the additional randomness injected into individual trees serves as a form of implicit regularization, making random forests an ideal model in low signal-to-noise ratio (SNR) settings.
Improvement of the Deep Forest Classifier by a Set of Neural Networks
- Computer Science, Informatica
- 2020
This paper proposes a Neural Random Forest (NeuRF) and a Neural Deep Forest (NeuDF), classification algorithms that combine an ensemble of decision trees with neural networks. The main idea…
ForestNet - Automatic Design of Sparse Multilayer Perceptron Network Architectures Using Ensembles of Randomized Trees
- Computer Science, ACPR
- 2019
This paper introduces ForestNet, a mechanism for designing the architecture of a sparse multi-layer perceptron network for classification, and exhibits very promising results: the sparse networks performed similarly to their fully connected counterparts while using more than 98% fewer connections on the visual tasks.
Using a Random Forest to Inspire a Neural Network and Improving on It
- Computer Science, SDM
- 2017
It is shown that a carefully designed neural network with random forest structure can have better generalization ability and is more powerful than a random forest, because the back-propagation algorithm amounts to a more general way of constructing a decision tree.
References
SHOWING 1-10 OF 41 REFERENCES
A random forest guided tour
- Computer Science
- 2015
The present article reviews the most recent theoretical and methodological developments for random forests, with special attention given to the selection of parameters, the resampling mechanism, and variable importance measures.
Consistency of Random Forests
- Computer Science
- 2015
A step forward in forest exploration is taken by proving a consistency result for Breiman's original algorithm in the context of additive regression models, which sheds an interesting light on how random forests can nicely adapt to sparsity.
Casting Random Forests as Artificial Neural Networks (and Profiting from It)
- Computer Science, GCPR
- 2014
Formalizing a connection between Random Forests and ANNs allows exploiting the former to initialize the latter; parameter optimization within the ANN framework yields models that are intermediate between RF and ANN and achieve performance better than both on the majority of the UCI datasets used for benchmarking.
Random Forests
- Computer Science, Machine Learning
- 2001
Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.
Quantile Regression Forests
- Computer Science, Mathematics, J. Mach. Learn. Res.
- 2006
It is shown here that random forests provide information about the full conditional distribution of the response variable, not only about the conditional mean, while remaining competitive in terms of predictive power.
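A minimal sketch of the quantile regression forest estimator (assuming numpy arrays and scikit-learn's `apply`; not Meinshausen's implementation): each training response is weighted by how often it shares a leaf with the query point, and the conditional quantile is read off the resulting weighted empirical distribution.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def forest_quantile(forest, X_train, y_train, x_query, q):
    """Weighted empirical q-quantile of y at x_query."""
    train_leaves = forest.apply(X_train)                 # (n_train, n_trees)
    query_leaves = forest.apply(x_query.reshape(1, -1))[0]
    weights = np.zeros(len(y_train))
    for t in range(train_leaves.shape[1]):
        in_leaf = train_leaves[:, t] == query_leaves[t]
        weights[in_leaf] += 1.0 / in_leaf.sum()          # uniform within each leaf
    weights /= train_leaves.shape[1]
    order = np.argsort(y_train)
    cdf = np.cumsum(weights[order])
    idx = min(np.searchsorted(cdf, q), len(y_train) - 1)
    return y_train[order][idx]
```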
On Oblique Random Forests
- Computer Science, ECML/PKDD
- 2011
This work proposes to employ "oblique" random forests (oRF) built from multivariate trees which explicitly learn optimal split directions at internal nodes using linear discriminative models, rather than using random coefficients as in the original oRF.
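A single oblique split can be sketched as follows (logistic regression here is an illustrative stand-in for the linear discriminative model fitted at each node):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def oblique_split(X_node, y_node):
    """Learn a separating direction at a node instead of picking one feature."""
    clf = LogisticRegression(max_iter=1000).fit(X_node, y_node)
    w, b = clf.coef_[0], clf.intercept_[0]
    go_left = X_node @ w + b <= 0.0      # split on the learned hyperplane
    return w, b, go_left
```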
Bayesian Additive Regression Trees
- Computer Science
- 2006
We develop a Bayesian "sum-of-trees" model where each tree is constrained by a regularization prior to be a weak learner, and fitting and inference are accomplished via an iterative Bayesian…
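For reference, the sum-of-trees model takes the form

```latex
Y = \sum_{j=1}^{m} g(x;\, T_j, M_j) + \varepsilon,
\qquad \varepsilon \sim \mathcal{N}(0, \sigma^2)
```

where $T_j$ is the structure of the $j$-th tree, $M_j$ its leaf values, and the regularization prior keeps each $g$ a weak learner.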
Entropy nets: from decision trees to neural networks
- Computer Science, Proc. IEEE
- 1990
How the mapping of decision trees into a multilayer neural network structure can be exploited for the systematic design of a class of layered neural networks, called entropy nets (which have far…
Do we need hundreds of classifiers to solve real world classification problems?
- Computer Science
- 2014
The random forest is clearly the best family of classifiers (3 of the 5 best classifiers are RF), followed by SVM (4 classifiers in the top-10), neural networks and boosting ensembles (5 and 3 members in the top-20, respectively).
Closed-form dual perturb and combine for tree-based models
- Computer Science, ICML
- 2005
A closed-form approximation of this scheme, combined with cross-validation to tune the level of perturbation, is proposed, which yields soft-tree models in a parameter-free way.