# Neural Random Forests

@article{Biau2016NeuralRF,
  title={Neural Random Forests},
  author={G{\'e}rard Biau and Erwan Scornet and Johannes Welbl},
  journal={Sankhya A},
  year={2016},
  volume={81},
  pages={347--386}
}
• Published 25 April 2016
• Computer Science
• Sankhya A
Given an ensemble of randomized regression trees, it is possible to restructure them as a collection of multilayered neural networks with particular connection weights. Following this principle, we reformulate the random forest method of Breiman (2001) into a neural network setting, and in turn propose two new hybrid procedures that we call neural random forests. Both predictors exploit prior knowledge of regression trees for their architecture, have less parameters to tune than standard…
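The correspondence sketched in the abstract can be made concrete: each internal tree node becomes a first-hidden-layer unit testing its split, each leaf becomes a second-hidden-layer unit that fires only when every split along its path agrees, and the output layer sums the leaf values. Below is a minimal illustrative sketch on a hypothetical hand-built tree (the splits and leaf values are invented for illustration; the paper's actual construction replaces the hard sign with tanh activations so the connection weights become trainable):

```python
# Hypothetical hand-built regression tree:
# node 0: x[0] <= 0.5 ? -> leaf (1.0) : node 1
# node 1: x[1] <= 0.3 ? -> leaf (2.0) : leaf (3.0)
SPLITS = [(0, 0.5), (1, 0.3)]          # (feature, threshold) per internal node
LEAVES = [                             # (leaf value, path as (node, direction))
    (1.0, [(0, -1)]),                  # direction -1 = took the "<=" branch
    (2.0, [(0, +1), (1, -1)]),         # direction +1 = took the ">" branch
    (3.0, [(0, +1), (1, +1)]),
]

def sign(z):                           # hard threshold; z == 0 counts as "<="
    return 1.0 if z > 0 else -1.0

def tree_predict(x):
    """Reference prediction by walking the tree directly."""
    if x[0] <= 0.5:
        return 1.0
    return 2.0 if x[1] <= 0.3 else 3.0

def net_predict(x):
    """Same prediction via the two-hidden-layer encoding.

    Layer 1: one +/-1 unit per internal node (which side of its split).
    Layer 2: one +/-1 unit per leaf, 'on' iff every split along the
             leaf's path agreed with the path direction.
    Output:  weighted sum of leaf values over the 'on' leaf.
    """
    u = [sign(x[j] - t) for j, t in SPLITS]      # first hidden layer
    y = 0.0
    for value, path in LEAVES:                   # second hidden layer
        s = sum(d * u[k] for k, d in path)       # agreements minus disagreements
        on = sign(s - (len(path) - 0.5))         # +1 iff all splits agree
        y += value * (on + 1) / 2                # output layer
    return y

# The network reproduces the tree exactly on random inputs.
import random
random.seed(0)
for _ in range(200):
    x = [random.random(), random.random()]
    assert net_predict(x) == tree_predict(x)
```

With sharp thresholds the network is an exact re-encoding of the tree; smoothing them, as the paper does, is what makes the two hybrid predictors trainable by gradient descent.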
## Citations (88)
• Computer Science, 2019: A probabilistic construct is proposed that exploits the idea of a node's unexplained potential to decide where to expand further, mimicking standard tree construction in a neural network setting, alongside a modified gradient ascent that first locally optimizes an expanded node before a global optimization.
• Computer Science, J. Mach. Learn. Res., 2020: This work introduces another decision forest, "Sparse Projection Oblique Randomer Forests" (SPORF), which typically improves performance over existing decision forests while remaining computationally efficient, scalable, and interpretable. It establishes a generalized decision forest framework, Randomer Forests (RerFs), which encompasses RFs and many previously proposed decision forest algorithms as particular instantiations, and proposes a default instantiation with theoretical and experimental evidence motivating its use.
• Computer Science, ArXiv, 2019: The final model, Hammock, is surprisingly simple: a fully connected two-layer neural network whose input is quantized and one-hot encoded; it can achieve performance similar to that of Gradient Boosted Decision Trees.
• Computer Science, IEEE Transactions on Neural Networks and Learning Systems, 2019: By combining the user-friendly features of decision tree models with the flexibility and scalability of deep neural networks, DJINN is an attractive algorithm for training predictive models on a wide range of complex data sets.
• Computer Science, ArXiv, 2019: A new method, Random Forest Imitation, generates data from a random forest and learns a neural network that imitates it without any additional training data, yielding very efficient neural networks that learn the decision boundaries of the forest.
• Computer Science, J. Mach. Learn. Res., 2020: It is demonstrated that the additional randomness injected into individual trees serves as a form of implicit regularization, making random forests an ideal model in low signal-to-noise ratio (SNR) settings.
• Computer Science, Informatica, 2020: A Neural Random Forest (NeuRF) and a Neural Deep Forest (NeuDF), classification algorithms that combine an ensemble of decision trees with neural networks, are proposed in the paper. The main idea…
• Computer Science, ACPR, 2019: This paper introduces a mechanism for designing the architecture of a sparse multi-layer perceptron network for classification, called ForestNet, and exhibits very promising results: the sparse networks performed similarly to their fully connected counterparts with a reduction of more than 98% of connections in the visual tasks.
• Computer Science, SDM, 2017: It is shown that a carefully designed neural network with random forest structure can have better generalization ability and be more powerful than random forests, because the back-propagation algorithm amounts to a more powerful and generalized way of constructing a decision tree.

## References

Showing 10 of 41 references.

• Computer Science, 2015: The present article reviews the most recent theoretical and methodological developments for random forests, with special attention given to the selection of parameters, the resampling mechanism, and variable importance measures.
• Computer Science, 2015: A step forward in forest exploration is taken by proving a consistency result for Breiman's original algorithm in the context of additive regression models, shedding interesting light on how random forests can adapt to sparsity.
• Formalizing a connection between random forests and ANNs allows exploiting the former to initialize the latter; parameter optimization within the ANN framework yields models intermediate between RF and ANN that achieve better performance than both on the majority of the UCI datasets used for benchmarking.
• Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.
• It is shown here that random forests provide information about the full conditional distribution of the response variable, not only about the conditional mean, in order to be competitive in terms of predictive power.
• Computer Science, ECML/PKDD, 2011: This work proposes to employ "oblique" random forests (oRF) built from multivariate trees which explicitly learn optimal split directions at internal nodes using linear discriminative models, rather than using random coefficients as in the original oRF.
• Computer Science, 2006: We develop a Bayesian "sum-of-trees" model where each tree is constrained by a regularization prior to be a weak learner, and fitting and inference are accomplished via an iterative Bayesian…
• How the mapping of decision trees into a multilayer neural network structure can be exploited for the systematic design of a class of layered neural networks, called entropy nets…
• Computer Science, 2014: The random forest is clearly the best family of classifiers (3 of the 5 best classifiers are RF), followed by SVM (4 classifiers in the top 10), neural networks, and boosting ensembles (5 and 3 members in the top 20, respectively).
• Computer Science, ICML, 2005: A closed-form approximation of this scheme, combined with cross-validation to tune the level of perturbation, is proposed, which yields soft-tree models in a parameter-free way.