Corpus ID: 236956955

Manifold Oblique Random Forests: Towards Closing the Gap on Convolutional Deep Networks

Adam Li, Ronan Perry, Chester Huynh, Tyler M. Tomita, Ronak R. Mehta, Jesús Arroyo, Jesse Patsolic, Benjamin Falk, Joshua T. Vogelstein
Decision forests, in particular random forests and gradient-boosted trees, have demonstrated state-of-the-art accuracy compared to other methods in many supervised learning scenarios. Forests dominate other methods on tabular data, that is, when the feature space is unstructured, so that the signal is invariant to a permutation of the feature indices. However, on structured data lying on a manifold, such as images and time series, deep networks, specifically convolutional deep networks…
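The abstract's distinction between unstructured and manifold-structured features can be made concrete with a sketch of two kinds of split projections: one that picks arbitrary feature indices (tabular setting) and one that reads a contiguous patch of a flattened image, in the spirit of the paper's manifold-aware forests. Function names and the patch rule are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def unstructured_projection(n_features, n_nonzero=3):
    """Sparse projection over arbitrary feature indices (tabular setting):
    the distribution of candidates is invariant to permuting the features."""
    w = np.zeros(n_features)
    idx = rng.choice(n_features, size=n_nonzero, replace=False)
    w[idx] = rng.choice([-1.0, 1.0], size=n_nonzero)
    return w

def patch_projection(height, width, patch=3):
    """Structured projection that sums a contiguous patch of a flattened
    height x width image, exploiting the manifold structure
    (hypothetical illustration of the idea, not the paper's exact rule)."""
    w = np.zeros((height, width))
    r = rng.integers(0, height - patch + 1)
    c = rng.integers(0, width - patch + 1)
    w[r:r + patch, c:c + patch] = 1.0
    return w.ravel()

x = rng.normal(size=(8, 8)).ravel()       # a toy 8x8 "image", flattened
print(unstructured_projection(64) @ x)    # scalar candidate split feature
print(patch_projection(8, 8) @ x)
```

Permuting the pixels of `x` would leave the first projection's distribution unchanged but destroy the locality the second one relies on, which is exactly the gap the paper targets.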

Sparse Projection Oblique Randomer Forests

This work introduces yet another decision forest, called "Sparse Projection Oblique Randomer Forests" (SPORF), which typically yields improved performance over existing decision forests while maintaining computational efficiency, scalability, and interpretability.

Deep Neural Decision Forests

A novel approach that unifies classification trees with the representation learning known from deep convolutional networks by introducing a stochastic, differentiable decision tree model that can be trained end-to-end.
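The key to end-to-end training is making the routing at internal nodes soft, so gradients flow through the whole tree. Below is a minimal numpy sketch of one forward pass of a depth-2 soft decision tree, assuming sigmoid routing and softmax leaf distributions; it illustrates the idea, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_tree_predict(x, W, b, leaf_logits):
    """Depth-2 soft decision tree: each internal node routes left with
    probability sigmoid(w.x + b); the prediction is the leaf class
    distributions weighted by path probabilities. Every step is smooth,
    so the parameters can be trained by gradient descent."""
    d0 = sigmoid(W[0] @ x + b[0])              # root: P(go left)
    d1 = sigmoid(W[1] @ x + b[1])              # left child
    d2 = sigmoid(W[2] @ x + b[2])              # right child
    path = np.array([d0 * d1, d0 * (1 - d1),
                     (1 - d0) * d2, (1 - d0) * (1 - d2)])  # 4 leaves
    leaf_probs = np.exp(leaf_logits)
    leaf_probs /= leaf_probs.sum(axis=1, keepdims=True)    # softmax per leaf
    return path @ leaf_probs                   # mixture over class labels

rng = np.random.default_rng(0)
x = rng.normal(size=5)
W, b = rng.normal(size=(3, 5)), rng.normal(size=3)
leaf_logits = rng.normal(size=(4, 3))          # 4 leaves, 3 classes
p = soft_tree_predict(x, W, b, leaf_logits)
print(p)                                       # a proper class distribution
```

Because the four path probabilities sum to one and each leaf holds a normalized distribution, the output is itself a valid distribution over classes.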

Random Projection Forests

This work introduces a generalization of many existing decision tree methods called "Random Projection Forests" (RPF): any decision forest that uses (possibly data-dependent and random) linear projections. It also introduces a special case, called Lumberjack, which uses very sparse random projections, that is, linear combinations of a small subset of features.
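A very sparse random projection matrix of the kind Lumberjack-style forests draw at each node can be sketched as follows; the function name, `density` parameter, and ±1 coefficients are illustrative assumptions, not the library's API.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_random_projections(n_features, n_projections, density=0.1):
    """Candidate oblique split directions: each column combines a small
    random subset of features with +/-1 coefficients (a hedged sketch of
    the very-sparse-projection idea, not the authors' exact sampler)."""
    A = np.zeros((n_features, n_projections))
    for j in range(n_projections):
        k = max(1, rng.binomial(n_features, density))   # few nonzeros
        idx = rng.choice(n_features, size=k, replace=False)
        A[idx, j] = rng.choice([-1.0, 1.0], size=k)
    return A

X = rng.normal(size=(100, 20))   # toy node data: 100 samples, 20 features
A = sparse_random_projections(20, 5)
Z = X @ A                        # candidate split features at this node
print(Z.shape)                   # (100, 5)
```

Axis-aligned forests are the special case where each column of `A` has exactly one nonzero entry, which is why RPF subsumes many existing tree methods.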

On Oblique Random Forests

This work proposes "oblique" random forests (oRF) built from multivariate trees that explicitly learn optimal split directions at internal nodes using linear discriminative models, rather than the random coefficients of the original oRF.
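Learning a split direction with a linear discriminative model can be sketched with a Fisher-style linear discriminant at a single node: solve the pooled within-class covariance against the difference of class means, then threshold the projection midway between the projected means. This is a minimal illustration of the idea, not the authors' implementation.

```python
import numpy as np

def lda_split(X, y):
    """Oblique split learned by a linear discriminant instead of random
    coefficients (a hypothetical single-node sketch)."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class covariance, regularized for invertibility
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += 1e-6 * np.eye(X.shape[1])
    w = np.linalg.solve(Sw, mu1 - mu0)     # learned split direction
    t = w @ (mu0 + mu1) / 2.0              # threshold between class means
    return w, t

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1.0, 1.0, size=(50, 2)),
               rng.normal(+1.0, 1.0, size=(50, 2))])
y = np.repeat([0, 1], 50)
w, t = lda_split(X, y)
acc = np.mean((X @ w > t) == y)            # purity of this node's split
print(acc)
```

For two well-separated Gaussian classes the learned direction aligns with the line between the class means, so a single oblique split already separates most of the node's samples.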

Decision Forests: A Unified Framework for Classification, Regression, Density Estimation, Manifold Learning and Semi-Supervised Learning

A unified, efficient model of random decision forests applicable to a range of machine learning, computer vision, and medical image analysis tasks is presented, and its relative advantages and disadvantages are discussed.

Consistency of Random Forests

This work takes a step forward in forest exploration by proving a consistency result for Breiman's original algorithm in the context of additive regression models, shedding light on how random forests can adapt to sparsity.

Neural Random Forests

This work reformulates the random forest method of Breiman (2001) in a neural network setting and proposes two new hybrid procedures, called neural random forests, in which both predictors exploit the architecture of regression trees as prior knowledge.

Adaptive Concentration of Regression Trees, with Application to Random Forests

This approach breaks tree training into a model selection phase, which chooses the splits, followed by a model fitting phase, which finds the best regression model consistent with those splits, and shows that the fitted regression tree concentrates around the optimal predictor with the same splits.

Random Forests

Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the splitting, and are also applicable to regression.
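Breiman's internal estimates come from the out-of-bag samples: each tree is grown on a bootstrap sample, and the samples it never sees serve as a built-in test set. A minimal sketch of computing the out-of-bag set for one tree (function name is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def oob_indices(n, rng):
    """Out-of-bag indices for one bootstrap sample: the training points a
    given tree never sees, which drive the internal estimates of error,
    strength, and correlation (a sketch, not the full machinery)."""
    boot = rng.integers(0, n, size=n)            # bootstrap draw with replacement
    return np.setdiff1d(np.arange(n), np.unique(boot))

n = 10_000
frac = len(oob_indices(n, rng)) / n
print(frac)   # about (1 - 1/n)^n ~ 1/e ~ 0.368 of samples are out of bag
```

Averaging each sample's error over only the trees for which it is out of bag gives an unbiased error estimate without a separate validation set.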

Structured class-labels in random forests for semantic image labelling

This work provides a way to incorporate structural information into the popular random forest framework for performing low-level, unary classification, and offers two possibilities for integrating the structured output predictions into concise, semantic labellings.