Corpus ID: 220496503

Towards Robust Classification with Deep Generative Forests

@article{Correia2020TowardsRC,
  title={Towards Robust Classification with Deep Generative Forests},
  author={Alvaro H. C. Correia and Robert Peharz and Cassio de Campos},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.05721}
}
Decision Trees and Random Forests are among the most widely used machine learning models, and often achieve state-of-the-art performance on tabular, domain-agnostic datasets. Nonetheless, being primarily discriminative models, they lack principled methods to manipulate the uncertainty of predictions. In this paper, we exploit Generative Forests (GeFs), a recent class of deep probabilistic models that addresses these issues by extending Random Forests to generative models representing the full…
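To make the idea concrete, here is a minimal sketch (not the construction used in the paper, which builds on Probabilistic Circuits): attach a simple fully factorised Gaussian density to each leaf of a standard scikit-learn random forest so that, alongside class predictions, the ensemble also yields an approximate input density that can flag atypical inputs before the discriminative prediction is trusted. The helper `log_density` and the smoothing constant are illustrative.

```python
# Illustrative sketch only: a naive per-leaf density attached to a standard
# random forest, to mimic the spirit of generative forests (not the paper's
# exact Probabilistic-Circuit construction).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# For every tree, estimate a fully factorised Gaussian over the training
# points that fall into each leaf (per-feature mean and variance).
leaf_stats = []                                   # per tree: leaf_id -> (mean, var, weight)
train_leaves = rf.apply(X_tr)                     # shape (n_samples, n_trees)
for t in range(train_leaves.shape[1]):
    stats = {}
    for leaf in np.unique(train_leaves[:, t]):
        pts = X_tr[train_leaves[:, t] == leaf]
        stats[leaf] = (pts.mean(axis=0),
                       pts.var(axis=0) + 1e-3,    # variance smoothing
                       len(pts) / len(X_tr))
    leaf_stats.append(stats)

def log_density(x):
    """Approximate log p(x): average the per-tree leaf log-densities."""
    leaves = rf.apply(x.reshape(1, -1))[0]
    logps = []
    for t, leaf in enumerate(leaves):
        mean, var, w = leaf_stats[t][leaf]
        logp = -0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var))
        logps.append(logp + np.log(w))
    return np.mean(logps)

# Low density suggests the input is unlike the training data, so the
# class prediction should be treated with caution.
scores = np.array([log_density(x) for x in X_te])
print("least typical test points:", np.argsort(scores)[:5])
```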


References


Joints in Random Forests

It is demonstrated that DTs and RFs can naturally be interpreted as generative models by drawing a connection to Probabilistic Circuits, a prominent class of tractable probabilistic models, leading to Generative Decision Trees (GeDTs) and Generative Forests (GeFs), a family of novel hybrid generative-discriminative models.

Aleatoric and Epistemic Uncertainty with Random Forests

It is shown how two general approaches for measuring the learner's aleatoric and epistemic uncertainty in a prediction can be instantiated with decision trees and random forests as learning algorithms in a classification setting.
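One common entropy-based instantiation of this decomposition (a sketch under that assumption, not necessarily the exact measures studied in the paper) treats the entropy of the averaged tree prediction as total uncertainty, the average per-tree entropy as aleatoric uncertainty, and their difference as epistemic uncertainty:

```python
# Entropy-based uncertainty decomposition over the trees of a random forest;
# a sketch, not necessarily the paper's exact measures.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def uncertainty_decomposition(forest, X):
    # p[t, i, c]: class probabilities of tree t for sample i
    p = np.stack([tree.predict_proba(X) for tree in forest.estimators_])
    mean_p = p.mean(axis=0)
    eps = 1e-12
    total = -np.sum(mean_p * np.log(mean_p + eps), axis=1)        # H[E p]
    aleatoric = -np.sum(p * np.log(p + eps), axis=2).mean(axis=0)  # E H[p]
    epistemic = total - aleatoric                                  # mutual information
    return total, aleatoric, epistemic

X, y = make_classification(n_samples=500, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
total, alea, epis = uncertainty_decomposition(rf, X[:5])
print(total, alea, epis)
```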

On Calibration of Modern Neural Networks

It is discovered that modern neural networks, unlike those from a decade ago, are poorly calibrated, and on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
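Temperature scaling itself is simple enough to sketch: a single scalar T > 0 is fitted on held-out validation logits by minimising the negative log-likelihood of softmax(logits / T), and then applied to test logits. The code below is a minimal illustration, not the authors' implementation; `val_logits` and `val_labels` are placeholder names.

```python
# Temperature scaling sketch: fit one scalar T > 0 on validation logits by
# minimising the negative log-likelihood of softmax(logits / T).
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(T, logits, labels):
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels):
    res = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded",
                          args=(val_logits, val_labels))
    return res.x

# Calibrated test probabilities: softmax(test_logits / T). The argmax, and
# hence accuracy, is unchanged because dividing logits by T is monotone.
```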

Auto-Encoding Variational Bayes

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
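A minimal PyTorch sketch of the core mechanism, the reparameterisation trick plus the ELBO with an analytic KL term, is given below; the architecture and dimensions are illustrative, not those of the paper.

```python
# Minimal VAE sketch (PyTorch): reparameterisation trick + ELBO for binary data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=20, h_dim=400):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation: z = mu + sigma * eps keeps sampling differentiable.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        x_logits = self.dec(z)
        # Negative ELBO = reconstruction NLL + analytic KL(q(z|x) || N(0, I)).
        rec = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return (rec + kl) / x.size(0)

# Usage: loss = VAE()(batch); loss.backward()  -- batch in [0, 1], shape (B, 784).
```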

Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks

This paper shows that GP hybrid deep networks, GPDNNs (GPs on top of DNNs and trained end-to-end), inherit the nice properties of both GPs and DNNs and are much more robust to adversarial examples.

Understanding Random Forests: From Theory to Practice

The goal of this thesis is to provide an in-depth analysis of random forests, consistently calling into question each and every part of the algorithm, in order to shed new light on its learning capabilities, inner workings and interpretability.

A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks

A simple baseline that utilizes probabilities from softmax distributions is presented, showing the effectiveness of this baseline across computer vision, natural language processing, and automatic speech recognition tasks, and it is shown that the baseline can sometimes be surpassed.
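The baseline itself reduces to thresholding the maximum softmax probability; a short sketch follows (the threshold value is illustrative):

```python
# Maximum-softmax-probability baseline: a low max probability flags a
# prediction as likely misclassified or out-of-distribution.
import numpy as np

def msp_score(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)              # higher = more "in-distribution"

def flag_suspicious(logits, threshold=0.9):
    return msp_score(logits) < threshold
```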

Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks

The proposed ODIN method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions between in- and out-of-distribution images, allowing for more effective detection; it consistently outperforms the baseline approach by a large margin.
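A sketch of the two ingredients the summary names, temperature-scaled softmax scores plus a small input perturbation in the direction that increases that score, might look as follows in PyTorch; the hyperparameter values are illustrative and this is not the authors' released code.

```python
# ODIN-style sketch (PyTorch): temperature scaling + input perturbation.
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    # model is assumed to be a classifier in eval mode returning logits.
    x = x.clone().requires_grad_(True)
    logits = model(x) / temperature
    # Perturb the input so as to increase the max temperature-scaled softmax score.
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    x_pert = x - epsilon * x.grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / temperature, dim=1)
    return probs.max(dim=1).values   # low score suggests out-of-distribution
```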

Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms

Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits.
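Because the image size, format, and splits match MNIST, swapping datasets is typically a one-line change; for example, with torchvision (paths illustrative):

```python
# Fashion-MNIST as a drop-in replacement for MNIST: same 28x28 greyscale
# images and 60k/10k train/test split, so only the dataset class changes.
from torchvision import datasets, transforms

tfm = transforms.ToTensor()
# mnist_train  = datasets.MNIST("data/", train=True, download=True, transform=tfm)
fashion_train = datasets.FashionMNIST("data/", train=True, download=True, transform=tfm)
fashion_test = datasets.FashionMNIST("data/", train=False, download=True, transform=tfm)
```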

Towards Scalable and Robust Sum-Product Networks

This work proposes the addition of caches to the SPN nodes and shows how this memoisation technique reduces inference times in a range of experiments, and introduces class-selective SPNs, an architecture that is suited for classification tasks and enables efficient robustness computation in Credal SPNs.
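The caching idea amounts to memoising node values during inference so that sub-circuits shared across the DAG are evaluated only once per query; a toy sketch with illustrative node classes (not the paper's implementation):

```python
# Toy sketch of memoised (cached) inference in a sum-product network:
# each node's value is computed once per query and reused wherever the DAG
# shares sub-circuits. Node classes are illustrative only.
import math

class Leaf:
    def __init__(self, var, dist):        # dist: observed value -> probability
        self.var, self.dist = var, dist
    def value(self, x, cache):
        return self.dist[x[self.var]]

class Product:
    def __init__(self, children):
        self.children = children
    def value(self, x, cache):
        if id(self) not in cache:
            cache[id(self)] = math.prod(c.value(x, cache) for c in self.children)
        return cache[id(self)]

class Sum:
    def __init__(self, weights, children):
        self.weights, self.children = weights, children
    def value(self, x, cache):
        if id(self) not in cache:
            cache[id(self)] = sum(w * c.value(x, cache)
                                  for w, c in zip(self.weights, self.children))
        return cache[id(self)]

# Shared sub-circuit: both sum nodes reuse the same product node; with the
# cache it is evaluated only once per query.
pa = Product([Leaf("A", {0: 0.3, 1: 0.7}), Leaf("B", {0: 0.6, 1: 0.4})])
root = Sum([0.5, 0.5], [Sum([1.0], [pa]), Sum([1.0], [pa])])
print(root.value({"A": 1, "B": 0}, cache={}))
```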