Corpus ID: 247749102

Deep discriminative to kernel generative modeling

Jayanta Dey, Will LeVine, Ashwin De Silva, Ali Geisa, Jong M. Shin, Haoyin Xu, Tiffany Chu, Leyla Isik, Joshua T. Vogelstein
The fight between discriminative and generative modeling runs deep, in the study of both artificial and natural intelligence. In our view, the two camps have complementary value, so we sought to combine them synergistically. Here, we propose a methodology to convert deep discriminative networks to kernel generative networks. We leveraged the fact that deep models, including both random forests and deep networks, learn internal representations which are unions of polytopes with affine activation functions to…
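The polytope idea the abstract leans on can be illustrated in a few lines. In a ReLU network, every input induces a binary activation pattern over the hidden units, and all inputs sharing a pattern lie in the same affine polytope; bucketing training points by pattern is the natural first step before placing a kernel (e.g. a Gaussian) over each polytope. The function names and the toy weights below are illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict

def activation_pattern(x, W, b):
    """Sign pattern of the hidden ReLU units; points sharing a pattern
    lie in the same affine polytope of the network."""
    return tuple(int(sum(wi * xi for wi, xi in zip(w, x)) + bi > 0)
                 for w, bi in zip(W, b))

def group_by_polytope(train_X, W, b):
    """Bucket training points by the polytope they activate; a kernel
    centered on each bucket could then replace the affine map there."""
    groups = defaultdict(list)
    for x in train_X:
        groups[activation_pattern(x, W, b)].append(x)
    return dict(groups)
```

With two axis-aligned hidden units (`W = [[1, 0], [0, 1]]`, zero biases), the points `[1, 1]` and `[2, 2]` share a polytope while `[-1, 1]` falls in another, so the grouping yields two buckets.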

Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem

A new robust optimization technique similar to adversarial training is proposed that enforces low-confidence predictions far away from the training data while maintaining high-confidence predictions and test error on the original classification task, compared to standard training.

Towards a theory of out-of-distribution learning

This work introduces learning efficiency to quantify how much a learner is able to leverage data for a given problem, regardless of whether it is an in- or out-of-distribution problem, and proves relationships between various generalized notions of learnability.

When are Deep Networks really better than Decision Forests at small sample sizes, and how?

Conceptually, it is illustrated that deep networks and decision forests can be profitably viewed as “partition and vote” schemes, while deep nets performed better on structured data at larger sample sizes, suggesting that further gains in both scenarios may be realized by further combining aspects of forests and networks.

Provably Robust Detection of Out-of-distribution Data (almost) for free

This paper proposes a novel method that, from first principles, combines a certifiable OOD detector with a standard classifier into an OOD-aware classifier that provably avoids the asymptotic overconfidence problem of standard neural networks.

Unsupervised out-of-distribution detection using kernel density estimation

An unsupervised OOD detection method that works with both classification and non-classification networks by using kernel density estimation (KDE) is proposed; it achieves results competitive with the state of the art on classification networks and improves on segmentation networks.
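The KDE recipe summarized above reduces to two steps: fit a Gaussian kernel density on in-distribution feature vectors, then flag inputs whose log-density falls below a threshold. A minimal sketch, assuming hypothetical function names and an illustrative bandwidth and threshold (the paper's exact features and tuning are not reproduced here):

```python
import math

def kde_log_density(x, train_feats, bandwidth=0.5):
    """Log of a Gaussian KDE evaluated at feature vector x."""
    d = len(x)
    log_norm = -0.5 * d * math.log(2 * math.pi * bandwidth ** 2)
    # log of each kernel centered at a training feature
    logs = []
    for f in train_feats:
        sq = sum((a - b) ** 2 for a, b in zip(x, f))
        logs.append(log_norm - sq / (2 * bandwidth ** 2))
    # log-sum-exp for numerical stability, averaged over kernels
    m = max(logs)
    return m + math.log(sum(math.exp(l - m) for l in logs)) - math.log(len(train_feats))

def is_ood(x, train_feats, threshold=-5.0, bandwidth=0.5):
    """Flag x as out-of-distribution when its KDE log-density is low."""
    return kde_log_density(x, train_feats, bandwidth) < threshold
```

A point near the training features keeps a high log-density and passes, while a distant point is flagged; in practice the threshold is chosen on held-out in-distribution data.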

Representation Ensembling for Synergistic Lifelong Learning with Quasilinear Complexity

This work proposes two algorithms, representation ensembles of (1) trees and (2) networks, which demonstrate both forward and backward transfer in a variety of simulated and real data scenarios, including tabular, image, spoken, and adversarial tasks.

Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks

It is shown that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian," which validates the use of last-layer Bayesian approximations and motivates a range of fidelity-cost trade-offs.
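The "bit Bayesian" idea amounts to treating only the last layer's weights as Gaussian and marginalizing them approximately, which shrinks predictions toward 0.5 where the predictive variance is large. The sketch below uses the classic probit (MacKay) approximation for a binary output as an illustration; the function names, diagonal covariance, and toy numbers are assumptions, not the paper's exact construction:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bayesian_last_layer_prob(x, w_mean, w_cov_diag):
    """Approximate predictive probability with a Gaussian posterior over
    the last-layer weights, via the probit approximation:
    p ~= sigmoid(mu / sqrt(1 + pi * var / 8))."""
    mu = sum(wi * xi for wi, xi in zip(w_mean, x))
    var = sum(vi * xi * xi for vi, xi in zip(w_cov_diag, x))
    kappa = 1.0 / math.sqrt(1.0 + math.pi * var / 8.0)
    return sigmoid(kappa * mu)
```

Because the variance term grows with the input's magnitude, the Bayesian predictive is strictly less confident than the point-estimate prediction `sigmoid(mu)`, which is the mechanism behind the overconfidence fix.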

Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples

This study uses 11 state-of-the-art neural network models trained on 3 image datasets of varying complexity to demonstrate the success of open-world evasion attacks, where adversarial examples are generated from out-of-distribution inputs (OOD adversarial examples).

Deep Anomaly Detection with Outlier Exposure

In extensive experiments on natural language processing and small- and large-scale vision tasks, it is found that Outlier Exposure significantly improves detection performance, and that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; OE is used to mitigate this issue.
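The core of Outlier Exposure is a simple training objective: the usual cross-entropy on in-distribution examples plus a term pushing the model's predictive distribution on auxiliary outliers toward uniform. A minimal per-example sketch, assuming logit inputs and an illustrative weight `lam` (function names are hypothetical):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def outlier_exposure_loss(in_logits, in_label, out_logits, lam=0.5):
    """Cross-entropy on the in-distribution example plus a term pushing
    the outlier's predictive distribution toward uniform."""
    p_in = softmax(in_logits)
    ce = -math.log(p_in[in_label])
    p_out = softmax(out_logits)
    k = len(out_logits)
    # cross-entropy between the uniform distribution and p_out
    uniform_ce = -sum((1.0 / k) * math.log(p) for p in p_out)
    return ce + lam * uniform_ce
```

A model that is confident on an outlier incurs a higher loss than one that is uniform on it, which is exactly the behavior OE rewards.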

Do Deep Generative Models Know What They Don't Know?

The density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses from those of house numbers, and such behavior persists even when the flows are restricted to constant-volume transformations.