Publications
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
TLDR
This paper proposes a simple yet effective method for detecting any abnormal samples, applicable to any pre-trained softmax neural classifier: it obtains class-conditional Gaussian distributions over the low- and high-level features of the deep model under Gaussian discriminant analysis.
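Below is a minimal sketch of the kind of score this method computes, assuming penultimate-layer features have already been extracted; function and variable names are illustrative, and the paper's multi-layer score ensembling and input pre-processing are omitted.

```python
# Sketch: class-conditional Gaussians with a shared ("tied") covariance,
# and a confidence score given by the negative Mahalanobis distance to the
# closest class mean. Feature extraction is assumed done elsewhere.
import numpy as np

def fit_gaussians(feats, labels, num_classes):
    means = np.stack([feats[labels == c].mean(axis=0) for c in range(num_classes)])
    centered = feats - means[labels]
    precision = np.linalg.pinv(centered.T @ centered / len(feats))
    return means, precision

def mahalanobis_score(x, means, precision):
    diffs = means - x                                   # (num_classes, dim)
    d2 = np.einsum('cd,de,ce->c', diffs, precision, diffs)
    return -d2.min()                                    # higher = more in-distribution

rng = np.random.default_rng(0)
feats, labels = rng.normal(size=(1000, 16)), rng.integers(0, 10, size=1000)
means, precision = fit_gaussians(feats, labels, 10)
print(mahalanobis_score(feats[0], means, precision))
```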
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
TLDR
A novel training method for classifiers is proposed so that out-of-distribution detection algorithms can work better, and its effectiveness is demonstrated with deep convolutional neural networks on various popular image datasets.
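A rough sketch of the confidence loss this training method centers on, under the assumption that out-of-distribution inputs `x_out` are available (the paper generates them jointly with a GAN, omitted here); `confidence_loss` and `beta` are illustrative names.

```python
# Sketch: cross-entropy on in-distribution data plus a KL term that pushes
# the classifier's predictions on OOD inputs toward the uniform distribution.
import torch
import torch.nn.functional as F

def confidence_loss(model, x_in, y_in, x_out, beta=1.0):
    ce = F.cross_entropy(model(x_in), y_in)
    log_p_out = F.log_softmax(model(x_out), dim=1)
    kl_to_uniform = -log_p_out.mean(dim=1).mean()   # KL(U || p) up to a constant
    return ce + beta * kl_to_uniform

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32, 10))
loss = confidence_loss(model, torch.randn(8, 32), torch.randint(0, 10, (8,)),
                       torch.randn(8, 32))
```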
CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances
TLDR
A simple yet effective method named contrasting shifted instances (CSI) is proposed, inspired by the recent success of contrastive learning of visual representations: in addition to contrasting a given sample with other instances, as in conventional contrastive learning methods, CSI contrasts the sample with distributionally-shifted augmentations of itself.
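The sketch below illustrates the core idea under simplifying assumptions: 90-degree rotations serve as the distribution-shifting transformations, a noise perturbation stands in for the SimCLR augmentation pipeline, and the paper's auxiliary shift-prediction head is omitted.

```python
# Sketch: each rotation of an image is treated as a *distinct* instance
# (an extra negative) in a SimCLR-style contrastive loss, rather than as a
# positive view of the original image.
import torch
import torch.nn.functional as F

def nt_xent(z, temperature=0.5):
    """Contrastive loss where z[2k] and z[2k+1] form the positive pair."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float('-inf'))           # exclude self-similarity
    partner = torch.arange(len(z)) ^ 1          # 0<->1, 2<->3, ...
    return F.cross_entropy(sim, partner)

def csi_batch(x, aug):
    """Two augmented views per rotated instance, placed at adjacent indices."""
    pairs = []
    for k in range(4):                          # 4 shifted instances per image
        r = torch.rot90(x, k, dims=(2, 3))
        pairs.append(torch.stack([aug(r), aug(r)], dim=1).flatten(0, 1))
    return torch.cat(pairs)

aug = lambda t: t + 0.05 * torch.randn_like(t)  # stand-in for real augmentations
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
loss = nt_xent(encoder(csi_batch(torch.randn(8, 3, 32, 32), aug)))
```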
Network adiabatic theorem: an efficient randomized protocol for contention resolution
TLDR
This paper designs an efficient algorithm for a network of queues where contention is modeled through independent-set constraints over the network graph, building on a Metropolis-Hastings sampling mechanism in which each node's 'weight' is an appropriate function of its queue size.
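A toy rendering of such dynamics, with an illustrative ring conflict graph, Bernoulli arrivals, and f(Q) = log(1 + Q) as the weight function; the paper's actual weight choice and its adiabatic analysis are more delicate than this sketch suggests.

```python
# Sketch: single-site (Glauber-style) Metropolis dynamics on independent
# sets of a conflict graph, where each node's transmission fugacity grows
# with its own queue size.
import math
import random

def step(graph, queues, active):
    v = random.choice(list(graph))
    if any(u in active for u in graph[v]):
        active.discard(v)                       # blocked by an active neighbor
    else:
        fugacity = math.exp(math.log(1 + queues[v]))   # exp(f(Q)), f = log(1+Q)
        if random.random() < fugacity / (1 + fugacity):
            active.add(v)
        else:
            active.discard(v)
    return active

graph = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}   # ring of 5 queues
queues, active = [3, 0, 5, 1, 2], set()
for _ in range(100):
    for v in graph:
        queues[v] += random.random() < 0.2      # Bernoulli arrivals
    active = step(graph, queues, active)
    for v in active:
        queues[v] = max(queues[v] - 1, 0)       # serve one packet per slot
```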
Learning from Failure: Training Debiased Classifier from Biased Classifier
TLDR
This work intentionally trains the first network to be biased by repeatedly amplifying its 'prejudice', and debiases the training of the second network by focusing on samples that go against the prejudice of the biased network.
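A condensed sketch of the two-network objective, assuming the generalized cross-entropy (GCE) loss for bias amplification and the relative-difficulty weighting described in the paper; the models and the hyperparameter `q` are placeholders.

```python
# Sketch: the biased network f_b is trained with GCE, which over-weights
# easy (bias-aligned) samples; the debiased network f_d is trained with
# per-sample weights that emphasize samples the biased network fails on.
import torch
import torch.nn.functional as F

def gce_loss(logits, y, q=0.7):
    """Generalized cross-entropy: (1 - p_y^q) / q."""
    p_y = F.softmax(logits, dim=1).gather(1, y[:, None]).squeeze(1)
    return ((1 - p_y.clamp_min(1e-8) ** q) / q).mean()

def relative_difficulty(logits_b, logits_d, y):
    """W(x) = CE_B / (CE_B + CE_D): high when only the biased net fails."""
    ce_b = F.cross_entropy(logits_b, y, reduction='none')
    ce_d = F.cross_entropy(logits_d, y, reduction='none')
    return ce_b / (ce_b + ce_d + 1e-8)

def lff_step(f_b, f_d, x, y):
    logits_b, logits_d = f_b(x), f_d(x)
    w = relative_difficulty(logits_b.detach(), logits_d.detach(), y)
    loss_d = (w * F.cross_entropy(logits_d, y, reduction='none')).mean()
    return gce_loss(logits_b, y) + loss_d
```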
Regularizing Class-Wise Predictions via Self-Knowledge Distillation
TLDR
A new regularization method is proposed that penalizes inconsistent predictive distributions between similar samples during training, regularizing the dark knowledge of a single network by forcing it to produce more meaningful and consistent predictions in a class-wise manner.
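A minimal sketch of such a class-wise consistency term, assuming pairs of distinct samples from the same class are available in each batch; the temperature `T` and weight `lam` are illustrative values rather than the paper's tuned settings.

```python
# Sketch: the network's temperature-softened prediction on one sample is
# matched (via KL divergence) to its own detached prediction on a different
# sample of the same class, on top of the usual cross-entropy loss.
import torch
import torch.nn.functional as F

def class_wise_self_distill(model, x, x_same_class, y, T=4.0, lam=1.0):
    logits = model(x)
    with torch.no_grad():                        # target "dark knowledge", no gradient
        target = F.softmax(model(x_same_class) / T, dim=1)
    kd = F.kl_div(F.log_softmax(logits / T, dim=1), target,
                  reduction='batchmean') * (T ** 2)
    return F.cross_entropy(logits, y) + lam * kd
```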
Freeze Discriminator: A Simple Baseline for Fine-tuning GANs
TLDR
It is shown that simple fine-tuning of GANs with frozen lower layers of the discriminator performs surprisingly well, and that this simple baseline, FreezeD, significantly outperforms previous techniques in both unconditional and conditional GANs.
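The baseline reduces to a few lines in practice; below is a sketch with a stand-in discriminator, where the number of frozen layers is an illustrative choice, not a prescription from the paper.

```python
# Sketch: during GAN fine-tuning, disable gradients for the lower
# (feature-extractor) layers of the discriminator and update only the rest.
import torch.nn as nn

def freeze_lower_layers(discriminator: nn.Sequential, num_frozen: int):
    for layer in list(discriminator)[:num_frozen]:
        for p in layer.parameters():
            p.requires_grad = False

disc = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(128 * 16 * 16, 1))   # assumes 64x64 inputs
freeze_lower_layers(disc, 4)                     # freeze the two lower conv blocks
trainable = [p for p in disc.parameters() if p.requires_grad]
```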
Neural Adaptive Content-aware Internet Video Delivery
TLDR
A new video delivery framework is presented that utilizes client computation and recent advances in deep neural networks (DNNs) to reduce the dependency on bandwidth for delivering high-quality video and to enhance video quality independent of the available bandwidth.
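A toy sketch of the client-side enhancement idea: frames arrive at low resolution and a small super-resolution network upscales them on the device, decoupling perceived quality from bandwidth. The model below is a generic SR block, not the paper's per-video DNN.

```python
# Sketch: a tiny super-resolution network applied to decoded low-res frames.
import torch
import torch.nn as nn

class TinySR(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))              # channels -> spatial upscaling

    def forward(self, low_res_frame):
        return self.body(low_res_frame)

frame = torch.rand(1, 3, 90, 160)                # one 160x90 decoded frame
enhanced = TinySR()(frame)                       # -> (1, 3, 360, 640)
```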
Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning
TLDR
A simple technique is proposed to improve the generalization ability of deep RL agents by introducing a randomized (convolutional) neural network that randomly perturbs input observations, enabling trained agents to adapt to new domains by learning robust features that are invariant across varied and randomized environments.
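A short sketch of the perturbation itself, assuming image observations; the channel-preserving convolution and the random re-initialization per call are the recognizable ingredients, while shapes and the surrounding training loop are placeholders.

```python
# Sketch: a randomly re-initialized conv layer perturbs each observation
# before it reaches the policy network; the layer itself is never trained.
import torch
import torch.nn as nn

rand_conv = nn.Conv2d(3, 3, kernel_size=3, padding=1, bias=False)

def randomize(obs):
    nn.init.xavier_normal_(rand_conv.weight)     # fresh random weights each call
    with torch.no_grad():
        return rand_conv(obs)

obs = torch.rand(8, 3, 84, 84)                   # a batch of RL observations
perturbed = randomize(obs)                       # input to the policy network
```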
Robust Inference via Generative Classifiers for Handling Noisy Labels
TLDR
This work proposes a novel inference method, termed Robust Generative classifier (RoG), applicable to any discriminative neural classifier pre-trained on noisy datasets, and proves that RoG generalizes better than baselines under noisy labels.
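A brief sketch of inducing such a generative classifier on fixed, pre-trained features, using scikit-learn's minimum covariance determinant (MCD) estimator so that mislabeled outliers have less influence; treating the per-class covariances independently is a simplification of the paper's construction.

```python
# Sketch: robust class-conditional Gaussians on top of fixed features;
# prediction picks the class whose Gaussian is closest in Mahalanobis
# distance.
import numpy as np
from sklearn.covariance import MinCovDet

def fit_generative_classifier(feats, labels, num_classes):
    means, precisions = [], []
    for c in range(num_classes):
        mcd = MinCovDet().fit(feats[labels == c])    # robust mean/covariance
        means.append(mcd.location_)
        precisions.append(np.linalg.pinv(mcd.covariance_))
    return np.stack(means), np.stack(precisions)

def predict(x, means, precisions):
    d2 = [(x - m) @ P @ (x - m) for m, P in zip(means, precisions)]
    return int(np.argmin(d2))                        # closest class Gaussian
```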