Corpus ID: 238531449

FOCUS: Familiar Objects in Common and Uncommon Settings

Priyatham Kattakinda, Soheil Feizi
Standard training datasets for deep learning often contain objects in common settings (e.g., “a horse on grass” or “a ship in water”) since they are usually collected by randomly scraping the web. Uncommon and rare settings (e.g., “a plane on water”, “a car in snowy weather”) are thus severely under-represented in the training data. This can lead to an undesirable bias in model predictions towards common settings and create a false sense of accuracy. In this paper, we introduce FOCUS (Familiar… 
Model-Based Domain Generalization
This paper proposes a novel approach for the domain generalization problem called Model-Based Domain Generalization, which uses unlabeled data from the training domains to learn multi-modal domain transformation models that map data from one training domain to any other domain.
Natural Adversarial Examples
This work introduces two challenging datasets that reliably cause machine learning model performance to substantially degrade and curates an adversarial out-of-distribution detection dataset called IMAGENET-O, the first out-of-distribution detection dataset created for ImageNet models.
Recognition in Terra Incognita
It is desirable for detection and classification algorithms to generalize to unfamiliar environments, but suitable benchmarks for quantitatively studying this phenomenon are not yet available.
Deep Residual Learning for Image Recognition
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
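The core idea of residual learning is that each block predicts a residual correction F(x) that is added back to its input through an identity shortcut. A minimal NumPy sketch of that idea (not the paper's implementation; the two-layer residual function and zero weights are illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Identity-shortcut residual block: relu(F(x) + x),
    where F is two linear layers with a ReLU in between."""
    f = relu(x @ w1) @ w2  # residual function F(x)
    return relu(f + x)     # skip connection adds the input back

# With zero weights, F(x) = 0 and the block reduces to relu(x),
# illustrating why deeper stacks are easy to optimize: the
# identity mapping is trivially representable.
x = np.array([[1.0, -2.0, 3.0]])
w = np.zeros((3, 3))
print(residual_block(x, w, w))
```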
Do ImageNet Classifiers Generalize to ImageNet?
The results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly "harder" images than those found in the original test sets.
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
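The normalization step itself is simple: standardize each feature over the batch, then apply a learned scale and shift. A minimal NumPy sketch of the training-time forward pass (inference-time running statistics are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch dimension, then scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(5.0, 10.0, size=(32, 4))   # poorly scaled activations
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0), y.std(axis=0))      # per-feature mean ~0, std ~1
```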
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
This work proposes a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention based models learn to localize discriminative regions of input image.
Adam: A Method for Stochastic Optimization
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
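The "adaptive estimates of lower-order moments" are exponential moving averages of the gradient and its square, with a bias correction for the early steps. A minimal NumPy sketch of one update (the toy quadratic objective is illustrative):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: moving averages of the gradient (m) and squared
    gradient (v), bias-corrected, then a scaled step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta^2, whose gradient is 2 * theta:
theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)  # near the minimum at 0
```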
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
This work proposes a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit and derives a robust initialization method that particularly considers the rectifier nonlinearities.
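PReLU generalizes ReLU by giving the negative side a learned slope instead of clamping it to zero. A minimal NumPy sketch of the activation (in the paper the slope `a` is a trainable per-channel parameter; here it is a fixed scalar for illustration):

```python
import numpy as np

def prelu(x, a):
    """Parametric ReLU: identity for positive inputs, slope `a` for negatives.
    a = 0 recovers ReLU; a = 1 recovers the identity."""
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(prelu(x, 0.25))  # [-0.5, -0.125, 0., 1.5]
```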
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
A new scaling method is proposed that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient, and its effectiveness is demonstrated by scaling up MobileNets and ResNet.
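The compound coefficient ties depth, width, and resolution to a single knob phi. A minimal sketch of that arithmetic, assuming the coefficients reported in the EfficientNet paper (alpha = 1.2, beta = 1.1, gamma = 1.15, chosen under the constraint alpha * beta^2 * gamma^2 ≈ 2 so that FLOPs roughly double per unit of phi):

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """Compound scaling: one coefficient phi scales depth, width, and
    resolution jointly via d = alpha^phi, w = beta^phi, r = gamma^phi."""
    return alpha ** phi, beta ** phi, gamma ** phi

d, w, r = compound_scale(1)
print(d * w ** 2 * r ** 2)  # close to 2: FLOPs roughly double at phi = 1
```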
Noise or Signal: The Role of Image Backgrounds in Object Recognition
This work creates a toolkit for disentangling foreground and background signal on ImageNet images, and finds that models can achieve non-trivial accuracy by relying on the background alone, and more accurate models tend to depend on backgrounds less.