• Corpus ID: 240353692

Three approaches to facilitate DNN generalization to objects in out-of-distribution orientations and illuminations: late-stopping, tuning batch normalization and invariance loss

  • A. Sakai, Taro Sunagawa, Spandan Madan, Kanata Suzuki, Takashi Katoh, Hiromichi Kobashi, Hanspeter Pfister, Pawan Sinha, Xavier Boix, Tomotake Sasaki
The training data distribution is often biased towards objects in certain orientations and illumination conditions. While humans have a remarkable capability of recognizing objects in out-of-distribution (OoD) orientations and illuminations, Deep Neural Networks (DNNs) severely suffer in this case, even when large amounts of training examples are available. In this paper, we investigate three different approaches to improve DNNs in recognizing objects in OoD orientations and illuminations… 
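The third approach named in the title, an invariance loss, can be illustrated with a small sketch. The snippet above does not spell out the exact formulation, so the function below shows one common form, the squared L2 distance between features of the same object under two orientation/illumination conditions; the names `feats_a` and `feats_b` are illustrative, not from the paper.

```python
import numpy as np

def invariance_loss(feats_a, feats_b):
    """Mean squared distance between feature vectors of the same objects
    rendered under two different orientation/illumination conditions.
    Minimizing it pushes the network toward condition-invariant features."""
    feats_a = np.asarray(feats_a, dtype=float)
    feats_b = np.asarray(feats_b, dtype=float)
    return float(np.mean(np.sum((feats_a - feats_b) ** 2, axis=1)))

# identical features for both conditions -> zero penalty
same = np.ones((4, 8))
print(invariance_loss(same, same))  # → 0.0
```

In practice a term like this would be added to the classification loss with a weighting coefficient.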
What makes domain generalization hard?
A benchmark with 15 photo-realistic domains with the same geometry, scene layout, and camera parameters as the popular 3D ScanNet dataset, but with controlled domain shifts in lighting, materials, and viewpoints; the approach proposed alongside it outperforms existing domain generalization methods by over an 18% margin.


iLab-20M: A Large-Scale Controlled Object Dataset to Investigate Deep Learning
  • A. Borji, S. Izadi, L. Itti
  • Computer Science
    2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2016
This work introduces a large-scale controlled dataset, which is freely and publicly available, and uses it to answer several fundamental questions regarding the selectivity and invariance properties of convolutional neural networks.
When and how CNNs generalize to out-of-distribution category-viewpoint combinations
It is shown that increasing the number of in-distribution combinations substantially improves generalization to OOD combinations, even with the same amount of training data, and that such OOD generalization is facilitated by the neural mechanism of specialization.
The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
It is found that using larger models and artificial data augmentations can improve robustness on real-world distribution shifts, contrary to claims in prior work.
Fishr: Invariant Gradient Variances for Out-of-distribution Generalization
This paper introduces a new regularization — named Fishr — that enforces domain invariance in the space of the gradients of the loss that improves the state of the art on the DomainBed benchmark and performs consistently better than Empirical Risk Minimization.
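The gradient-space invariance Fishr enforces can be sketched in a few lines. This is a simplified numpy illustration of the idea, matching per-domain variances of per-sample gradients to their mean, not the paper's exact estimator or weighting.

```python
import numpy as np

def fishr_penalty(domain_grads):
    """Fishr-style regularizer (sketch): penalize mismatch between the
    per-domain variances of per-sample loss gradients.
    `domain_grads` is a list of (n_samples, n_params) arrays, one per domain."""
    variances = [np.var(g, axis=0) for g in domain_grads]  # gradient variance per domain
    mean_var = np.mean(variances, axis=0)
    return float(sum(np.sum((v - mean_var) ** 2) for v in variances) / len(variances))

# identical gradient distributions across domains -> zero penalty
g = np.array([[0.0, 1.0], [2.0, 3.0]])
print(fishr_penalty([g, g.copy()]))  # → 0.0
```

Driving this penalty to zero aligns the second moments of the gradients across training domains, which is the invariance the paper argues transfers to unseen domains.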
RandAugment: Practical data augmentation with no separate search
RandAugment can be used uniformly across different tasks and datasets and works out of the box, matching or surpassing all previous learned augmentation approaches on CIFAR-10, CIFAR-100, SVHN, and ImageNet.
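RandAugment's "no separate search" claim comes from collapsing augmentation policy search to two global hyperparameters: the number of operations applied and a shared magnitude. A minimal sketch, with toy array ops standing in for the real transform pool (the op names here are hypothetical):

```python
import numpy as np

# Toy image ops standing in for RandAugment's transform pool (names hypothetical).
OPS = {
    "flip_lr":  lambda img, m: img[:, ::-1],
    "shift":    lambda img, m: np.roll(img, m, axis=1),
    "brighten": lambda img, m: np.clip(img + m, 0, 255),
}

def rand_augment(img, num_ops=2, magnitude=3, rng=None):
    """RandAugment's core recipe: apply `num_ops` transforms drawn uniformly
    at random, all at the same global `magnitude` -- a 2-parameter space."""
    rng = rng or np.random.default_rng(0)
    for name in rng.choice(list(OPS), size=num_ops):
        img = OPS[name](img, magnitude)
    return img

img = np.arange(12, dtype=float).reshape(3, 4)
out = rand_augment(img, num_ops=2, magnitude=3)
print(out.shape)  # shape is preserved: (3, 4)
```

With only two hyperparameters, a small grid search replaces the expensive learned-policy search of earlier methods.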
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
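The batch statistics computed here are also what the main paper's "tuning batch normalization" approach adjusts at test time. A minimal numpy sketch of the training-mode computation (the learned `gamma`/`beta` affine parameters are per-feature):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch Normalization (training mode): normalize each feature over the
    batch to zero mean / unit variance, then apply a learned affine transform."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)  # whitened activations
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0], [3.0, 6.0]])
y = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))
print(y.mean(axis=0))  # ≈ [0, 0]
```

At inference the batch statistics `mu`/`var` are replaced by running averages accumulated during training; a distribution shift between those averages and the test data is one reason OoD inputs hurt BN-equipped networks.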
Is Robustness To Transformations Driven by Invariant Neural Representations?
The results with state-of-the-art DCNNs indicate that invariant representations strengthen as the number of transformed categories in the training set is increased, and are much more prominent with local transformations such as blurring and high-pass filtering, compared to geometric transformations that entail changes in the spatial arrangement of the object.
Deep Residual Learning for Image Recognition
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
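The residual learning idea reduces to a small reformulation: layers learn a residual F(x) that is added back onto an identity shortcut. A toy fully-connected sketch (real ResNet blocks use convolutions and batch normalization):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """ResNet's key idea: the block computes relu(F(x) + x), so the layers
    only need to learn the residual F(x) on top of the identity shortcut."""
    f = relu(x @ w1) @ w2   # residual branch F(x)
    return relu(f + x)      # identity skip connection

# With zero weights the residual branch vanishes and the block passes x through.
x = np.array([[1.0, 2.0, 3.0]])
w = np.zeros((3, 3))
print(residual_block(x, w, w))  # → [[1. 2. 3.]]
```

Because the identity is the default behavior, stacking many such blocks does not degrade the signal, which is what makes very deep networks trainable.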
Do CIFAR-10 Classifiers Generalize to CIFAR-10?
This work measures the accuracy of CIFAR-10 classifiers by creating a new test set of truly unseen images and finds a large drop in accuracy for a broad range of deep learning models.
ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models
A highly automated platform is developed that enables gathering bias-controlled datasets at scale, producing datasets that exercise models in new ways and thus provide valuable feedback to researchers.