Corpus ID: 239016895

MEMO: Test Time Robustness via Adaptation and Augmentation

@article{Zhang2021MEMOTT,
  title={MEMO: Test Time Robustness via Adaptation and Augmentation},
  author={Marvin Zhang and Sergey Levine and Chelsea Finn},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.09506}
}
While deep neural networks can attain good accuracy on in-distribution test points, many applications require robustness even in the face of unexpected perturbations in the input, changes in the domain, or other sources of distribution shift. We study the problem of test-time robustification, i.e., using the test input to improve model robustness. Recent prior works have proposed methods for test-time adaptation; however, they each introduce additional assumptions, such as access to multiple…
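The core idea can be sketched in a few lines: given a single test input, generate augmented copies, average the model's predicted class distributions over them, and take a gradient step that minimizes the entropy of that marginal before predicting. The toy below is a minimal sketch, not the paper's implementation: it uses a linear softmax classifier, Gaussian-noise copies standing in for the paper's image augmentations, and a manually derived gradient; all names and constants are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

def memo_step(W, x, n_aug=16, lr=0.05, rng=None):
    """One marginal-entropy-minimization step on a linear softmax
    classifier, using Gaussian-noise copies of x as stand-in
    augmentations (the paper uses image augmentations instead)."""
    rng = rng or np.random.default_rng(0)
    xs = [x + 0.1 * rng.standard_normal(x.shape) for _ in range(n_aug)]
    ps = [softmax(W @ xi) for xi in xs]
    p_bar = np.mean(ps, axis=0)                  # marginal prediction
    v = -(np.log(p_bar + 1e-12) + 1.0) / n_aug   # dH/dp_i (upstream grad)
    dW = np.zeros_like(W)
    for xi, p in zip(xs, ps):
        g_z = p * (v - v @ p)                    # softmax Jacobian-vector product
        dW += np.outer(g_z, xi)
    return W - lr * dW, entropy(p_bar)

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))   # 3 classes, 4 features
x = rng.standard_normal(4)        # a single test point
W1, h_before = memo_step(W, x, rng=np.random.default_rng(1))
_, h_after = memo_step(W1, x, rng=np.random.default_rng(1))
```

Evaluating on the same augmentation set before and after the update shows the entropy of the marginal prediction decreasing, which is the signal the adapted model then predicts with.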

Citations

Efficient Test-Time Model Adaptation without Forgetting
TLDR
An active sample selection criterion is proposed to identify reliable and non-redundant samples, on which the model is updated to minimize the entropy loss for test-time adaptation, and a Fisher regularizer is introduced to prevent drastic changes in important model parameters.
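The reliability part of that criterion can be illustrated with a small sketch: keep only test samples whose prediction entropy falls below a margin proportional to ln C, the maximum possible entropy over C classes. The fraction used here is illustrative, not the paper's exact setting.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def select_reliable(logits, frac=0.4):
    """Keep samples whose prediction entropy is below frac * ln(C),
    so only confident predictions drive test-time adaptation."""
    p = softmax(logits)
    ent = -np.sum(p * np.log(p + 1e-12), axis=1)
    threshold = frac * np.log(logits.shape[1])
    return ent < threshold, ent

rng = np.random.default_rng(0)
logits = rng.standard_normal((8, 10)) * 3.0   # 8 samples, 10 classes
mask, ent = select_reliable(logits)
# the entropy loss would then be minimized only on logits[mask]
```

Filtering before adaptation avoids updating on high-entropy (ambiguous) samples whose gradients are noisy.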
SITA: Single Image Test-time Adaptation
TLDR
A novel approach, AugBN, is proposed for the SITA setting; it requires only forward propagation and achieves significant performance gains over directly applying the source model to the target instances, as reflected in extensive experiments and ablation studies.
Re-using Adversarial Mask Discriminators for Test-time Training under Distribution Shifts
TLDR
It is argued that training stable discriminators produces expressive loss functions that can be re-used at inference to detect and correct segmentation mistakes, opening new research avenues for re-using adversarial discriminators at test time.
Domain Generalization: A Survey
TLDR
For the first time, a comprehensive literature review in domain generalization (DG) is provided to summarize the developments over the past decade, with a thorough review of existing methods and theories.
TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision
Nowadays, deep neural networks outperform humans in many tasks. However, if the input distribution drifts away from the one used in training, their performance drops significantly. Recently…
Improving Robustness against Real-World and Worst-Case Distribution Shifts through Decision Region Quantification
The reliability of neural networks is essential for their use in safety-critical applications. Existing approaches generally aim at improving the robustness of neural networks to either real-world…
Continual Test-Time Domain Adaptation
TLDR
A continual test-time adaptation approach (CoTTA) is proposed, which stochastically restores a small part of the neurons to the source pre-trained weights during each iteration to help preserve source knowledge in the long term; its effectiveness is demonstrated on four classification tasks and a segmentation task.
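The stochastic restoration step admits a very short sketch: sample a random mask over the parameters and reset the masked entries to their source values, leaving the rest adapted. The restore probability below is illustrative.

```python
import numpy as np

def stochastic_restore(current, source, p=0.01, rng=None):
    """Randomly reset a fraction p of parameters to their source
    values, limiting drift during long-term test-time adaptation."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(current.shape) < p
    return np.where(mask, source, current), mask

rng = np.random.default_rng(42)
w_source = rng.standard_normal((4, 4))
w_adapted = w_source + rng.standard_normal((4, 4))  # drifted by adaptation
w_new, mask = stochastic_restore(w_adapted, w_source, p=0.25, rng=rng)
```

Applied every iteration, this keeps a trickle of source knowledge flowing back into the model, which counters error accumulation and catastrophic forgetting under a continually changing test distribution.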

References

Showing 1-10 of 54 references
Adaptive Risk Minimization: Learning to Adapt to Domain Shift
TLDR
This work considers the problem setting of domain generalization, and introduces the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization
TLDR
A new algorithm for domain generalization (DG), test-time template adjuster (T3A), is proposed to robustify a model against unknown distribution shift; it stably improves performance on unseen domains across choices of backbone networks and outperforms existing domain generalization methods.
Revisiting Batch Normalization For Practical Domain Adaptation
TLDR
This paper proposes a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to increase the generalization ability of a DNN, and demonstrates that the method is complementary with other existing methods and may further improve model performance.
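The remedy is easy to sketch: at test time, replace the batch-norm statistics estimated on the source domain with statistics computed on the target domain, leaving all learned weights untouched. A minimal NumPy illustration (not the paper's implementation), with the domain shift simulated as a mean/scale change:

```python
import numpy as np

def batch_norm(x, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize features x of shape (N, C) with given channel stats."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
source = rng.standard_normal((256, 8))               # source-domain features
target = 2.0 + 3.0 * rng.standard_normal((256, 8))   # shifted target domain

# Standard BN at test time: reuse source statistics -> mismatched output.
src_mean, src_var = source.mean(0), source.var(0)
out_src_stats = batch_norm(target, src_mean, src_var)

# AdaBN: recompute the statistics on the target domain instead.
tgt_mean, tgt_var = target.mean(0), target.var(0)
out_adabn = batch_norm(target, tgt_mean, tgt_var)
```

With target statistics, the normalized target features are again approximately zero-mean and unit-variance per channel, which is the input distribution the downstream layers were trained on.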
AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
TLDR
AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half.
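The mixing scheme itself is compact: sample several short chains of augmentation operations, combine their outputs with Dirichlet weights, then blend the result with the original image using a Beta-distributed coefficient. The sketch below substitutes a few cheap NumPy ops for the paper's Pillow-based operation set; the width, depth, and α values are illustrative.

```python
import numpy as np

# A few cheap stand-in ops on a float image in [0, 1]; the paper uses
# a richer set (posterize, rotate, shear, ...) instead.
OPS = [
    lambda im: 1.0 - im,                      # invert
    lambda im: np.clip(im * 1.3, 0.0, 1.0),   # brighten
    lambda im: np.roll(im, 2, axis=0),        # translate
]

def augmix(image, width=3, depth=2, alpha=1.0, rng=None):
    """Mix `width` random augmentation chains with Dirichlet weights,
    then blend with the original image (AugMix-style skeleton)."""
    rng = rng or np.random.default_rng(0)
    ws = rng.dirichlet([alpha] * width)
    mix = np.zeros_like(image)
    for w in ws:
        chain = image
        for _ in range(depth):
            chain = OPS[rng.integers(len(OPS))](chain)
        mix += w * chain
    m = rng.beta(alpha, alpha)
    return m * image + (1.0 - m) * mix

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
out = augmix(img, rng=rng)
```

Because every step is a convex combination of valid images, the output stays in range and close to the data manifold, which is what lets the full method pair it with a consistency loss across mixed views.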
TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?
TLDR
A test-time feature alignment strategy is introduced, utilizing offline feature summarization and online moment matching to regularize adaptation without revisiting training data; the results indicate that storing and exploiting extra information, in addition to model parameters, can be a promising direction towards robust test-time adaptation.
Be Like Water: Robustness to Extraneous Variables Via Adaptive Feature Normalization
TLDR
It is demonstrated that estimating the feature statistics adaptively during inference, as in instance normalization, addresses this issue, producing normalized features that are more robust to changes in the extraneous variables.
A Fourier Perspective on Model Robustness in Computer Vision
TLDR
AutoAugment, a recently proposed data augmentation policy optimized for clean accuracy, achieves state-of-the-art robustness on the CIFAR-10-C benchmark and is observed to use a more diverse set of augmentations than previous methods.
Aggregated Residual Transformations for Deep Neural Networks
TLDR
On the ImageNet-1K dataset, it is empirically shown that, even under the restricted condition of maintaining complexity, increasing cardinality improves classification accuracy and is more effective than going deeper or wider when capacity is increased.
Towards Robust Vision Transformer
TLDR
This work proposes Robust Vision Transformer (RVT), a new vision transformer with superior performance and strong robustness, and introduces two new plug-and-play techniques, position-aware attention scaling and patch-wise augmentation, to augment RVT, abbreviated as RVT∗.
In Search of Lost Domain Generalization
TLDR
This paper implements DomainBed, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria, and finds that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets.