Parameter-free Online Test-time Adaptation

@inproceedings{Boudiaf2022ParameterfreeOT,
  title={Parameter-free Online Test-time Adaptation},
  author={Malik Boudiaf and Romain Mueller and Ismail Ben Ayed and Luca Bertinetto},
  booktitle={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022},
  pages={8334--8343}
}
Training state-of-the-art vision models has become prohibitively expensive for researchers and practitioners. For the sake of accessibility and resource reuse, it is important to focus on adapting these models to a variety of downstream scenarios. An interesting and practical paradigm is online test-time adaptation, according to which training data is inaccessible, no labelled data from the test distribution is available, and adaptation can only happen at test time and on a handful of samples…
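
To make the setting concrete, here is a minimal sketch (ours, not the paper's method) of the online test-time adaptation protocol: the model arrives pretrained, receives unlabelled test batches one at a time, may adapt on each batch via some hypothetical adapt_fn, and must predict immediately.

import torch

def online_tta_stream(model, test_stream, adapt_fn=None):
    # Online TTA protocol: training data is inaccessible, test labels are
    # unavailable, and each unlabelled batch is seen only once.
    preds = []
    for batch in test_stream:
        if adapt_fn is not None:
            adapt_fn(model, batch)      # adapt using only the current batch
        with torch.no_grad():
            logits = model(batch)       # predict on the fly
        preds.append(logits.argmax(dim=1))
    return torch.cat(preds)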

Robust Continual Test-time Adaptation: Instance-aware BN and Prediction-balanced Memory

TLDR
This work presents a new test-time adaptation scheme that is robust against non-i.i.d. test data streams, and demonstrates that the proposed robust TTA not only outperforms state-of-the-art TTA algorithms in the non-i.i.d. setting, but also achieves comparable performance to those algorithms under the i.i.d. assumption.
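
The prediction-balanced memory idea can be illustrated with a short, hypothetical sketch (the class and method names are ours, and the design is simplified relative to the paper): a fixed-capacity buffer is split evenly across predicted classes, so a temporally correlated stream cannot flood the memory with a single class.

import random
from collections import defaultdict

class PredictionBalancedMemory:
    def __init__(self, capacity: int, num_classes: int):
        # Each predicted class gets an equal share of the buffer.
        self.per_class = max(1, capacity // num_classes)
        self.buffers = defaultdict(list)

    def add(self, sample, pred_class: int):
        buf = self.buffers[pred_class]
        if len(buf) < self.per_class:
            buf.append(sample)
        else:
            # Simplified reservoir-style replacement within the class slot.
            buf[random.randrange(self.per_class)] = sample

    def all_samples(self):
        return [s for buf in self.buffers.values() for s in buf]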

References


Self-supervised Test-time Adaptation on Video Data

TLDR
This paper explores whether the recent progress in test-time adaptation in the image domain and self-supervised learning can be leveraged to adapt a model to previously unseen and unlabelled videos presenting both mild (but arbitrary) and severe covariate shifts.

A Survey of Unsupervised Deep Domain Adaptation

TLDR
This survey compares single-source and typically homogeneous unsupervised deep domain adaptation approaches, which combine the powerful, hierarchical representations from deep learning with domain adaptation to reduce reliance on potentially costly target data labels.

Towards Inheritable Models for Open-Set Domain Adaptation

TLDR
This work formalizes knowledge inheritability as a novel concept and proposes a simple yet effective solution to realize inheritable models suitable for the above practical DA paradigm.

Universal Source-Free Domain Adaptation

TLDR
A novel two-stage learning process is proposed that utilizes an instance-level weighting mechanism, named the Source Similarity Metric (SSM), and achieves superior DA performance even over state-of-the-art source-dependent approaches.

Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift

TLDR
It is shown that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness, and that combining the two further improves performance; however, it has mixed results when used alongside pre-training and does not seem to perform as well under more natural types of dataset shift.
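
In PyTorch, prediction-time batch normalization can be approximated by switching BatchNorm layers to normalize with statistics of the current test batch rather than the running statistics collected during training. A minimal sketch (the helper name is ours):

import torch.nn as nn

def enable_prediction_time_bn(model: nn.Module) -> nn.Module:
    model.eval()  # keep dropout etc. in inference mode
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.train()                      # normalize with batch statistics
            m.track_running_stats = False  # and stop updating the buffers
    return model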

Model Adaptation: Unsupervised Domain Adaptation Without Source Data

TLDR
This paper proposes a new framework, referred to as a collaborative class-conditional generative adversarial net, to bypass the dependence on source data; it achieves superior performance on multiple adaptation tasks with only unlabelled target data, verifying its effectiveness in this challenging setting.

Tent: Fully Test-Time Adaptation by Entropy Minimization

TLDR
Tent optimizes the model for confidence, as measured by the entropy of its predictions; it reduces generalization error for image classification on corrupted ImageNet and CIFAR-10/100 and reaches a new state-of-the-art error on ImageNet-C.
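
Tent's core update is compact enough to sketch: minimize the Shannon entropy of the model's own predictions on each test batch, updating only the affine (scale/shift) parameters of the normalization layers. The function names below are ours, not the official implementation:

import torch
import torch.nn as nn

def entropy(logits: torch.Tensor) -> torch.Tensor:
    # Mean Shannon entropy of the softmax predictions.
    log_probs = logits.log_softmax(dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1).mean()

def bn_affine_params(model: nn.Module):
    # Tent adapts only the BatchNorm scale/shift; everything else is frozen.
    params = []
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            params += [m.weight, m.bias]
    return params

def tent_step(model, batch, optimizer):
    # One online step: forward, minimize prediction entropy, update BN affines.
    logits = model(batch)
    loss = entropy(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()

A typical setup would pass torch.optim.SGD(bn_affine_params(model), lr=1e-3) as the optimizer and call tent_step once per incoming test batch.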

Unsupervised Domain Adaptation by Backpropagation

TLDR
The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of large domain shifts and outperforming the previous state of the art on Office datasets.
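
The central mechanism of this method is the gradient reversal layer: an identity map in the forward pass whose gradient is negated (and scaled by a coefficient lambda) in the backward pass, so the feature extractor learns representations that confuse a domain classifier. A minimal PyTorch sketch:

import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float):
        # Identity in the forward pass; remember the scaling coefficient.
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient sign on the way back; no gradient for lambd.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd: float = 1.0):
    return GradReverse.apply(x, lambd)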

In Search of Lost Domain Generalization

TLDR
This paper implements DomainBed, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria, and finds that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets.
...