Corpus ID: 184487540

SALT: Subspace Alignment as an Auxiliary Learning Task for Domain Adaptation

@article{Thopalli2019SALTSA,
  title={SALT: Subspace Alignment as an Auxiliary Learning Task for Domain Adaptation},
  author={Kowshik Thopalli and Jayaraman J. Thiagarajan and Rushil Anirudh and Pavan K. Turaga},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.04338}
}
Unsupervised domain adaptation aims to transfer and adapt knowledge learned from a labeled source domain to an unlabeled target domain. Key components of unsupervised domain adaptation include: (a) maximizing performance on the target, and (b) aligning the source and target domains. Traditionally, these tasks have either been considered separately, or assumed to be addressed implicitly together by high-capacity feature extractors. When considered separately, alignment is usually viewed as a…
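For readers unfamiliar with the alignment step the title refers to, the classical linear subspace-alignment computation (Fernando et al., 2013) that SALT builds on can be sketched in a few lines of numpy. This is a minimal sketch of the generic technique, not SALT's auxiliary-task training loop; the function name and the choice of PCA bases are illustrative assumptions.

```python
import numpy as np

def subspace_alignment(Xs, Xt, d):
    """Align the d-dimensional PCA subspace of the source to the target's.

    Xs: (ns, D) source features; Xt: (nt, D) target features.
    Returns source data mapped into the target subspace and the
    target data projected onto its own subspace.
    """
    # Orthonormal bases: top-d right singular vectors of the centered data.
    Us = np.linalg.svd(Xs - Xs.mean(0), full_matrices=False)[2][:d].T  # (D, d)
    Ut = np.linalg.svd(Xt - Xt.mean(0), full_matrices=False)[2][:d].T  # (D, d)

    M = Us.T @ Ut              # closed-form minimizer of ||Us @ M - Ut||_F
    return Xs @ Us @ M, Xt @ Ut
```

A classifier trained on the aligned source features can then be applied directly to the projected target features.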
Citations

Meta Auxiliary Learning for Facial Action Unit Detection
Yong Li, S. Shan · Computer Science · ArXiv · 2021
TLDR: A Meta Auxiliary Learning method (MAL) is proposed that automatically selects highly related facial-expression (FE) samples by learning adaptive weights for the training FE samples in a meta-learning manner, consistently improving AU detection performance compared with state-of-the-art multi-task and auxiliary learning methods.

References

SHOWING 1-10 OF 51 REFERENCES
Subspace Distribution Alignment for Unsupervised Domain Adaptation
TLDR: Presents a unified view of existing subspace-mapping-based methods and develops a generalized approach that aligns the distributions as well as the subspace bases, showing improved results over published approaches.
A DIRT-T Approach to Unsupervised Domain Adaptation
TLDR: Proposes two novel and related models: Virtual Adversarial Domain Adaptation (VADA), which combines domain-adversarial training with a penalty term that punishes violations of the cluster assumption, and Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T), which takes the VADA model as initialization and employs natural-gradient steps to further minimize cluster-assumption violation.
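The cluster-assumption penalty mentioned in this summary is a conditional-entropy term on the classifier's target predictions. Below is a minimal PyTorch sketch of just that loss term; the full VADA/DIRT-T objective also includes domain-adversarial and virtual-adversarial-training losses, omitted here, and the variable names are illustrative.

```python
import torch.nn.functional as F

def conditional_entropy(logits):
    """Mean prediction entropy on unlabeled target inputs.
    Minimizing it pushes decision boundaries away from dense
    regions of the target feature distribution."""
    p = F.softmax(logits, dim=1)
    log_p = F.log_softmax(logits, dim=1)
    return -(p * log_p).sum(dim=1).mean()

# Inside a training step, assuming target_logits = model(x_target):
# loss = task_loss + lambda_t * conditional_entropy(target_logits)
```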
Domain adaptation for object recognition: An unsupervised approach
TLDR: Presents one of the first studies of unsupervised domain adaptation in the context of object recognition, where data are labeled only in the source domain (so there are no correspondences between object categories across domains).
DeepJDOT: Deep Joint distribution optimal transport for unsupervised domain adaptation
TLDR: Through a measure of discrepancy on joint deep representations/labels based on optimal transport, this work not only learns new data representations aligned between the source and target domains, but also simultaneously preserves the discriminative information used by the classifier.
Optimal Transport for Domain Adaptation
TLDR: A regularized unsupervised optimal-transport model to align the representations of the source and target domains; it consistently outperforms state-of-the-art approaches and can easily be adapted to the semi-supervised case where a few labeled samples are available in the target domain.
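The core computation, entropically regularized optimal transport between source and target samples followed by a barycentric mapping, can be sketched with the POT library. This shows the basic regularized-OT alignment only, not the paper's class-label-regularized variant; the uniform sample weights and the regularization value are assumptions.

```python
import ot  # POT: Python Optimal Transport

def ot_align(Xs, Xt, reg=1e-1):
    """Map source samples onto the target domain via a regularized
    optimal-transport plan and its barycentric mapping."""
    a = ot.unif(len(Xs))            # uniform weights on source samples
    b = ot.unif(len(Xt))            # uniform weights on target samples
    M = ot.dist(Xs, Xt)             # squared-Euclidean cost matrix
    G = ot.sinkhorn(a, b, M, reg)   # (ns, nt) transport plan
    # Each source point moves to the transport-weighted average of
    # the target points it is coupled with.
    return (G / G.sum(axis=1, keepdims=True)) @ Xt
```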
Correlation Alignment for Unsupervised Domain Adaptation
TLDR: Describes a solution that applies a linear transformation to source features to align them with target features before classifier training, and proposes to equivalently apply CORAL to the classifier weights, adding efficiency when the number of classifiers is small but the number and dimensionality of target examples are very high.
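The linear transformation in question matches second-order statistics: whiten the source features, then re-color them with the target covariance. A minimal numpy/scipy sketch of this CORAL step follows; the regularizer eps and the function name are illustrative.

```python
import numpy as np
from scipy.linalg import inv, sqrtm

def coral(Xs, Xt, eps=1.0):
    """Whiten source features, then re-color them with the target
    covariance, so the two domains share second-order statistics."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    Xs_white = Xs @ inv(sqrtm(Cs))          # whitening
    return np.real(Xs_white @ sqrtm(Ct))    # re-coloring
```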
Unsupervised Domain Adaptation by Backpropagation
TLDR: The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of large domain shifts and outperforming the previous state of the art on the Office datasets.
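The mechanism behind this result is the gradient reversal layer: an identity map on the forward pass whose gradient is negated on the backward pass, so the feature extractor learns to fool a domain classifier. A standard PyTorch sketch:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; gradient multiplied by -lambda
    on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage: domain_logits = domain_classifier(grad_reverse(features))
```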
Unsupervised domain adaptation using parallel transport on Grassmann manifold
TLDR: Develops a novel framework based on parallel transport of the union of source subspaces on the Grassmann manifold, which allows for multiple domain shifts between the source and target domains.
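Parallel transport itself takes more machinery, but the underlying Grassmann geometry rests on principal angles between subspaces, which are straightforward to compute. A hedged numpy sketch of that primitive, not the paper's algorithm:

```python
import numpy as np

def principal_angles(U1, U2):
    """Principal angles between the subspaces spanned by the
    orthonormal columns of U1 and U2 (both D x d)."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

def grassmann_distance(U1, U2):
    """Geodesic (arc-length) distance on the Grassmann manifold:
    the 2-norm of the principal-angle vector."""
    return np.linalg.norm(principal_angles(U1, U2))
```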
Domain Adaptation via Transfer Component Analysis
TLDR: Proposes a novel dimensionality-reduction framework for reducing the distance between domains in a latent space for domain adaptation, with both unsupervised and semi-supervised feature-extraction approaches that can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components.
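The distance TCA minimizes is the maximum mean discrepancy (MMD) between projected domains. Its simplest empirical form, with a linear kernel, reduces to the squared distance between feature means; a tiny numpy sketch (the full method uses an RKHS kernel plus regularization):

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """Empirical MMD with a linear kernel: squared Euclidean distance
    between the source and target feature means."""
    delta = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(delta @ delta)
```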
Multiple Subspace Alignment Improves Domain Adaptation
TLDR: Presents a novel unsupervised domain adaptation method that represents the source and target datasets via a collection of low-dimensional subspaces, and subsequently aligns them by exploiting the natural geometry of the space of subspaces, the Grassmann manifold.