Residual Parameter Transfer for Deep Domain Adaptation

@inproceedings{Rozantsev2018ResidualPT,
  title={Residual Parameter Transfer for Deep Domain Adaptation},
  author={Artem Rozantsev and Mathieu Salzmann and Pascal V. Fua},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018},
  pages={4339-4348}
}
The goal of Deep Domain Adaptation is to make it possible to use Deep Nets trained in one domain, where there is enough annotated training data, in another where there is little or none. Most current approaches have focused on learning feature representations that are invariant to the changes that occur when going from one domain to the other, which means using the same network parameters in both domains. While some recent algorithms explicitly model the changes by adapting the network parameters… 

A Survey of Unsupervised Deep Domain Adaptation

TLDR
This survey compares single-source and typically homogeneous unsupervised deep domain adaptation approaches, which combine the powerful, hierarchical representations of deep learning with domain adaptation to reduce reliance on potentially costly target-data labels.

A Dictionary Approach to Domain-Invariant Learning in Deep Networks

TLDR
This paper shows for the first time, both empirically and theoretically, that domain shifts can be effectively handled by decomposing a convolutional layer into a domain-specific atom layer and a domain-shared coefficient layer, while both remain convolutional.

Deep Domain Adaptation for Regression

TLDR
This chapter proposes DeepDAR, a novel framework for domain adaptation in regression applications, which uses deep convolutional neural networks (CNNs), formulates a loss function relevant to the research task, and exploits gradient descent to optimize the loss and train the deep CNN.

Learning to adapt class-specific features across domains for semantic segmentation

TLDR
This thesis designs a conditional pixel-wise discriminator network whose output is conditioned on the segmentation masks, and adopts the recently introduced StarGAN architecture as the image-translation backbone, since it can perform translations across multiple domains with a single generator network.

Deep Visual Domain Adaptation

  • G. Csurka
  • 2020 22nd International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)
  • 2020
TLDR
This paper gives a comprehensive overview of deep domain adaptation methods for computer vision applications and gives pointers to papers that extend these ideas to other applications such as semantic segmentation, object detection, person re-identification, and others.

Deep Adversarial Attention Alignment for Unsupervised Domain Adaptation: the Benefit of Target Expectation Maximization

TLDR
This paper proposes an attention alignment scheme on all the target convolutional layers to uncover the knowledge shared by the source domain, and estimates the posterior label distribution of the unlabeled data for target network training.

Domain-invariant Learning using Adaptive Filter Decomposition

TLDR
This paper shows for the first time, both empirically and theoretically, that domain shifts can be effectively handled by decomposing a regular convolutional layer into a domain-specific basis layer and a domain-shared basis-coefficient layer, while both remain convolutional.

A New Method of Image Classification Based on Domain Adaptation

TLDR
A deep fuzzy domain adaptation (DFDA) method is proposed that assigns different weights to samples of the same category in the source and target domains, enhancing its domain adaptation capability.

Unsupervised Visual Domain Adaptation: A Deep Max-Margin Gaussian Process Approach

TLDR
A more systematic and effective way to achieve hypothesis consistency using Gaussian processes (GP), which makes it possible to induce a hypothesis space of classifiers from the posterior distribution of the latent random functions, turning learning into a large-margin posterior-separation problem that is significantly easier to solve than previous approaches based on adversarial minimax optimization.

References

Showing 1-10 of 32 references

Beyond Sharing Weights for Deep Domain Adaptation

TLDR
This work introduces a two-stream architecture, where one stream operates in the source domain and the other in the target domain, and demonstrates that this both yields higher accuracy than state-of-the-art methods on several object recognition and detection tasks and consistently outperforms networks with shared weights in both supervised and unsupervised settings.

Learning Transferrable Representations for Unsupervised Domain Adaptation

TLDR
A unified deep learning framework for unsupervised domain adaptation is proposed, in which the representation, cross-domain transformation, and target label inference are all jointly optimized end to end.

Unsupervised Domain Adaptation with Residual Transfer Networks

TLDR
Empirical evidence shows that this new approach to domain adaptation in deep networks, which can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain, outperforms state-of-the-art methods on standard domain adaptation benchmarks.

Learning Transferable Features with Deep Adaptation Networks

TLDR
A new Deep Adaptation Network (DAN) architecture is proposed, which generalizes deep convolutional neural networks to the domain adaptation scenario, can learn transferable features with statistical guarantees, and scales linearly via an unbiased estimate of the kernel embedding.
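The statistical machinery behind DAN (and the JMMD criterion of JAN below) is the maximum mean discrepancy between source and target feature distributions. As a rough illustration only, here is a biased single-RBF-kernel squared-MMD estimate in NumPy; DAN itself uses a multi-kernel variant with an unbiased linear-time estimator, and the names `rbf_mmd2` and `gamma` are ours:

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X (n, d) and Y (m, d)
    under the RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        # Pairwise squared distances via broadcasting, then the RBF kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

# Identical samples give zero discrepancy; a shifted copy gives a positive one.
X = np.zeros((8, 3))
Y = np.ones((8, 3))
print(rbf_mmd2(X, X), rbf_mmd2(X, Y))
```

Minimizing such a term over the activations of one or more layers pulls the two feature distributions together during training.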

Domain-Adversarial Training of Neural Networks

TLDR
A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions; this can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
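The gradient reversal layer mentioned in this TLDR is simple enough to sketch: it is the identity in the forward pass and flips (and scales) the gradient in the backward pass, so the feature extractor is trained to confuse the domain classifier. A minimal NumPy illustration, with our own function names and a hand-rolled forward/backward split rather than the paper's framework integration:

```python
import numpy as np

def grl_forward(x):
    """Gradient reversal layer, forward pass: identity."""
    return x

def grl_backward(grad_output, lam=1.0):
    """Backward pass: multiply the incoming gradient by -lambda, so layers
    below the GRL ascend the domain-classifier loss instead of descending it."""
    return -lam * grad_output

x = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -0.5, 1.0])
print(grl_forward(x))            # unchanged features
print(grl_backward(g, lam=2.0))  # reversed, scaled gradient
```

In an autodiff framework the same effect is usually obtained by registering a custom backward rule for an identity op, with lambda annealed over training.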

Associative Domain Adaptation

We propose associative domain adaptation, a novel technique for end-to-end domain adaptation with neural networks, the task of inferring class labels for an unlabeled target domain based on the…

Deep Transfer Learning with Joint Adaptation Networks

TLDR
JAN is presented, which learns a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion.

Deep Domain Confusion: Maximizing for Domain Invariance

TLDR
This work proposes a new CNN architecture that introduces an adaptation layer and an additional domain confusion loss to learn a representation that is both semantically meaningful and domain invariant, and shows that a domain confusion metric can be used for model selection to determine the dimension of the adaptation layer and its best position in the CNN architecture.

Deep CORAL: Correlation Alignment for Deep Domain Adaptation

TLDR
This paper extends CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (Deep CORAL), and shows state-of-the-art performance on standard benchmark datasets.
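The CORAL objective itself is compact: the squared Frobenius distance between the second-order statistics (feature covariances) of source and target activations, scaled by 1/(4d²). A minimal NumPy sketch, using `np.cov` for the covariance estimate rather than the paper's exact normalization, so treat it as illustrative:

```python
import numpy as np

def coral_loss(Ds, Dt):
    """CORAL loss between source features Ds (n, d) and target features Dt (m, d):
    squared Frobenius distance of their covariance matrices, scaled by 1/(4 d^2)."""
    d = Ds.shape[1]
    Cs = np.cov(Ds, rowvar=False)  # (d, d) source covariance
    Ct = np.cov(Dt, rowvar=False)  # (d, d) target covariance
    return ((Cs - Ct) ** 2).sum() / (4.0 * d * d)

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 4))
print(coral_loss(X, X))        # identical statistics: loss is zero
print(coral_loss(X, 2.0 * X))  # rescaled features: positive loss
```

Deep CORAL adds this term, weighted against the classification loss, on the activations of one or more layers so that both statistics are aligned end to end.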

Simultaneous Deep Transfer Across Domains and Tasks

TLDR
This work proposes a new CNN architecture that exploits unlabeled and sparsely labeled target-domain data; it simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks.