Publications
Learning Transferable Features with Deep Adaptation Networks
TLDR: We propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural networks to the domain adaptation scenario.
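DAN aligns source and target representations by minimizing a maximum mean discrepancy (MMD) between domains. As an illustration only — the paper uses a multi-kernel variant inside a deep network, not this standalone computation — here is a minimal single-Gaussian-kernel MMD estimate in NumPy:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of a and b.
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2 * a @ b.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2(source, target, sigma=1.0):
    # Biased squared-MMD estimate: E[k(s,s')] + E[k(t,t')] - 2 E[k(s,t)].
    k_ss = gaussian_kernel(source, source, sigma)
    k_tt = gaussian_kernel(target, target, sigma)
    k_st = gaussian_kernel(source, target, sigma)
    return k_ss.mean() + k_tt.mean() - 2 * k_st.mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 2))
tgt_near = rng.normal(0.0, 1.0, size=(200, 2))  # same distribution as src
tgt_far = rng.normal(2.0, 1.0, size=(200, 2))   # shifted distribution
print(mmd2(src, tgt_near))  # small: distributions match
print(mmd2(src, tgt_far))   # clearly larger: distributions differ
```

Minimizing such a discrepancy on hidden-layer features is what pulls the two domains' representations together during training.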
Transfer Feature Learning with Joint Distribution Adaptation
TLDR: We put forward a novel transfer learning solution, referred to as Joint Distribution Adaptation (JDA), which jointly adapts both the marginal and conditional distributions in a principled dimensionality reduction procedure and constructs a new feature representation that is effective and robust under substantial distribution differences.
Deep Transfer Learning with Joint Adaptation Networks
TLDR: We present Joint Adaptation Networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion.
Unsupervised Domain Adaptation with Residual Transfer Networks
TLDR: We propose a new approach to domain adaptation in deep networks that jointly learns adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain.
Conditional Adversarial Domain Adaptation
TLDR: We present conditional adversarial domain adaptation, a principled framework that conditions the adversarial adaptation models on discriminative information conveyed in the classifier predictions to guarantee transferability.
Transfer Joint Matching for Unsupervised Domain Adaptation
TLDR: We propose a novel domain adaptation solution, referred to as Transfer Joint Matching (TJM), which jointly performs feature matching and instance reweighting across domains in a principled dimensionality reduction procedure.
Deep Hashing Network for Efficient Similarity Retrieval
TLDR: We propose a novel Deep Hashing Network (DHN) architecture for supervised hashing, in which we jointly learn a good image representation tailored to hash coding and formally control the quantization error.
HashNet: Deep Learning to Hash by Continuation
TLDR: We present HashNet, a novel deep architecture for learning to hash by a continuation method with convergence guarantees, which learns exactly binary hash codes from imbalanced similarity data.
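The continuation idea is to replace the non-differentiable sign(z) used for binarization with a smooth surrogate tanh(β·z) and sharpen it as training proceeds. A toy numeric sketch of that limit (the training schedule itself is omitted):

```python
import numpy as np

def smoothed_hash(z, beta):
    # tanh(beta * z) is a smooth, differentiable surrogate for sign(z);
    # increasing beta sharpens it toward exact binary codes in {-1, +1}.
    return np.tanh(beta * z)

z = np.array([-1.2, -0.05, 0.3, 2.0])
for beta in (1.0, 10.0, 100.0):
    print(beta, smoothed_hash(z, beta))
# As beta grows, the surrogate approaches the binary code sign(z).
```

Training against the smooth surrogate keeps gradients usable, while the increasing β schedule is what drives the outputs to exactly binary codes.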
Adaptation Regularization: A General Framework for Transfer Learning
TLDR: We propose a novel transfer learning framework, referred to as Adaptation Regularization based Transfer Learning (ARTL), which models transfer learning problems in a unified way based on the structural risk minimization principle and regularization theory.
Partial Adversarial Domain Adaptation
TLDR: We present Partial Adversarial Domain Adaptation (PADA), which simultaneously alleviates negative transfer by down-weighting the data of outlier source classes and promotes positive transfer by matching the feature distributions in the shared label space.
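The down-weighting step can be sketched as follows: average the classifier's softmax predictions over target samples to estimate which source classes the target domain actually contains; classes absent from the target get near-zero average probability and are down-weighted. A toy NumPy sketch, assuming max-normalized averaged predictions as the class weights (the data and class layout are hypothetical):

```python
import numpy as np

def source_class_weights(target_predictions):
    # Average target softmax predictions over all target samples; source
    # classes missing from the target domain receive near-zero probability
    # and thus small weight when training on source data.
    w = target_predictions.mean(axis=0)
    return w / w.max()  # normalize so the largest weight is 1

# Toy target predictions over 4 source classes; classes 2 and 3 are
# (hypothetically) outlier classes absent from the target domain.
preds = np.array([
    [0.70, 0.20, 0.05, 0.05],
    [0.60, 0.30, 0.05, 0.05],
    [0.10, 0.80, 0.05, 0.05],
])
print(source_class_weights(preds))  # classes 2 and 3 get small weights
```

Applying these weights to the source classification and adversarial losses is what suppresses the outlier classes' influence on the shared feature space.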