A Survey on Transfer Learning
TLDR
The relationships between transfer learning and other related machine learning techniques, such as domain adaptation, multi-task learning, sample selection bias, and covariate shift, are discussed.
Domain Adaptation via Transfer Component Analysis
TLDR
This work proposes a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation, and proposes both unsupervised and semi-supervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components.
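The distribution distance that Transfer Component Analysis minimizes is the Maximum Mean Discrepancy (MMD) between source and target samples. As a rough illustration (not the paper's implementation), the empirical squared MMD with an RBF kernel can be computed as follows; the function name and the `gamma` parameter are illustrative choices:

```python
import numpy as np

def mmd_rbf(Xs, Xt, gamma=1.0):
    """Empirical squared MMD between source samples Xs and target samples Xt
    (rows are examples) using an RBF kernel. This is the kind of discrepancy
    TCA minimizes after projecting data onto transfer components."""
    def k(A, B):
        # Pairwise squared Euclidean distances, then the RBF kernel matrix.
        d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d)
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2 * k(Xs, Xt).mean()

# Identical samples give MMD ~ 0; well-separated samples give MMD near 2.
Xs = np.zeros((4, 2))
Xt = np.ones((4, 2)) * 3.0
same = mmd_rbf(Xs, Xs)
far = mmd_rbf(Xs, Xt)
```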
Cross-domain sentiment classification via spectral feature alignment
TLDR
This work develops a general solution to sentiment classification when no labels are available in a target domain but some labeled data exist in a different domain, regarded as the source domain. It proposes a spectral feature alignment (SFA) algorithm to align domain-specific words from different domains into unified clusters, using domain-independent words as a bridge.
Domain Generalization with Adversarial Feature Learning
TLDR
This paper presents a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization, along with an algorithm to jointly train the different components of the proposed framework.
Transfer defect learning
TLDR
A state-of-the-art transfer learning approach is applied to make feature distributions in source and target projects similar, and a novel transfer defect learning approach, TCA+, is proposed by extending TCA.
Adaptation Regularization: A General Framework for Transfer Learning
TLDR
A novel transfer learning framework, Adaptation Regularization based Transfer Learning (ARTL), is proposed to model adaptive classifiers in a unified way based on the structural risk minimization principle and regularization theory; it significantly outperforms state-of-the-art learning methods on several public text and image datasets.
Transfer Learning via Dimensionality Reduction
TLDR
A new dimensionality reduction method is proposed to find a latent space that minimizes the distance between the data distributions of different domains; this latent space can be treated as a bridge for transferring knowledge from the source domain to the target domain.
Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis
TLDR
A novel joint model is proposed that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion term co-extraction; the model is flexible enough to incorporate hand-crafted features to further boost its information extraction performance.
Coupled Multi-Layer Attentions for Co-Extraction of Aspect and Opinion Terms
TLDR
A novel deep learning model, named coupled multi-layer attentions, is proposed, in which each layer consists of a pair of attentions with tensor operators that are learned interactively to dually propagate information between aspect terms and opinion terms.
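The building block of such a model is an attention that scores each token against a learned query vector and normalizes the scores into a distribution over the sentence. A minimal sketch of that single step (the function name and query vector `u` are illustrative, not from the paper):

```python
import numpy as np

def attention_weights(H, u):
    """Single attention over token representations H (n_tokens x d) with a
    learned query vector u (d,). Returns a probability distribution over
    tokens; stacking and coupling such attentions lets information flow
    between aspect-term and opinion-term predictions."""
    e = H @ u                       # unnormalized relevance scores
    e = e - e.max()                 # subtract max for numerical stability
    a = np.exp(e) / np.exp(e).sum() # softmax normalization
    return a

# Tokens most aligned with the query receive the largest weight.
H = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 0.0]])
u = np.array([1.0, 0.0])
a = attention_weights(H, u)
```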
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
TLDR
It is proved that the final prediction performance drop after pruning is bounded by a linear combination of the reconstruction errors incurred at each layer, guaranteeing that only a light retraining process on the pruned network is needed to restore its original prediction performance.
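The Optimal Brain Surgeon criterion behind this approach ranks each weight q by the saliency w_q^2 / (2 [H^-1]_qq) and, after pruning it, compensates the remaining weights with the update -w_q / [H^-1]_qq * H^-1[:, q]. A toy single-layer sketch of that loop, assuming the layer Hessian H is already available (the paper operates layer-wise on real networks; this is only an illustration):

```python
import numpy as np

def obs_prune(w, H, n_prune):
    """Prune n_prune weights from vector w using the OBS saliency
    w_q^2 / (2 [H^-1]_qq), applying the compensating update to the
    surviving weights after each removal."""
    Hinv = np.linalg.inv(H)
    w = w.astype(float).copy()
    for _ in range(n_prune):
        alive = np.nonzero(w)[0]
        # Saliency: expected loss increase from zeroing each live weight.
        sal = w[alive] ** 2 / (2 * np.diag(Hinv)[alive])
        q = alive[np.argmin(sal)]
        # Compensating update distributes the removed weight's effect.
        w += -(w[q] / Hinv[q, q]) * Hinv[:, q]
        w[q] = 0.0  # enforce exact zero for the pruned weight
    return w

# With an identity Hessian, OBS reduces to magnitude pruning with no
# compensation needed for the other weights.
w = obs_prune(np.array([3.0, 0.1, 2.0]), np.eye(3), 1)
```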