A Survey on Transfer Learning
The relationships between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, sample selection bias, and covariate shift, are discussed.
Domain Adaptation via Transfer Component Analysis
- Sinno Jialin Pan, I. Tsang, J. Kwok, Qiang Yang
- Computer Science, IEEE Transactions on Neural Networks
- 11 July 2009
This work proposes a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation, and proposes both unsupervised and semi-supervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components.
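With a linear kernel, the transfer component analysis (TCA) idea reduces to a generalized eigenproblem that trades off minimizing the maximum mean discrepancy (MMD) between domains against preserving data variance. The following is a simplified, hypothetical sketch of that formulation (the function name, the linear kernel, and the regularization constant are assumptions, not the authors' released implementation):

```python
import numpy as np

def tca(Xs, Xt, n_components=2, mu=1.0):
    """Simplified linear-kernel TCA sketch: project source and target
    samples onto learned transfer components."""
    X = np.vstack([Xs, Xt])
    ns, nt = len(Xs), len(Xt)
    n = ns + nt
    K = X @ X.T                                   # linear kernel matrix
    # MMD coefficient matrix L: tr(W^T K L K W) is the squared MMD
    e = np.vstack([np.ones((ns, 1)) / ns, -np.ones((nt, 1)) / nt])
    L = e @ e.T
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    # Leading eigenvectors of (K L K + mu I)^{-1} K H K
    A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = np.linalg.eig(A)
    idx = np.argsort(-vals.real)[:n_components]
    W = vecs[:, idx].real
    return K @ W                                  # samples in the latent space
```

Any downstream classifier can then be trained on the projected source rows and applied to the projected target rows, since both now live in the shared latent space.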
Cross-domain sentiment classification via spectral feature alignment
- Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, Zheng Chen
- Computer Science, WWW '10
- 26 April 2010
This work develops a general solution to sentiment classification when no labels are available in a target domain but some labeled data exist in a different domain, regarded as the source domain. It proposes a spectral feature alignment (SFA) algorithm that aligns domain-specific words from different domains into unified clusters, using domain-independent words as a bridge.
Domain Generalization with Adversarial Feature Learning
- Haoliang Li, Sinno Jialin Pan, Shiqi Wang, A. Kot
- Computer Science, IEEE/CVF Conference on Computer Vision and…
- 1 June 2018
This paper presents a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization, and proposes an algorithm to jointly train the different components of the framework.
Transfer defect learning
- Jaechang Nam, Sinno Jialin Pan, Sunghun Kim
- Engineering, Computer Science, 35th International Conference on Software…
- 18 May 2013
A state-of-the-art transfer learning approach is applied to make the feature distributions in source and target projects similar, and a novel transfer defect learning approach, TCA+, is proposed by extending TCA.
Adaptation Regularization: A General Framework for Transfer Learning
- Mingsheng Long, Jianmin Wang, Guiguang Ding, Sinno Jialin Pan, Philip S. Yu
- Computer Science, IEEE Transactions on Knowledge and Data…
- 1 May 2014
A novel transfer learning framework, referred to as Adaptation Regularization based Transfer Learning (ARTL), is proposed to model adaptive classifiers in a unified way based on the structural risk minimization principle and regularization theory, and is shown to significantly outperform state-of-the-art learning methods on several public text and image datasets.
Transfer Learning via Dimensionality Reduction
A new dimensionality reduction method is proposed to find a latent space that minimizes the distance between the data distributions of different domains; this latent space can be treated as a bridge for transferring knowledge from the source domain to the target domain.
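The distance between domain distributions in this line of work is typically measured by the maximum mean discrepancy (MMD). With a linear kernel the squared MMD collapses to the squared distance between the two domains' sample means, which the following minimal sketch illustrates (the function name is an assumption for illustration):

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """Squared MMD with a linear kernel: the squared distance
    between the source and target sample means."""
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))
```

A dimensionality reduction method of this kind seeks a projection under which this quantity (computed in the latent space) is small while the projected data still retain useful structure.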
Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis
A novel joint model is proposed that integrates recursive neural networks and conditional random fields into a unified framework for the co-extraction of explicit aspect and opinion terms, and can flexibly incorporate hand-crafted features to further boost its information extraction performance.
Coupled Multi-Layer Attentions for Co-Extraction of Aspect and Opinion Terms
A novel deep learning model, named coupled multi-layer attentions, is proposed, in which each layer consists of a pair of attentions with tensor operators that are learned interactively to dually propagate information between aspect terms and opinion terms.
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
It is proved that the final prediction performance drop after pruning is bounded by a linear combination of the reconstruction errors incurred at each layer, which guarantees that only a light retraining process on the pruned network is needed to restore its original prediction performance.
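The layer-wise Optimal Brain Surgeon idea can be illustrated on a single linear layer: the weight with the smallest saliency w^2 / (2 [H^{-1}]_{qq}) is pruned, and a closed-form update on the remaining weights compensates for its removal, keeping the layer's reconstruction error small. This is a simplified one-weight sketch under assumed names and a small Hessian regularizer, not the paper's implementation:

```python
import numpy as np

def obs_prune_one(W, X):
    """Prune one weight of a linear layer Y = X @ W using a layer-wise
    Optimal Brain Surgeon criterion (simplified sketch)."""
    d = X.shape[1]
    # Local Hessian of the squared reconstruction error, lightly regularized
    H = X.T @ X / len(X) + 1e-6 * np.eye(d)
    Hinv = np.linalg.inv(H)
    # Saliency of every weight: W[q, j]^2 / (2 * Hinv[q, q])
    sal = W ** 2 / (2.0 * np.diag(Hinv)[:, None])
    q, j = np.unravel_index(np.argmin(sal), W.shape)
    Wp = W.copy()
    # Closed-form compensation spread over the rest of column j
    Wp[:, j] -= (W[q, j] / Hinv[q, q]) * Hinv[:, q]
    Wp[q, j] = 0.0
    return Wp, (q, j)
```

The compensation step is what distinguishes this from plain magnitude pruning: the reconstruction error of the compensated layer is never worse than simply zeroing the same weight.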