• Publications
Domain Adaptation via Transfer Component Analysis
TLDR
This work proposes a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation, and proposes both unsupervised and semi-supervised feature-extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components.
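As a toy illustration of the underlying idea (not TCA itself), the sketch below measures a linear-kernel Maximum Mean Discrepancy between two domains before and after projecting onto a shared direction; the data, the shift, and the projection direction are all fabricated for illustration.

```python
import numpy as np

def mmd_linear(Xs, Xt):
    # Empirical MMD with a linear kernel: distance between the two domain means.
    return float(np.linalg.norm(Xs.mean(axis=0) - Xt.mean(axis=0)))

rng = np.random.default_rng(0)
Xs = rng.normal(size=(300, 2))                       # source-domain samples
Xt = rng.normal(size=(300, 2)) + np.array([3.0, 0])  # target = source shifted along dim 0

w = np.array([[0.0], [1.0]])   # hand-picked direction free of the domain shift
before = mmd_linear(Xs, Xt)
after = mmd_linear(Xs @ w, Xt @ w)
print(before, after)           # projection shrinks the domain gap
```

TCA learns such a projection from data by minimizing MMD subject to variance constraints; here the good direction is simply known in advance.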
Co-teaching: Robust training of deep neural networks with extremely noisy labels
TLDR
Empirical results on noisy versions of MNIST, CIFAR-10 and CIFAR-100 demonstrate that Co-teaching substantially outperforms state-of-the-art methods in the robustness of the trained deep models.
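The core mechanic, in which each network keeps only the small-loss samples selected by its peer, can be sketched in a few lines. The per-sample losses below are fabricated; a real run would use per-sample cross-entropy from two jointly trained networks.

```python
import numpy as np

def small_loss_select(losses, forget_rate):
    # Indices of the (1 - forget_rate) fraction of samples with smallest loss,
    # treated as the likely-clean subset of the mini-batch.
    n_keep = int(len(losses) * (1.0 - forget_rate))
    return np.argsort(losses)[:n_keep]

# Hypothetical per-sample losses from the two peer networks on one mini-batch:
loss_net_a = np.array([0.1, 2.5, 0.2, 3.0, 0.15, 0.3])
loss_net_b = np.array([0.2, 2.8, 0.1, 0.25, 2.9, 0.35])

# Each network is trained on the small-loss samples chosen by its PEER:
batch_for_a = small_loss_select(loss_net_b, forget_rate=0.3)
batch_for_b = small_loss_select(loss_net_a, forget_rate=0.3)
print(sorted(batch_for_a.tolist()), sorted(batch_for_b.tolist()))
```

The cross-exchange is the point: because the two networks make different errors, a noisy sample memorized by one network is less likely to survive the other network's small-loss filter.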
Core Vector Machines: Fast SVM Training on Very Large Data Sets
TLDR
This paper shows that many kernel methods can be equivalently formulated as minimum enclosing ball (MEB) problems in computational geometry, obtains provably approximately optimal solutions using the idea of core sets, and proposes the Core Vector Machine (CVM) algorithm, which can be used with nonlinear kernels and has a time complexity that is linear in the number of training examples m.
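The MEB connection can be illustrated with the classic Badoiu-Clarkson core-set iteration, shown below as a simplified sketch in the plane; this is the generic approximation scheme, not the CVM solver itself, and the point set is made up.

```python
import numpy as np

def meb_badoiu_clarkson(X, iters=2000):
    # Approximate minimum enclosing ball: repeatedly step the tentative
    # center toward the current farthest point, with a shrinking step size.
    c = X[0].astype(float).copy()
    for t in range(1, iters + 1):
        far = X[np.argmax(np.linalg.norm(X - c, axis=1))]  # farthest point
        c += (far - c) / (t + 1)                           # step size 1/(t+1)
    r = np.linalg.norm(X - c, axis=1).max()                # enclosing radius
    return c, r

# Triangle whose exact MEB is centered at (1, 0) with radius 1:
pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0]])
center, radius = meb_badoiu_clarkson(pts)
print(center, radius)
```

The farthest points touched along the way form a small core set whose size depends only on the approximation tolerance, not on the number of points; that independence is what makes the reduction attractive for very large training sets.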
The pre-image problem in kernel methods
  • J. Kwok, I. Tsang
  • Mathematics, Computer Science
  • IEEE Transactions on Neural Networks
  • 21 August 2003
TLDR
This paper addresses the problem of finding the pre-image of a feature vector in the feature space induced by a kernel, and proposes a new method which directly finds the location of the pre-image based on distance constraints in the feature space.
Local features are not lonely – Laplacian sparse coding for image classification
TLDR
This paper proposes a histogram-intersection-based kNN method to construct a Laplacian matrix, which can well characterize the similarity of local features, and incorporates it into the objective function of sparse coding to preserve the consistency of the sparse representations of similar local features.
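A minimal sketch of the graph-construction step just described, with fabricated histograms: pairwise similarity is the histogram intersection kernel, the weight matrix keeps each feature's k nearest neighbors, and the unnormalized Laplacian is L = D - W.

```python
import numpy as np

def laplacian_from_histograms(H, k=2):
    # Unnormalized graph Laplacian over local-feature histograms, with edge
    # weights from the histogram intersection kernel and a kNN sparsifier.
    n = len(H)
    S = np.array([[np.minimum(H[i], H[j]).sum() for j in range(n)]
                  for i in range(n)])
    np.fill_diagonal(S, 0.0)                   # no self-loops
    W = np.zeros_like(S)
    for i in range(n):
        nn = np.argsort(S[i])[-k:]             # k most similar features
        W[i, nn] = S[i, nn]
    W = np.maximum(W, W.T)                     # symmetrize the kNN graph
    return np.diag(W.sum(axis=1)) - W          # L = D - W

H = np.array([[0.5, 0.5, 0.0],
              [0.4, 0.6, 0.0],
              [0.0, 0.1, 0.9]])                # three toy histograms
L = laplacian_from_histograms(H, k=1)
print(L)
```

Adding a trace penalty built from this L to the sparse-coding objective pulls the codes of similar histograms together, which is the consistency the summary refers to.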
Visual event recognition in videos by learning from web data
TLDR
This work proposes a new aligned space-time pyramid matching method to measure the distance between two video clips, and a cross-domain learning method that learns an adapted classifier based on multiple base kernels and pre-learned average classifiers by minimizing both the structural risk functional and the mismatch between the data distributions of the two domains.
Domain Transfer Multiple Kernel Learning
TLDR
Comprehensive experiments on three domain adaptation data sets demonstrate that DTMKL-based methods outperform existing cross-domain learning and multiple kernel learning methods.
Flexible Manifold Embedding: A Framework for Semi-Supervised and Unsupervised Dimension Reduction
TLDR
This work proposes a unified manifold-learning framework for semi-supervised and unsupervised dimension reduction that employs a simple but effective linear regression function to map new data points, modeling the mismatch between h(X) and F.
Learning With Augmented Features for Supervised and Semi-Supervised Heterogeneous Domain Adaptation
TLDR
This paper proposes a novel SVM-based method called Heterogeneous Feature Augmentation (HFA), which can simultaneously learn the target classifier and infer the labels of unlabeled target samples, and shows that SHFA and HFA outperform the existing HDA methods.
Learning with Augmented Features for Heterogeneous Domain Adaptation
TLDR
This work presents a new learning method for heterogeneous domain adaptation (HDA), in which the data from the source domain and the target domain are represented by heterogeneous features of different dimensions, and demonstrates that HFA outperforms the existing HDA methods.