Domain Adaptation via Transfer Component Analysis
TLDR
This work proposes a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation, with both unsupervised and semi-supervised feature extraction approaches that can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components.
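As a rough illustration, here is a minimal numpy sketch of the TCA-style solution: project source and target data onto the leading eigenvectors of a matrix that trades off MMD reduction against data variance. The kernel choice and parameter names (mu, n_components, gamma) are illustrative assumptions, not the paper's code.

```python
# A minimal sketch of the transfer-component idea, assuming an RBF kernel
# and numpy only; parameter names here are illustrative.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def tca(Xs, Xt, n_components=2, mu=1.0, gamma=1.0):
    ns, nt = len(Xs), len(Xt)
    X = np.vstack([Xs, Xt])
    n = ns + nt
    K = rbf_kernel(X, X, gamma)
    # MMD coefficient matrix L: tr(KL) is the squared distance between
    # the source and target mean embeddings in the RKHS
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    # Transfer components: leading eigenvectors of (KLK + mu*I)^{-1} KHK
    M = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    w, V = np.linalg.eig(M)
    idx = np.argsort(-w.real)[:n_components]
    W = V[:, idx].real
    return K @ W                                  # projected source+target data
```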
Core Vector Machines: Fast SVM Training on Very Large Data Sets
TLDR
This paper shows that many kernel methods can be equivalently formulated as minimum enclosing ball (MEB) problems in computational geometry, obtains provably approximately optimal solutions using the idea of core sets, and proposes the Core Vector Machine (CVM) algorithm, which can be used with nonlinear kernels and has a time complexity that is linear in the number of training patterns m.
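For intuition, the core-set idea can be sketched with the classical Badoiu-Clarkson (1+eps)-approximation to the MEB, shown here in plain input space rather than a kernel-induced feature space; each iteration costs O(m), consistent with the linear-in-m complexity.

```python
# A minimal sketch of the core-set idea behind CVM: the Badoiu-Clarkson
# (1+eps)-approximate minimum enclosing ball. Names are illustrative.
import numpy as np

def approx_meb(X, eps=0.01):
    c = X[0].copy()                        # start from an arbitrary point
    for i in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        d = np.linalg.norm(X - c, axis=1)
        far = X[d.argmax()]                # farthest point = new core-set member
        c += (far - c) / (i + 1)           # move center a shrinking step toward it
    r = np.linalg.norm(X - c, axis=1).max()
    return c, r
```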
The pre-image problem in kernel methods
  • J. Kwok, I. Tsang
  • Mathematics, Computer Science
  • IEEE Transactions on Neural Networks
  • 21 August 2003
TLDR
This paper addresses the problem of finding the pre-image of a feature vector in the feature space induced by a kernel, and proposes a new method which directly finds the location of the pre-image based on distance constraints in the feature space.
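A hedged sketch of the distance-constraint idea: convert feature-space distances to a set of landmark points back into input-space distances (possible in closed form for an RBF kernel), then solve the linearized distance equations by least squares. The landmark set and linearization below are illustrative simplifications, not the paper's exact procedure.

```python
# A sketch of pre-image recovery from feature-space distance constraints.
import numpy as np

def preimage_from_distances(X, feat_d2, gamma=1.0):
    """X: (n, d) landmark inputs; feat_d2: (n,) squared feature-space
    distances from the sought pre-image to each landmark (RBF kernel)."""
    # For k(x,y) = exp(-gamma * ||x-y||^2): ||phi(x) - phi(y)||^2 = 2 - 2k,
    # so each feature-space distance maps back to an input-space distance.
    k = np.clip(1.0 - feat_d2 / 2.0, 1e-12, None)
    d2 = -np.log(k) / gamma
    # Linearize ||z - x_i||^2 = d2_i by subtracting the first constraint,
    # which cancels the ||z||^2 term and leaves a least-squares problem.
    A = 2.0 * (X[1:] - X[0])
    b = (d2[0] - d2[1:]) + (np.sum(X[1:] ** 2, axis=1) - np.sum(X[0] ** 2))
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z
```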
Transfer Learning via Dimensionality Reduction
TLDR
A new dimensionality reduction method is proposed to find a latent space which minimizes the distance between the distributions of the data in different domains, and which can be treated as a bridge for transferring knowledge from the source domain to the target domain.
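The distributional distance being minimized is the empirical Maximum Mean Discrepancy (MMD); a minimal sketch, assuming an RBF kernel via scikit-learn:

```python
# Squared distance between the two domains' mean embeddings in the RKHS.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def mmd2(Xs, Xt, gamma=1.0):
    return (rbf_kernel(Xs, Xs, gamma=gamma).mean()
            + rbf_kernel(Xt, Xt, gamma=gamma).mean()
            - 2.0 * rbf_kernel(Xs, Xt, gamma=gamma).mean())
```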
Objective functions for training new hidden units in constructive neural networks
  • J. Kwok, D. Yeung
  • Computer Science, Medicine
  • IEEE Trans. Neural Networks
  • 1 September 1997
TLDR
The aim is to derive a class of objective functions whose computation, and the corresponding weight updates, can be done in O(N) time, where N is the number of training patterns.
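As one well-known member of such a class (assuming, as in cascade-correlation, a covariance-style criterion), the score of a candidate hidden unit can be computed in a single O(N) pass:

```python
# A hedged sketch of a cascade-correlation-style objective: the covariance
# between a candidate hidden unit's outputs f and the current residual
# errors e, computable in one pass over the N training patterns.
import numpy as np

def candidate_objective(f, e):
    """f: (N,) candidate unit outputs; e: (N,) residual network errors."""
    return np.abs(np.dot(f - f.mean(), e - e.mean()))
```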
Improved Nyström low-rank approximation and error analysis
TLDR
An error analysis is presented that directly relates the Nyström approximation quality to the encoding power of the landmark points in summarizing the data; the resultant error bound suggests a simple and efficient sampling scheme, the k-means clustering algorithm, for Nyström low-rank approximation.
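A minimal sketch of the suggested scheme, assuming an RBF kernel and scikit-learn's KMeans: use the k-means centers as landmarks and keep the approximation K ~= E pinv(W) E^T in factored form rather than materializing the full n x n matrix.

```python
# Nystrom low-rank approximation with k-means centers as landmark points.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def nystrom_kmeans(X, n_landmarks=50, gamma=1.0):
    Z = KMeans(n_clusters=n_landmarks, n_init=10).fit(X).cluster_centers_
    E = rbf_kernel(X, Z, gamma=gamma)    # n x k cross kernel
    W = rbf_kernel(Z, Z, gamma=gamma)    # k x k landmark kernel
    return E, np.linalg.pinv(W)          # K approx E @ pinv(W) @ E.T
```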
Generalized Core Vector Machines
TLDR
The center-constrained MEB problem is introduced and the CVM algorithm is extended to this generalized setting, which can now be used with any linear/nonlinear kernel and can also be applied to kernel methods such as SVR and the ranking SVM.
Constructive algorithms for structure learning in feedforward neural networks for regression problems
  • J. Kwok, D. Yeung
  • Computer Science, Medicine
  • IEEE Trans. Neural Networks
  • 1 May 1997
TLDR
This survey first describes the general issues in constructive algorithms, with special emphasis on the search strategy, and then presents a taxonomy based on differences in the state transition mapping, the training algorithm, and the network architecture.
Maximum Margin Clustering Made Practical
TLDR
Experiments on a number of synthetic and real-world data sets demonstrate that the proposed approach is more accurate and much faster, and can handle data sets hundreds of times larger than the largest data set reported in the MMC literature.
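A heavily simplified sketch of the alternating flavor of practical maximum margin clustering (the paper's actual algorithm uses SVR with a Laplacian loss and an explicit class-balance constraint; the k-means initialization and median-split balancing below are illustrative stand-ins):

```python
# Alternate between fitting a large-margin classifier to current label
# guesses and re-labelling points with its decision function.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def iterative_mmc(X, n_iters=10, C=1.0):
    y = 2 * KMeans(n_clusters=2, n_init=10).fit_predict(X) - 1  # init labels
    for _ in range(n_iters):
        clf = SVC(C=C, kernel="rbf").fit(X, y)
        scores = clf.decision_function(X)
        y_new = np.where(scores >= np.median(scores), 1, -1)  # keep classes balanced
        if np.array_equal(y_new, y):
            break
        y = y_new
    return y
```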
Clustered Nyström Method for Large Scale Manifold Learning and Dimension Reduction
  • Kai Zhang, J. Kwok
  • Mathematics, Computer Science
  • IEEE Transactions on Neural Networks
  • 1 October 2010
TLDR
The (non-probabilistic) error analysis justifies a “clustered Nyström method” that uses the k-means clustering centers as landmark points, and can be applied to scale up a wide variety of algorithms that depend on the eigenvalue decomposition of the kernel matrix.
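A minimal sketch of how the factored Nyström approximation scales up eigendecomposition: approximate eigenpairs of the full n x n kernel matrix are recovered from small k x k computations, without ever forming the matrix. Kernel and parameter choices are illustrative.

```python
# Approximate top eigenpairs of K from clustered Nystrom factors:
# with B = E W^{-1/2}, we have K approx B B^T, whose eigenpairs follow
# from the small k x k matrix B^T B.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def nystrom_eig(X, n_landmarks=50, gamma=1.0):
    Z = KMeans(n_clusters=n_landmarks, n_init=10).fit(X).cluster_centers_
    E = rbf_kernel(X, Z, gamma=gamma)
    W = rbf_kernel(Z, Z, gamma=gamma)
    lw, Uw = np.linalg.eigh(W)
    lw = np.clip(lw, 1e-12, None)
    B = E @ Uw @ np.diag(lw ** -0.5) @ Uw.T          # B = E W^{-1/2}
    ls, Us = np.linalg.eigh(B.T @ B)                  # ascending eigenvalues
    V = B @ Us / np.sqrt(np.clip(ls, 1e-12, None))    # orthonormal eigenvectors
    return ls[::-1], V[:, ::-1]                       # descending order
```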