
- Sinno Jialin Pan, Ivor W. Tsang, James T. Kwok, Qiang Yang
- IEEE Transactions on Neural Networks
- 2009

Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries…
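
TCA builds on minimizing the maximum mean discrepancy (MMD) between the source and target samples in a reproducing kernel Hilbert space. A minimal NumPy sketch of the empirical MMD quantity involved (the function names and the RBF kernel choice are ours, for illustration):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise Gaussian (RBF) kernel between the rows of X and Y.
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd2(Xs, Xt, gamma=1.0):
    # Squared empirical maximum mean discrepancy between source samples
    # Xs and target samples Xt: close to zero when the distributions match.
    return (rbf_kernel(Xs, Xs, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean()
            - 2 * rbf_kernel(Xs, Xt, gamma).mean())
```

TCA then seeks a low-dimensional projection under which this discrepancy is small while data variance is preserved.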

- Ivor W. Tsang, James T. Kwok, Pak-Ming Cheung
- Journal of Machine Learning Research
- 2005

Standard SVM training has O(m³) time and O(m²) space complexities, where m is the training set size. It is thus computationally infeasible on very large data sets. By observing that practical SVM implementations only approximate the optimal solution by an iterative strategy, we scale up kernel methods by exploiting such “approximateness” in this paper. We…

- James T. Kwok, Ivor W. Tsang
- IEEE Transactions on Neural Networks
- 2003

In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method…
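
For context, the traditional iterative approach the abstract contrasts with is the classical fixed-point scheme (due to Mika et al.) for Gaussian kernels; the paper's own method instead uses distance constraints. A sketch of that classical iteration (names are ours):

```python
import numpy as np

def gaussian_preimage(X, coeffs, sigma, z0, n_iter=100):
    # Classical fixed-point iteration for the pre-image z of a
    # feature-space expansion sum_i coeffs[i] * phi(X[i]) under the
    # Gaussian kernel k(x, z) = exp(-||x - z||^2 / (2 * sigma**2)).
    z = z0.astype(float).copy()
    for _ in range(n_iter):
        k = np.exp(-np.sum((X - z) ** 2, axis=1) / (2 * sigma ** 2))
        w = coeffs * k
        z = (w @ X) / w.sum()
    return z
```

In kernel PCA denoising, `coeffs` would come from the projection of a noisy point onto the leading principal components; the iteration can be sensitive to initialization, which is one motivation for the optimization-free alternative proposed here.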

- Kai Zhang, Ivor W. Tsang, James T. Kwok
- IEEE Transactions on Neural Networks
- 2007

Maximum margin clustering (MMC) is a recent large margin unsupervised learning approach that has often outperformed conventional clustering methods. Computationally, it involves non-convex optimization and has to be relaxed to different semidefinite programs (SDP). However, SDP solvers are computationally very expensive and only small data sets can be…

- Kai Zhang, Ivor W. Tsang, James T. Kwok
- ICML
- 2008

Low-rank matrix approximation is an effective tool in alleviating the memory and computational burdens of kernel methods, and sampling, as the mainstream of such algorithms, has drawn considerable attention in both theory and practice. This paper presents detailed studies on the Nyström sampling scheme and, in particular, an error analysis that directly…

- Sinno Jialin Pan, James T. Kwok, Qiang Yang
- AAAI
- 2008

Transfer learning addresses the problem of how to utilize plenty of labeled data in a source domain to solve related but different problems in a target domain, even when the training and testing problems have different distributions or features. In this paper, we consider transfer learning via dimensionality reduction. To solve this problem, we learn a…

- Ivor W. Tsang, András Kocsor, James T. Kwok
- ICML
- 2007

The core vector machine (CVM) is a recent approach for scaling up kernel methods based on the notion of minimum enclosing ball (MEB). Though conceptually simple, an efficient implementation still requires a sophisticated numerical solver. In this paper, we introduce the enclosing ball (EB) problem where the ball's radius is fixed and thus does not have to…
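
The MEB machinery underlying the CVM can be approximated with a very simple core-set iteration (a Badoiu–Clarkson style scheme; this is the generic algorithm, not the fixed-radius EB variant this paper introduces):

```python
import numpy as np

def approx_meb(X, eps=0.05):
    # (1 + eps)-approximate minimum enclosing ball: in round k, pull the
    # current center a step of 1/(k + 1) toward the farthest point.
    # O(1/eps^2) rounds suffice, independent of the number of points
    # and of the dimension -- the key to CVM's scalability.
    c = X[0].astype(float).copy()
    for k in range(1, int(np.ceil(1.0 / eps**2)) + 1):
        far = np.argmax(np.linalg.norm(X - c, axis=1))
        c += (X[far] - c) / (k + 1)
    r = np.linalg.norm(X - c, axis=1).max()
    return c, r
```

Each round touches only one farthest point, so the core set stays small; replacing the minimization of the radius with a fixed radius, as this paper proposes, removes the need for a numerical solver inside the loop.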

- James T. Kwok, Dit-Yan Yeung
- IEEE Transactions on Neural Networks
- 1997

In this paper, we study a number of objective functions for training new hidden units in constructive algorithms for multilayer feedforward networks. The aim is to derive a class of objective functions whose computation, together with the corresponding weight updates, can be done in O(N) time, where N is the number of training patterns. Moreover, even though…

- James T. Kwok, Dit-Yan Yeung
- IEEE Transactions on Neural Networks
- 1997

In this survey paper, we review the constructive algorithms for structure learning in feedforward neural networks for regression problems. The basic idea is to start with a small network, then add hidden units and weights incrementally until a satisfactory solution is found. By formulating the whole problem as a state-space search, we first describe the…
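
The grow-until-satisfactory loop described here can be illustrated with a toy constructive scheme (ours, not any specific algorithm from the survey: random input weights for each new tanh unit, with the linear output layer refit by least squares after every addition):

```python
import numpy as np

def grow_network(X, y, max_units=50, tol=1e-3, seed=0):
    # Start with only a bias column, then add hidden units one at a
    # time, refitting the output weights after each addition and
    # stopping once training MSE falls below tol.
    rng = np.random.default_rng(seed)
    H = np.ones((len(X), 1))
    beta = np.linalg.lstsq(H, y, rcond=None)[0]
    for _ in range(max_units):
        w, b = rng.normal(size=X.shape[1]), rng.normal()
        H = np.c_[H, np.tanh(X @ w + b)]          # new hidden unit's outputs
        beta = np.linalg.lstsq(H, y, rcond=None)[0]
        if np.mean((H @ beta - y) ** 2) < tol:
            break
    return H, beta
```

Real constructive algorithms differ mainly in how the new unit's input weights are trained (e.g. by maximizing one of the objective functions studied in the companion paper above) rather than drawn at random.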

- Ivor W. Tsang, James T. Kwok, Jacek M. Zurada
- IEEE Transactions on Neural Networks
- 2006

Kernel methods, such as the support vector machine (SVM), are often formulated as quadratic programming (QP) problems. However, given m training patterns, a naive implementation of the QP solver takes O(m³) training time and at least O(m²) space. Hence, scaling up these QPs is a major stumbling block in applying kernel methods on very…
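
For context, the QP in question is the standard SVM dual: with labels $y_i \in \{\pm 1\}$ and kernel $k$,

$$
\max_{\alpha} \; \sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} \alpha_i \alpha_j y_i y_j \, k(x_i, x_j)
\quad \text{s.t.} \quad 0 \le \alpha_i \le C, \;\; \sum_{i=1}^{m} \alpha_i y_i = 0.
$$

The $m \times m$ kernel matrix in the quadratic term is what drives the O(m²) storage and O(m³) solver cost that this line of work attacks.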