Domain Adaptation via Transfer Component Analysis
- Sinno Jialin Pan, I. Tsang, J. Kwok, Qiang Yang
- IEEE Transactions on Neural Networks
- 11 July 2009
This work proposes a novel dimensionality reduction framework for domain adaptation that reduces the distance between domains in a latent space, with both unsupervised and semi-supervised feature extraction approaches that can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components.
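The domain distance that TCA reduces is commonly measured with the maximum mean discrepancy (MMD) between source and target samples in a kernel-induced feature space. The following is a minimal sketch of the biased squared-MMD estimate with an RBF kernel; the function names and the `gamma` value are illustrative, not from the paper:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    # Pairwise RBF kernel matrix between the rows of X and Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(Xs, Xt, gamma=0.1):
    # Biased estimate of the squared maximum mean discrepancy between
    # the source sample Xs and the target sample Xt.
    return (rbf_kernel(Xs, Xs, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean()
            - 2 * rbf_kernel(Xs, Xt, gamma).mean())
```

When the two samples are drawn from the same distribution the estimate is near zero; learning transfer components amounts to finding a projection under which this quantity becomes small.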
Core Vector Machines: Fast SVM Training on Very Large Data Sets
- I. Tsang, J. Kwok, Pak-Ming Cheung
- Journal of Machine Learning Research
- 1 December 2005
This paper shows that many kernel methods can be equivalently formulated as minimum enclosing ball (MEB) problems in computational geometry, obtains provably approximately optimal solutions using the idea of core sets, and proposes the Core Vector Machine (CVM) algorithm, which can be used with nonlinear kernels and has a time complexity that is linear in the number of training patterns m.
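The core-set idea can be illustrated in input space with the simple Badoiu–Clarkson iteration that underlies (1+ε)-approximate MEB solvers. This is a sketch of that geometric subroutine, not the CVM algorithm itself:

```python
import numpy as np

def approx_meb(X, n_iter=2000):
    # Badoiu-Clarkson iteration: repeatedly pull the current center toward
    # the farthest point with a shrinking step; after O(1/eps^2) steps the
    # ball of radius r around c is a (1 + eps)-approximate MEB.
    c = X[0].astype(float)
    for t in range(1, n_iter + 1):
        dists = np.linalg.norm(X - c, axis=1)
        far = int(np.argmax(dists))       # farthest point joins the core set
        c = c + (X[far] - c) / (t + 1)    # step size shrinks as 1/(t+1)
    r = np.linalg.norm(X - c, axis=1).max()
    return c, r
```

CVM applies the same kind of iteration in the kernel-induced feature space, which is why only a small core set of training points ever needs to be touched.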
The pre-image problem in kernel methods
This paper addresses the problem of finding the pre-image of a feature vector in the feature space induced by a kernel and proposes a new method which directly finds the location of the pre-image based on distance constraints in the feature space.
Generalizing from a Few Examples: A Survey on Few-Shot Learning
A thorough survey to fully understand Few-Shot Learning (FSL), and categorizes FSL methods from three perspectives: data, which uses prior knowledge to augment the supervised experience; model, which used to reduce the size of the hypothesis space; and algorithm, which using prior knowledgeto alter the search for the best hypothesis in the given hypothesis space.
Transfer Learning via Dimensionality Reduction
- Sinno Jialin Pan, J. Kwok, Qiang Yang
- Computer ScienceAAAI Conference on Artificial Intelligence
- 13 July 2008
A new dimensionality reduction method is proposed to find a latent space, which minimizes the distance between distributions of the data in different domains in a latentspace, which can be treated as a bridge of transferring knowledge from the source domain to the target domain.
Maximum Margin Clustering Made Practical
Experiments on a number of synthetic and real-world data sets demonstrate that the proposed approach is more accurate, much faster, and can handle data sets that are hundreds of times larger than the largest data set reported in the MMC literature.
Improved Nyström low-rank approximation and error analysis
An error analysis that directly relates the Nyström approximation quality with the encoding powers of the landmark points in summarizing the data is presented, and the resultant error bound suggests a simple and efficient sampling scheme, the k-means clustering algorithm, for NyStröm low-rank approximation.
Objective functions for training new hidden units in constructive neural networks
The aim is to derive a class of objective functions the computation of which and the corresponding weight updates can be done in O(N) time, where N is the number of training patterns.
Generalized Core Vector Machines
The center-constrained MEB problem is introduced and the generalized CVM algorithm is extended, which can now be used with any linear/nonlinear kernel and can also be applied to kernel methods such as SVR and the ranking SVM.
Constructive algorithms for structure learning in feedforward neural networks for regression problems
This survey paper first describes the general issues in constructive algorithms, with special emphasis on the search strategy, then presents a taxonomy, based on the differences in the state transition mapping, the training algorithm, and the network architecture.