Multi-task Multiple Kernel Learning

@inproceedings{Jawanpuria2011MultitaskMK,
  title={Multi-task Multiple Kernel Learning},
  author={Pratik Jawanpuria and Saketha Nath Jagarlapudi},
  booktitle={SDM},
  year={2011}
}

This paper presents two novel formulations for learning shared feature representations across multiple tasks. The idea is to pose the problem as that of learning a shared kernel, constructed from a given set of base kernels, leading to improved generalization in all the tasks. The first formulation employs an (ℓ1, ℓp), p ≥ 2, mixed-norm regularizer that promotes sparse combinations of the base kernels and unequal weightings across tasks, enabling the formulation to work with unequally…
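As a rough sketch of the mixed-norm idea (the notation below is an assumption for illustration, not the paper's exact formulation): with T tasks, base kernels k_1, …, k_n, and η_{jt} ≥ 0 the weight of kernel j in task t, an (ℓ1, ℓp) mixed-norm regularizer takes the form

\[ \Omega(\eta) = \sum_{j=1}^{n} \Big( \sum_{t=1}^{T} \eta_{jt}^{\,p} \Big)^{1/p}, \qquad p \ge 2, \]

where the outer ℓ1-style sum over kernels promotes sparsity in which base kernels get selected, while the inner ℓp norm lets a selected kernel carry unequal weights across tasks.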
Citations

Multi-Task Multiple Kernel Relationship Learning
Proposes a multi-task multiple kernel learning framework that learns kernel weights by exploiting the relationships across tasks, together with an alternating minimization algorithm for the model parameters, kernel weights, and task-relationship matrix.
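Since the entry above centers on an alternating minimization scheme, here is a minimal Python skeleton of that generic pattern, assuming user-supplied block updates (all names and update rules here are placeholders, not the paper's actual derivations):

    import numpy as np

    def alternating_minimization(update_models, update_kernel_weights,
                                 update_task_relations, objective,
                                 init, n_iter=50, tol=1e-6):
        # Generic block-coordinate descent: re-optimize each block
        # (task models, kernel weights, task-relationship matrix)
        # while holding the other two fixed.
        models, weights, relations = init
        prev = np.inf
        for _ in range(n_iter):
            models = update_models(weights, relations)          # e.g. per-task kernel-machine solves
            weights = update_kernel_weights(models, relations)  # e.g. a projected or closed-form step
            relations = update_task_relations(models, weights)  # e.g. a PSD-constrained update
            cur = objective(models, weights, relations)
            if abs(prev - cur) < tol:  # stop once the joint objective stabilizes
                break
            prev = cur
        return models, weights, relations

When each block is minimized exactly, the joint objective is non-increasing across iterations, which is what makes this pattern attractive for such coupled problems.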
Learning Kernels for Multiple Predictive Tasks
This thesis presents a family of regularized risk minimization based convex formulations, of increasing generality, for learning features (kernels) in settings involving multiple tasks, and proposes a mixed-norm formulation for learning the shared kernel as well as the prediction functions of all the tasks.
Structured low-rank matrix learning: algorithms and applications
Considers the problem of learning a low-rank matrix constrained to lie in a linear subspace, introduces a novel factorization for modeling such matrices, and formulates the optimization problem on the Riemannian spectrahedron manifold.
Task-dependent multi-task multiple kernel learning for facial action unit detection
A novel method, task-dependent multi-task multiple kernel learning (TD-MTMKL), that jointly detects the absence and presence of multiple action units (AUs), capturing commonalities and adapting to variations among co-occurring AUs.
A Riemannian gossip approach to decentralized subspace learning on Grassmann manifold
The proposed modeling approach allows local subspace learning by different agents while achieving asymptotic consensus on the globally learned subspace; the resulting approach is scalable and parallelizable.
Sparse Multi-kernel Based Multi-task Learning for Joint Prediction of Clinical Scores and Biomarker Identification in Alzheimer's Disease
Proposes a multi-kernel based multi-task learning method with a mixed sparsity-inducing norm to better capture the complex relationship between cognitive scores and neuroimaging measures.
LMKL-Net: A Fast Localized Multiple Kernel Learning Solver via Deep Neural Networks
LMKL-Net not only outperforms state-of-the-art MKL solvers in terms of accuracy, but can also be trained about two orders of magnitude faster, with a much smaller memory footprint, for large-scale learning.
A Nonmetric Embedding Approach to Testing for Matched Pairs
We consider the problem of matched pair hypothesis testing, i.e., determining whether or not a pair of disparate feature vectors corresponds to a common object. Our approach relies on…
A Riemannian approach for structured low-rank matrix learning
Considers the problem of learning a low-rank matrix constrained to lie in a linear subspace, introduces a novel factorization for modeling such matrices, and formulates the optimization problem on the Riemannian spectrahedron manifold.

References

Showing 1–10 of 61 references
Convex multi-task feature learning
Proves that the method for learning sparse representations shared across multiple tasks is equivalent to solving a convex optimization problem, for which an iterative algorithm converges to an optimal solution.
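For context, a common shared-sparsity regularizer in this line of work (my notation, offered as background rather than the entry's own statement) is the (2,1)-norm of the weight matrix W whose columns are the per-task weight vectors:

\[ \|W\|_{2,1} = \sum_{i} \Big( \sum_{t} W_{it}^{2} \Big)^{1/2}, \]

the sum of the ℓ2 norms of the rows of W. Penalizing it drives entire rows to zero, so the same small set of features is selected jointly across all tasks.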
Clustered Multi-Task Learning: A Convex Formulation
Designs a new spectral norm that encodes the a priori assumption that tasks are clustered into groups, unknown beforehand, with tasks within a group having similar weight vectors, resulting in a new convex optimization formulation for multi-task learning.
On the Algorithmics and Applications of a Mixed-norm based Kernel Learning Formulation
Results on real-world datasets show that the new MKL formulation is well suited for object categorization tasks and that the mirror-descent (MD) based algorithm outperforms state-of-the-art MKL solvers such as SimpleMKL in terms of computational effort.
Heterogeneous multitask learning with joint sparsity constraints
Considers the problem of learning multiple related tasks that predict both continuous and discrete outputs from a common set of input variables lying in a high-dimensional feature space, formulated as a combination of linear regressions and logistic regressions.
A convex formulation for learning shared structures from multiple tasks
Presents an improved formulation for multi-task learning based on the non-convex alternating structure optimization (ASO) algorithm, in which all tasks are related through a shared feature representation.
Inferring latent task structure for Multitask Learning by Multiple Kernel Learning
Presents a novel approach to multitask learning that learns task similarity along with the classifiers and can identify task similarities in the absence of prior information.
SimpleMKL
Multiple kernel learning aims at simultaneously learning a kernel and the associated predictor in supervised learning settings. For the support vector machine, an efficient and general multiple…
ℓp-Norm Multiple Kernel Learning
Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel…
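To make "learning a linear combination of kernels" concrete, here is a small self-contained sketch, under my own assumptions rather than the paper's algorithm: the combined Gram matrix is K = Σ_m β_m K_m with β ≥ 0, and non-sparse ℓp-norm MKL constrains ‖β‖_p ≤ 1 for some p > 1 instead of using a sparsity-inducing ℓ1 constraint.

    import numpy as np

    def combine_kernels(grams, beta, p=2.0):
        # Combine base Gram matrices K_m into K = sum_m beta_m * K_m,
        # after pushing beta into {beta >= 0, ||beta||_p <= 1}.
        beta = np.maximum(np.asarray(beta, dtype=float), 0.0)
        norm = np.linalg.norm(beta, ord=p)
        if norm > 1.0:
            beta = beta / norm  # rescale onto the l_p ball
        return sum(b * K for b, K in zip(beta, grams))

    # Toy usage: two RBF kernels at different bandwidths on random data.
    X = np.random.randn(20, 5)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    grams = [np.exp(-sq / (2 * s ** 2)) for s in (0.5, 2.0)]
    K = combine_kernels(grams, [0.7, 0.7], p=2.0)  # plug K into any kernel machine

In a full MKL solver the weights β would themselves be optimized against the training objective; the sketch only shows the feasibility projection and the combination step.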
Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning
F. Bach, NIPS 2008.
Extensive simulations on synthetic datasets and datasets from the UCI repository show that efficiently exploring the large feature space through sparsity-inducing norms leads to state-of-the-art predictive performance.
Large Scale Multiple Kernel Learning
Shows that the proposed multiple kernel learning algorithm can be rewritten as a semi-infinite linear program that can be solved efficiently by recycling standard SVM implementations, and generalizes the formulation and method to a larger class of problems, including regression and one-class classification.
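To sketch the structure being referenced (reconstructed from memory, so treat it as an approximation rather than a quotation of the paper): writing \( S_m(\alpha) = \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j k_m(x_i, x_j) - \sum_i \alpha_i \), the MKL training problem can be posed as a semi-infinite linear program of roughly the form

\[ \max_{\theta,\,\beta}\ \theta \quad \text{s.t.} \quad \beta_m \ge 0,\ \ \sum_m \beta_m = 1,\ \ \sum_m \beta_m S_m(\alpha) \ge \theta \ \text{ for every feasible SVM dual } \alpha, \]

and column generation alternates between a standard SVM solve, which finds the most violated α-constraint for the current β, and a small LP over (θ, β) using the constraints collected so far.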