
# A Dirty Model for Multi-task Learning

@inproceedings{Jalali2010ADM,
  title={A Dirty Model for Multi-task Learning},
  author={Ali Jalali and Pradeep Ravikumar and Sujay Sanghavi and Chao Ruan},
  booktitle={NIPS},
  year={2010}
}
We consider multi-task learning in the setting of multiple linear regression, where some relevant features may be shared across tasks. Recent research has studied the use of l1/lq norm block-regularizations with q > 1 for such block-sparse structured problems, establishing strong guarantees on recovery even under high-dimensional scaling, where the number of features scales with the number of observations. However, these papers also caution that the performance of such block-regularized…
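The "dirty" model splits the parameter matrix into a row-block-sparse part B (features shared across tasks, penalized by an l1/l∞ norm) plus an element-sparse part S (task-private features, penalized by an l1 norm). A minimal proximal-gradient sketch of this decomposition is shown below; it is illustrative only, not the authors' implementation, and the single shared design matrix, step size, and λ values are all assumptions.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1 ball {x : ||x||_1 <= radius}
    (standard sorting-based algorithm)."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u * idx > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    """Prox of lam * ||.||_inf, via Moreau decomposition with the l1 ball."""
    return v - project_l1_ball(v, lam)

def dirty_model(X, Y, lam_b, lam_s, iters=500):
    """Proximal-gradient sketch of the dirty model:
        min_{B,S} 0.5*||Y - X(B+S)||_F^2
                  + lam_b * sum_j max_k |B[j,k]|   (l1/linf, shared part)
                  + lam_s * sum_{j,k} |S[j,k]|     (plain l1, private part)
    assuming, for simplicity, one design matrix X shared by all tasks."""
    p, k = X.shape[1], Y.shape[1]
    B, S = np.zeros((p, k)), np.zeros((p, k))
    step = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2)  # 1/Lipschitz of joint gradient
    for _ in range(iters):
        G = X.T @ (X @ (B + S) - Y)  # same smooth gradient for B and S
        B = np.apply_along_axis(prox_linf, 1, B - step * G, step * lam_b)
        Sh = S - step * G
        S = np.sign(Sh) * np.maximum(np.abs(Sh) - step * lam_s, 0.0)  # soft-threshold
    return B, S

# Demo on synthetic data: three tasks sharing a few features,
# plus one task-private feature (all values illustrative).
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 20))
W_true = np.zeros((20, 3))
W_true[:3, :] = 1.0   # features active in every task -> captured by B
W_true[5, 0] = 2.0    # feature private to task 0     -> captured by S
Y = X @ W_true + 0.01 * rng.standard_normal((60, 3))
B, S = dirty_model(X, Y, lam_b=0.5, lam_s=0.5)
```

The key design point is that the two penalties are separable given the shared smooth gradient, so each iteration applies a row-wise l∞ prox to the shared part and an element-wise soft-threshold to the private part.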
#### 331 Citations


- *Wasserstein regularization for sparse multi-task regression*. AISTATS, 2019.
- ICML, 2012.
- CIKM, 2014.
- *Efficient Output Kernel Learning for Multiple Tasks*. NIPS, 2015.
- *Multivariate Scale mixtures for joint sparse regularization in multi-task learning*. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017.
- KDD, 2012.
- *Representation Learning via Semi-Supervised Autoencoder for Multi-task Learning*. 2015 IEEE International Conference on Data Mining, 2015.