Learning Task Grouping and Overlap in Multi-task Learning


In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share information across tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are sparse, and the overlap in the sparsity patterns of two tasks controls the amount of sharing between them. Our model is based on the assumption that task parameters within a group lie in a low-dimensional subspace, but allows tasks in different groups to overlap in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods.
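The structure described above can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal NumPy illustration of the model's factorization, assuming the common notation in which a basis matrix (here `L`) and a sparse coefficient matrix (here `S`) combine to give the task parameter matrix `W = L @ S`, and that overlap in the columns' sparsity patterns encodes sharing:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, T = 10, 3, 4  # feature dim, number of basis tasks, number of tasks

# Latent basis tasks: the columns of L span the space of task parameters.
L = rng.standard_normal((d, k))

# Sparse coefficients: column t selects which bases task t uses.
# Tasks 0 and 1 share basis 0 (their sparsity patterns overlap);
# task 3 uses only basis 2, so it shares nothing with task 0.
S = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.0, 0.8, 1.0, 0.0],
              [0.0, 0.0, 0.3, 1.0]])

# Each task's parameter vector is a sparse linear combination of bases.
W = L @ S  # shape (d, T); column t is task t's weight vector

# Overlap in sparsity patterns controls sharing across tasks.
support = S != 0
shared_01 = bool(np.any(support[:, 0] & support[:, 1]))  # tasks 0 and 1 overlap
shared_03 = bool(np.any(support[:, 0] & support[:, 3]))  # tasks 0 and 3 do not
```

In a learned model, `L` and `S` would be estimated jointly from the tasks' training data (e.g., with a sparsity penalty on `S`); the sketch only fixes them by hand to show how overlapping supports induce partial sharing while disjoint supports keep task groups separate.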



Cite this paper

@inproceedings{Kumar2012LearningTG,
  title     = {Learning Task Grouping and Overlap in Multi-task Learning},
  author    = {Abhishek Kumar and Hal Daum{\'e}},
  booktitle = {ICML},
  year      = {2012}
}