Learning with Whom to Share in Multi-task Feature Learning


In multi-task learning (MTL), multiple tasks are learned jointly. A major assumption of this paradigm is that the tasks are indeed related, so that joint training is appropriate and beneficial. In this paper, we study the problem of learning shared feature representations among tasks while simultaneously determining "with whom" each task should share. We formulate the problem as a mixed integer program and provide an alternating minimization technique that jointly identifies the grouping structure and the parameters. The algorithm monotonically decreases the objective function and converges to a local optimum. Compared to the standard MTL paradigm, in which all tasks form a single group, our algorithm improves performance with statistical significance on three of the four datasets we studied. We also demonstrate its advantage over other task grouping techniques investigated in the literature.
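The alternating minimization described above can be sketched in code. The following is an illustrative simplification, not the paper's method: the paper's shared-feature regularizer is replaced here by a simple shrinkage of each task's weight vector toward its group's mean, and all function names are hypothetical. The two alternating steps remain: (1) fit per-task parameters given the current group assignments, and (2) reassign each task to the group whose shared representation best explains it.

```python
import numpy as np

def alternating_group_mtl(tasks, n_groups, n_iters=10, lam=0.1, seed=0):
    """Sketch of alternating minimization for grouped multi-task learning.

    tasks: list of (X, y) pairs, one per regression task.
    Step 1 fits a ridge-style weight vector per task, shrunk toward its
    group's mean weights (a stand-in for a shared-feature regularizer).
    Step 2 reassigns each task to the group minimizing its cost.
    """
    rng = np.random.default_rng(seed)
    T = len(tasks)
    d = tasks[0][0].shape[1]
    groups = rng.integers(0, n_groups, size=T)  # random initial assignment
    W = np.zeros((T, d))

    def fit_task(X, y, center):
        # least squares shrunk toward the group's mean weight vector
        A = X.T @ X + lam * np.eye(d)
        b = X.T @ y + lam * center
        return np.linalg.solve(A, b)

    def task_cost(X, y, w, center):
        return np.sum((X @ w - y) ** 2) + lam * np.sum((w - center) ** 2)

    for _ in range(n_iters):
        # Step 1: update group centers and per-task parameters
        centers = np.zeros((n_groups, d))
        for g in range(n_groups):
            members = np.where(groups == g)[0]
            if len(members):
                centers[g] = W[members].mean(axis=0)
        for t, (X, y) in enumerate(tasks):
            W[t] = fit_task(X, y, centers[groups[t]])
        # Step 2: reassign each task to its lowest-cost group
        for t, (X, y) in enumerate(tasks):
            costs = [task_cost(X, y, fit_task(X, y, centers[g]), centers[g])
                     for g in range(n_groups)]
            groups[t] = int(np.argmin(costs))
    return groups, W
```

Because each alternating step minimizes the same objective over one block of variables (parameters, then assignments), the objective is non-increasing across iterations, which is the monotone-decrease property the abstract refers to.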



Cite this paper

@inproceedings{Kang2011LearningWW,
  title     = {Learning with Whom to Share in Multi-task Feature Learning},
  author    = {Zhuoliang Kang and Kristen Grauman and Fei Sha},
  booktitle = {ICML},
  year      = {2011}
}