Niloofar Yousefi

We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), which we use to establish sharp excess risk bounds for MTL in terms of distribution- and data-dependent versions of the Local Rademacher Complexity (LRC). We also give a new bound on the LRC for strongly convex hypothesis classes, which applies not only to MTL but also to the (More)
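As a rough illustration of the central quantity in this abstract (a sketch only, not the paper's bounds or method), the empirical Rademacher complexity of a finite hypothesis class can be estimated by Monte Carlo: draw random sign vectors and average the supremum of the signed empirical means. The class of random unit-norm linear predictors below is a made-up example.

```python
import numpy as np

# Sketch (assumption, not the paper's method): estimate the empirical
# Rademacher complexity R_hat(F) = E_sigma[ sup_{f in F} (1/n) sum_i sigma_i f(x_i) ]
# for a finite hypothesis class, by Monte Carlo over Rademacher signs.
def empirical_rademacher(predictions, n_draws=2000, seed=0):
    """predictions: (num_hypotheses, n) array; entry [h, i] = f_h(x_i)."""
    rng = np.random.default_rng(seed)
    n = predictions.shape[1]
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)   # i.i.d. Rademacher signs
        total += np.max(predictions @ sigma) / n  # sup over the finite class
    return total / n_draws

# Toy finite class: 10 random unit-norm linear predictors on 50 points.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5))
W = rng.standard_normal((10, 5))
W /= np.linalg.norm(W, axis=1, keepdims=True)
preds = W @ X.T                                   # shape (10, 50)
val = empirical_rademacher(preds)
```

Localized versions (the LRC used in the abstract) restrict the supremum to hypotheses of small variance, which is what yields the sharper, fast-rate excess risk bounds.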
When faced with learning a set of interrelated tasks from a limited amount of usable data, learning each task independently may lead to poor generalization performance. Multi-Task Learning (MTL) exploits the latent relations between tasks and overcomes data-scarcity limitations by co-learning all of these tasks simultaneously, offering improved performance. We (More)
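The co-learning idea described above can be sketched with a simple, generic formulation (an illustrative assumption, not the abstract's specific model): fit one ridge regression per task while penalizing each task's weights for deviating from a shared mean, so scarce per-task data is supplemented by information pooled across related tasks.

```python
import numpy as np

# Sketch (hypothetical model, not the authors' algorithm): mean-regularized MTL.
# Alternately minimize
#   sum_t ||X_t w_t - y_t||^2 + lam * ||w_t - w_bar||^2 + gamma * ||w_bar||^2
# over per-task weights w_t and the shared mean w_bar.
def mtl_ridge(Xs, ys, lam=1.0, gamma=1.0, iters=50):
    d = Xs[0].shape[1]
    Ws = [np.zeros(d) for _ in Xs]
    w_bar = np.zeros(d)
    for _ in range(iters):
        # Per-task step: closed-form ridge solution shrunk toward w_bar.
        for t, (X, y) in enumerate(zip(Xs, ys)):
            A = X.T @ X + lam * np.eye(d)
            Ws[t] = np.linalg.solve(A, X.T @ y + lam * w_bar)
        # Shared step: w_bar minimizes the coupling + its own penalty.
        w_bar = lam * sum(Ws) / (lam * len(Ws) + gamma)
    return Ws, w_bar

# Toy data: 5 related tasks, each a small perturbation of one true weight
# vector, with fewer samples per task (6) than dimensions (8).
rng = np.random.default_rng(0)
w_true = rng.standard_normal(8)
Xs, ys = [], []
for _ in range(5):
    X = rng.standard_normal((6, 8))
    w_t = w_true + 0.1 * rng.standard_normal(8)
    Xs.append(X)
    ys.append(X @ w_t + 0.01 * rng.standard_normal(6))
Ws, w_bar = mtl_ridge(Xs, ys)
```

With 6 samples in 8 dimensions, each task is underdetermined on its own; the shared mean `w_bar` is what lets the tasks borrow statistical strength from one another, which is the data-scarcity point the abstract makes.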