We propose efficient algorithms to solve the tree-structured sparse approximation problem at the same computational cost as traditional ones using the $\ell_1$-norm, which has proven useful in several applications.
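As a rough illustration of why tree structure keeps the cost low (a sketch under stated assumptions, not the paper's exact algorithm): for hierarchical groups, the proximal operator of a sum of weighted group norms is known to decompose into a sequence of group soft-thresholdings applied from the leaves to the root. The function names below (`group_soft_threshold`, `prox_tree`) are illustrative, and the sketch uses $\ell_2$-norms on the groups.

```python
import numpy as np

def group_soft_threshold(x, idx, thresh):
    # Proximal operator of v -> thresh * ||v||_2, applied in place
    # to the subvector x[idx] (group soft-thresholding).
    nrm = np.linalg.norm(x[idx])
    if nrm <= thresh:
        x[idx] = 0.0
    else:
        x[idx] *= 1.0 - thresh / nrm
    return x

def prox_tree(x, groups, lam):
    # `groups` lists (indices, weight) pairs ordered from leaves to root:
    # every group appears before any group that contains it.
    out = x.copy()
    for idx, w in groups:
        out = group_soft_threshold(out, np.asarray(idx), lam * w)
    return out
```

With a single group covering the whole vector this reduces to ordinary group soft-thresholding, e.g. `prox_tree(np.array([3.0, 4.0]), [([0, 1], 1.0)], 1.0)` gives `[2.4, 3.2]`.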

We present an extension of sparse PCA, or sparse dictionary learning, where the sparsity patterns of all dictionary elements are structured and constrained to belong to a prespecified set of shapes.

We propose to combine two approaches for modeling data admitting sparse representations: on the one hand, dictionary learning, and on the other hand, structured sparsity.

We show that the $\ell_1$-norm can then be extended to structured norms built on either disjoint or overlapping groups of variables, leading to a flexible framework that can deal with various structures.
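To make the group structure concrete, here is a minimal numpy sketch (the name `group_norm` is illustrative) of a penalty of the form $\Omega(x) = \sum_g \|x_g\|_2$ for a disjoint and an overlapping grouping of the same vector; with disjoint groups this is the classical group Lasso, while overlapping groups change which sparsity patterns the penalty favors.

```python
import numpy as np

def group_norm(x, groups):
    # Omega(x) = sum over groups g of the Euclidean norm of x restricted to g.
    return sum(np.linalg.norm(x[np.asarray(g)]) for g in groups)

x = np.array([1.0, 2.0, 0.0, -2.0])
disjoint = [[0, 1], [2, 3]]           # a partition: classical group Lasso
overlapping = [[0, 1, 2], [1, 2, 3]]  # overlapping groups

print(group_norm(x, disjoint))      # sqrt(5) + 2
print(group_norm(x, overlapping))   # sqrt(5) + sqrt(8)
```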

We consider a class of learning problems that involve a structured sparsity-inducing norm defined as the sum of $\ell_\infty$-norms over groups of variables.

We show that the proximal operator associated with a sum of $\ell_\infty$-norms can be computed exactly in polynomial time by solving a quadratic min-cost flow problem, allowing the use of accelerated proximal gradient methods.
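For intuition, here is a sketch of the simplest case (this is not the network-flow algorithm of the abstract, which handles overlapping groups): on a single group, Moreau decomposition gives $\mathrm{prox}_{\lambda\|\cdot\|_\infty}(x) = x - \Pi_{\lambda B_1}(x)$, the residual of projecting $x$ onto the $\ell_1$-ball of radius $\lambda$, and for disjoint groups the operator factorizes group by group. The function names (`project_l1_ball`, `prox_linf`, `prox_sum_linf`) are illustrative.

```python
import numpy as np

def project_l1_ball(v, radius):
    # Euclidean projection onto {z : ||z||_1 <= radius},
    # via the standard sort-based algorithm, O(d log d).
    a = np.abs(v)
    if a.sum() <= radius:
        return v.copy()
    u = np.sort(a)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(a - theta, 0.0)

def prox_linf(x, lam):
    # Moreau decomposition: prox of lam * ||.||_inf is the residual
    # of the projection onto the l1-ball of radius lam.
    return x - project_l1_ball(x, lam)

def prox_sum_linf(x, groups, lam):
    # For DISJOINT groups, the prox of sum_g lam * ||x_g||_inf factorizes.
    out = x.copy()
    for g in groups:
        idx = np.asarray(g)
        out[idx] = prox_linf(x[idx], lam)
    return out
```

Inside an accelerated proximal gradient loop (e.g. FISTA), one alternates a gradient step on the smooth loss with this prox step; the abstract's contribution is that the prox remains exactly computable even when the groups overlap.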