On Representer Theorems and Convex Regularization

@article{Boyer2019OnRT,
  title={On Representer Theorems and Convex Regularization},
  author={Claire Boyer and A. Chambolle and Yohann de Castro and Vincent Duval and Fr{\'e}d{\'e}ric de Gournay and Pierre Weiss},
  journal={ArXiv},
  year={2019},
  volume={abs/1806.09810}
}
We establish a general principle which states that regularizing an inverse problem with a convex function yields solutions that are convex combinations of a small number of atoms. These atoms are identified with the extreme points and elements of the extreme rays of the regularizer's level sets. An extension to a broader class of quasi-convex regularizers is also discussed. As a side result, we characterize the minimizers of the total gradient variation, a previously open problem.
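A standard illustration of this principle (our gloss, not verbatim from the paper): consider $\min \{\|x\|_1 : Ax = y\}$ with $A \in \mathbb{R}^{m \times n}$. The level set $\{\|x\|_1 \le t\}$ has extreme points $\pm t e_1, \dots, \pm t e_n$ and no extreme rays, so the principle yields a solution $x^\star = \sum_{k=1}^{p} \theta_k \epsilon_k t e_{i_k}$ with $\theta_k \ge 0$, $\sum_k \theta_k = 1$, $\epsilon_k \in \{\pm 1\}$ and $p \le m$, i.e. an $m$-sparse solution, recovering the classical $\ell_1$ representer theorem.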

Convex Regularization and Representer Theorems

TLDR
It is established that regularizing an inverse problem with the gauge of a convex set C yields solutions which are linear combinations of a few extreme points or elements of the extreme rays of C, which can be understood as the atoms of the regularizer.
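For reference, the gauge (Minkowski functional) of a convex set $C$ containing the origin is $j_C(x) = \inf\{t > 0 : x \in tC\}$; taking $C$ to be the unit $\ell_1$ ball recovers the $\ell_1$ norm, with the signed canonical vectors $\pm e_i$ as its atoms.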

Extremal points and sparse optimization for generalized Kantorovich-Rubinstein norms

A precise characterization of the extremal points of sublevel sets of nonsmooth penalties provides both detailed information about minimizers and optimality conditions in general classes of …

Convex optimization in sums of Banach spaces

Sparse optimization on measures with over-parameterized gradient descent

TLDR
This work shows that the problem of sparse optimization over measures can be solved by discretizing the measure and running non-convex gradient descent on the positions and weights of the particles, which leads to a global optimization algorithm with a complexity scaling as log(1/ε) in the desired accuracy.
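A minimal sketch of the particle approach (our illustration, not the authors' code; the cosine/sine forward model, particle count, and step size are all assumptions): discretize the measure as $\mu = \sum_i w_i \delta_{x_i}$ with many more particles than expected spikes, and run plain gradient descent on both weights and positions.

import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a sparse signed measure mu* = sum_j w_j delta_{x_j} on [0, 1].
true_pos = np.array([0.2, 0.5, 0.8])
true_w = np.array([1.0, -0.5, 0.7])

# Hypothetical forward operator: low-frequency Fourier moments of the measure.
freqs = 2 * np.pi * np.arange(1, 11)

def forward(pos, w):
    C = np.cos(np.outer(freqs, pos))  # C[k, i] = cos(f_k * x_i)
    S = np.sin(np.outer(freqs, pos))
    return np.concatenate([C @ w, S @ w])

y = forward(true_pos, true_w)

# Over-parameterize: 64 particles, random positions, zero initial weights.
n = 64
pos = rng.uniform(0.0, 1.0, n)
w = np.zeros(n)

def gradients(pos, w):
    C = np.cos(np.outer(freqs, pos))
    S = np.sin(np.outer(freqs, pos))
    r = np.concatenate([C @ w, S @ w]) - y          # residual of the 0.5*||.||^2 loss
    rc, rs = np.split(r, 2)
    g_w = C.T @ rc + S.T @ rs                       # d loss / d w_i
    # d loss / d x_i, via d cos(f x)/dx = -f sin(f x) and d sin(f x)/dx = f cos(f x)
    g_pos = w * ((C * freqs[:, None]).T @ rs - (S * freqs[:, None]).T @ rc)
    return 0.5 * r @ r, g_pos, g_w

step = 1e-3
for t in range(30000):
    loss, g_pos, g_w = gradients(pos, w)
    pos -= step * g_pos
    w -= step * g_w

# Particles carrying non-negligible weight should cluster near the true spikes.
active = np.abs(w) > 1e-2
print(f"final loss = {loss:.2e}")
print("active particle positions:", np.round(np.sort(pos[active]), 3))

Over-parameterization is what makes this non-convex descent well behaved: with enough particles it approximates a gradient flow on the space of measures, which is the mechanism behind the global convergence guarantee quoted above.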

Linear convergence of accelerated generalized conditional gradient methods

We propose an accelerated generalized conditional gradient method (AGCG) for the minimization of the sum of a smooth, convex loss function and a convex one-homogeneous regularizer over a Banach space …
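For orientation, here is the plain (non-accelerated) conditional gradient iteration that AGCG builds on, specialized to the finite-dimensional model problem $\min_x \frac{1}{2}\|Ax - y\|^2$ over the ball $\{\|x\|_1 \le \tau\}$; the function name and demo data are illustrative assumptions, not the authors' implementation.

import numpy as np

def frank_wolfe_l1(A, y, tau, iters=500):
    """Minimize 0.5 * ||A x - y||^2 over {||x||_1 <= tau} by conditional
    gradient: each step mixes in one signed atom +/- tau * e_i, i.e. an
    extreme point of the constraint set."""
    n = A.shape[1]
    x = np.zeros(n)
    for t in range(iters):
        g = A.T @ (A @ x - y)          # gradient of the smooth loss
        i = int(np.argmax(np.abs(g)))  # atom most correlated with -g
        s = np.zeros(n)
        s[i] = -tau * np.sign(g[i])
        gamma = 2.0 / (t + 2.0)        # standard step-size schedule
        x = (1.0 - gamma) * x + gamma * s
    return x

# Tiny demo: recover a 3-sparse vector from 20 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 60))
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 1.5]
x_hat = frank_wolfe_l1(A, A @ x_true, tau=np.abs(x_true).sum())
print("support found:", np.flatnonzero(np.abs(x_hat) > 0.1))

By construction, the iterate after $t$ steps is a convex combination of at most $t$ extreme points of the ball, which ties the algorithm directly to the representer theorem discussed above; acceleration and the extension to Banach spaces are where the cited paper's contribution lies.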

Iterative Discretization of Optimization Problems Related to Superresolution

  • Axel Flinth, P. Weiss
  • Mathematics
    2019 13th International Conference on Sampling Theory and Applications (SampTA)
  • 2019
We study an iterative discretization algorithm for solving optimization problems regularized by the total variation norm over the space $\mathcal{M}\left( \Omega \right)$ of Radon measures on a …

Regularized Learning in Banach Spaces

TLDR
This article presents a different way to study the theory of regularized learning for generalized data, including representer theorems and convergence theorems, and shows how the existence and convergence of the approximate solutions are guaranteed by the weak* topology.

On the linear convergence rates of exchange and continuous methods for total variation minimization

TLDR
It is proved that continuously optimizing the amplitudes and positions of the target measure will succeed at a linear rate with a good initialization, and it is proposed to combine the two approaches into an alternating method.
...

References

Showing 1-10 of 55 references

Extreme point inequalities and geometry of the rank sparsity ball

TLDR
A calculus (or algebra) of faces for general convex functions is developed, yielding a simple and unified approach for deriving inequalities balancing the various features of the optimization problem at hand, at the extreme points of the solution set.

Intersecting singularities for multi-structured estimation

TLDR
By analyzing theoretical properties of this family of regularizers, a new complexity index and a convex penalty approximating it are suggested, together with oracle inequalities and compressed sensing results ensuring the quality of the regularized estimator.

On Duality Theory of Conic Linear Problems

In this paper we discuss duality theory of optimization problems with a linear objective function and subject to linear constraints with cone inclusions, referred to as conic linear problems. …
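For orientation, the textbook primal-dual pair in this setting, stated in finite dimensions for simplicity, reads $\min_x \langle c, x\rangle$ subject to $Ax = b$, $x \in K$, with dual $\max_y \langle b, y\rangle$ subject to $c - A^* y \in K^*$, where $K^*$ denotes the dual cone of $K$.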

Exact solutions of infinite dimensional total-variation regularized problems

We study the solutions of infinite dimensional inverse problems over Banach spaces. The regularizer is defined as the total variation of a linear mapping of the function to recover, while the data …

Local Strong Homogeneity of a Regularized Estimator

This paper deals with regularized pointwise estimation of discrete signals which contain large strongly homogeneous zones, where typically they are constant, or linear, or more generally satisfy a …

On a theorem of Dubins

The Convex Geometry of Linear Inverse Problems

TLDR
This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems.
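Concretely, given a set of atoms $\mathcal{A}$, the induced penalty is the atomic norm $\|x\|_{\mathcal{A}} = \inf\{t > 0 : x \in t\,\mathrm{conv}(\mathcal{A})\}$: the atoms $\pm e_i$ give the $\ell_1$ norm, unit-norm rank-one matrices give the nuclear norm, and so on.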

Representer Theorems for Sparsity-Promoting $\ell_1$ Regularization

TLDR
The main outcome of the investigation is that the use of $\ell_1$ regularization is much more favorable for injecting prior knowledge: it results in a functional form that is independent of the system matrix, while this is not so in the $\ell_2$ scenario.

Convex analysis and minimization algorithms

Contents include: IX. Inner Construction of the Subdifferential; X. Conjugacy in Convex Analysis; XI. Approximate Subdifferentials of Convex Functions; XII. Abstract Duality for Practitioners; XIII. Methods of …

Continuous-Domain Solutions of Linear Inverse Problems With Tikhonov Versus Generalized TV Regularization

TLDR
The parametric form of the solution (representer theorems) is derived for Tikhonov (quadratic) and generalized total-variation (gTV) regularizations and it is shown that, in both cases, the solutions are splines that are intimately related to the regularization operator.
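In symbols (our paraphrase of that line of work): for a regularization operator $\mathrm{L}$ and $m$ linear measurements, the extreme-point solutions in the gTV case are nonuniform $\mathrm{L}$-splines, i.e. functions $s$ satisfying $\mathrm{L}\{s\} = \sum_{k=1}^{K} a_k\,\delta(\cdot - x_k)$, where the number of knots $K$ is bounded by the number of measurements $m$.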
...