# On Representer Theorems and Convex Regularization

```bibtex
@article{Boyer2019OnRT,
  title   = {On Representer Theorems and Convex Regularization},
  author  = {Claire Boyer and A. Chambolle and Yohann de Castro and Vincent Duval and Fr{\'e}d{\'e}ric de Gournay and Pierre Weiss},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1806.09810}
}
```

We establish a general principle: regularizing an inverse problem with a convex function yields solutions that are convex combinations of a small number of atoms. These atoms are identified with the extreme points and elements of the extreme rays of the regularizer's level sets. An extension to a broader class of quasi-convex regularizers is also discussed. As a side result, we characterize the minimizers of the total gradient variation, which was previously an open problem.
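The principle above can be sketched schematically as follows (an illustrative statement only; the symbols $\Phi$, $y$, $m$ and the constrained form are chosen here for concreteness, and the paper's actual hypotheses are more general):

```latex
% Schematic representer theorem: for a convex regularizer R on a vector
% space E and m linear measurements, consider
\min_{x \in E} \; R(x)
\quad \text{subject to} \quad \Phi x = y, \qquad y \in \mathbb{R}^m .
% Then there exists a solution that is a convex combination of few atoms,
x^\star = \sum_{i=1}^{p} \theta_i\, u_i,
\qquad p \le m, \quad \theta_i \ge 0, \quad \sum_{i=1}^{p} \theta_i = 1,
% where each atom u_i is an extreme point of the level set
% \{ x \in E : R(x) \le R(x^\star) \}, or lies on one of its extreme rays.
```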

## 64 Citations

### Convex Regularization and Representer Theorems

- Mathematics, ArXiv
- 2018

It is established that regularizing an inverse problem with the gauge of a convex set C yields solutions which are linear combinations of a few extreme points or elements of the extreme rays of C, which can be understood as the atoms of the regularizer.

### Extremal points and sparse optimization for generalized Kantorovich-Rubinstein norms

- Mathematics
- 2022

A precise characterization of the extremal points of sublevel sets of nonsmooth penalties provides both detailed information about minimizers and optimality conditions in general classes of…

### Sparse optimization on measures with over-parameterized gradient descent

- Computer Science, Math. Program.
- 2022

This work shows that the problem can be solved by discretizing the measure and running non-convex gradient descent on the positions and weights of the particles, leading to a global optimization algorithm whose complexity scales as $\log(1/\epsilon)$ in the desired accuracy $\epsilon$.
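The particle idea described in the snippet above can be illustrated with a toy sketch (all names and constants here, e.g. the Gaussian features, grid, and learning rate, are made up for illustration and are not the paper's setting or algorithm):

```python
import numpy as np

# Toy sketch: recover a sparse measure mu = sum_j a_j * delta_{t_j} from a
# few smooth measurements y_k = sum_j a_j * phi(t_j - s_k) by
# over-parameterizing with many particles (positions x, weights w) and
# running plain gradient descent on both.

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 30)   # measurement locations s_k (illustrative)
sigma = 0.05                       # width of the Gaussian features

def feats(pos):
    """Feature matrix F[k, i] = exp(-(pos_i - s_k)^2 / (2 sigma^2))."""
    return np.exp(-(pos[None, :] - grid[:, None]) ** 2 / (2 * sigma ** 2))

# Ground truth: two spikes at 0.3 and 0.7 with weights 1.0 and 0.5.
y = feats(np.array([0.3, 0.7])) @ np.array([1.0, 0.5])

# Over-parameterized initialization: 50 particles, uniform positions.
x = rng.uniform(0.0, 1.0, 50)
w = np.full(50, 1.5 / 50)

def loss(x, w):
    r = feats(x) @ w - y
    return 0.5 * float(r @ r)

loss_init = loss(x, w)
lr = 1e-3
for _ in range(3000):
    F = feats(x)
    r = F @ w - y                                      # residual
    grad_w = F.T @ r                                   # d loss / d w_i
    dF = F * (grid[:, None] - x[None, :]) / sigma**2   # d F[k,i] / d x_i
    grad_x = w * (dF.T @ r)                            # chain rule
    w -= lr * grad_w
    x -= lr * grad_x

loss_final = loss(x, w)
```

After a few thousand steps the fit improves, and particles drift toward the true spike locations; the cited work analyzes when such over-parameterized descent provably finds a global optimum.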

### Linear convergence of accelerated generalized conditional gradient methods

- Mathematics
- 2021

We propose an accelerated generalized conditional gradient method (AGCG) for the minimization of the sum of a smooth, convex loss function and a convex one-homogeneous regularizer over a Banach…

### Atomic norm minimization for decomposition into complex exponentials and optimal transport in Fourier domain

- Mathematics, J. Approx. Theory
- 2020

### Energy on spheres and discreteness of minimizing measures

- Mathematics, Journal of Functional Analysis
- 2021

### Iterative Discretization of Optimization Problems Related to Superresolution

- Mathematics, 2019 13th International Conference on Sampling Theory and Applications (SampTA)
- 2019

We study an iterative discretization algorithm for solving optimization problems regularized by the total variation norm over the space $\mathcal{M}\left( \Omega \right)$ of Radon measures on a…

### Regularized Learning in Banach Spaces

- Mathematics, ArXiv
- 2021

This article presents a different way to study the theory of regularized learning for generalized data, including representer theorems and convergence theorems, and shows how the existence and convergence of the approximate solutions are guaranteed by the weak* topology.

### On the linear convergence rates of exchange and continuous methods for total variation minimization

- Mathematics, Math. Program.
- 2021

It is proved that continuously optimizing the amplitudes and positions of the target measure will succeed at a linear rate given a good initialization, and it is proposed to combine the two approaches into an alternating method.

## References

Showing 1–10 of 55 references

### Extreme point inequalities and geometry of the rank sparsity ball

- Mathematics, Math. Program.
- 2015

A calculus (or algebra) of faces for general convex functions is developed, yielding a simple and unified approach for deriving inequalities balancing the various features of the optimization problem at hand, at the extreme points of the solution set.

### Intersecting singularities for multi-structured estimation

- Computer Science, Mathematics, ICML
- 2013

By analyzing theoretical properties of this family of regularizers, a new complexity index and a convex penalty approximating it are proposed, together with oracle inequalities and compressed sensing results ensuring the quality of the regularized estimator.

### ON DUALITY THEORY OF CONIC LINEAR PROBLEMS

- Mathematics
- 2001

In this paper we discuss duality theory of optimization problems with a linear objective function and subject to linear constraints with cone inclusions, referred to as conic linear problems. We…

### Exact solutions of infinite dimensional total-variation regularized problems

- Mathematics, Information and Inference: A Journal of the IMA
- 2018

We study the solutions of infinite dimensional inverse problems over Banach spaces. The regularizer is defined as the total variation of a linear mapping of the function to recover, while the data…

### Local Strong Homogeneity of a Regularized Estimator

- Mathematics, SIAM J. Appl. Math.
- 2000

This paper deals with regularized pointwise estimation of discrete signals which contain large strongly homogeneous zones, where typically they are constant, or linear, or more generally satisfy a…

### The Convex Geometry of Linear Inverse Problems

- Computer Science, Found. Comput. Math.
- 2012

This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems.

### Representer Theorems for Sparsity-Promoting $\ell _{1}$ Regularization

- Mathematics, IEEE Transactions on Information Theory
- 2016

The main outcome of the investigation is that the use of $\ell_1$ regularization is much more favorable for injecting prior knowledge: it results in a functional form that is independent of the system matrix, whereas this is not so in the $\ell_2$ scenario.
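A concrete finite-dimensional special case of such sparsity results (stated here for illustration, not as the cited paper's exact theorem):

```latex
% With A \in \mathbb{R}^{m \times n} and m < n, the problem
\min_{x \in \mathbb{R}^n} \|x\|_1
\quad \text{subject to} \quad A x = y
% admits a minimizer x^\star with at most m nonzero entries:
% up to scaling, x^\star is a convex combination of at most m of the
% signed canonical basis vectors \pm e_i, which are precisely the
% extreme points of the \ell_1 unit ball.
```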

### Convex analysis and minimization algorithms

- Mathematics
- 1993

IX. Inner Construction of the Subdifferential; X. Conjugacy in Convex Analysis; XI. Approximate Subdifferentials of Convex Functions; XII. Abstract Duality for Practitioners; XIII. Methods of…

### Continuous-Domain Solutions of Linear Inverse Problems With Tikhonov Versus Generalized TV Regularization

- Mathematics, IEEE Transactions on Signal Processing
- 2018

The parametric form of the solution (representer theorems) is derived for Tikhonov (quadratic) and generalized total-variation (gTV) regularizations and it is shown that, in both cases, the solutions are splines that are intimately related to the regularization operator.