Non-convergence of the L-curve regularization parameter selection method

@article{Vogel1996NonconvergenceOT,
  title={Non-convergence of the L-curve regularization parameter selection method},
  author={Curtis R. Vogel},
  journal={Inverse Problems},
  year={1996},
  volume={12},
  pages={535-547}
}
  • C. Vogel
  • Published 1 August 1996
  • Mathematics
  • Inverse Problems
The L-curve method was developed for the selection of regularization parameters in the solution of discrete systems obtained from ill-posed problems. An analysis of this method is given for selecting a parameter for Tikhonov regularization. This analysis, which is carried out in a semi-discrete, semi-stochastic setting, shows that the L-curve approach yields regularized solutions which fail to converge for a certain class of problems. A numerical example is also presented which indicates that… 
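The setting the abstract describes can be sketched numerically. Below is a minimal illustration (not from the paper) of how an L-curve is traced for Tikhonov regularization of a discrete ill-posed problem; the Hilbert-matrix test problem, the noise level, and the grid of regularization parameters are all hypothetical choices:

```python
import numpy as np

# Hypothetical test problem (not from the paper): a Hilbert matrix is a
# classic ill-conditioned operator, used here only to illustrate the plot.
n = 12
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

# Tikhonov solution x_lam = argmin ||A x - b||^2 + lam^2 ||x||^2,
# computed stably through the SVD of A.
U, s, Vt = np.linalg.svd(A)
beta = U.T @ b

def tikhonov(lam):
    filt = s / (s**2 + lam**2)          # filtered inverse singular values
    return Vt.T @ (filt * beta)

# The L-curve: residual norm vs. solution norm over a grid of lambdas,
# usually displayed on a log-log scale.
lams = np.logspace(-10, 0, 50)
residual_norms = [np.linalg.norm(A @ tikhonov(l) - b) for l in lams]
solution_norms = [np.linalg.norm(tikhonov(l)) for l in lams]
```

Plotting log residual norm against log solution norm over the grid traces the characteristic "L"; the L-curve criterion picks the lambda at its corner, a choice which, per the abstract, can fail to yield convergent regularized solutions for a certain class of problems.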

Limitations of the L-curve method in ill-posed problems

This paper considers the Tikhonov regularization method with the regularization parameter chosen by the so-called L-curve criterion. An infinite dimensional example is constructed for which the… 

Direct analytic model of the L-curve for Tikhonov regularization parameter selection

We present an approximation to a direct, non-parametrized analytic expression for the L-curve used in the regularization of ill-conditioned linear systems which is constructed starting from the exact… 

The L-curve and its use in the numerical treatment of inverse problems

The L-curve is a log-log plot of the norm of a regularized solution versus the norm of the corresponding residual. It is a convenient graphical tool for… 

A simplified L-curve method as error estimator

A simplified version of the L-curve method is proposed that essentially replaces the curvature by the derivative of the parameterization on the $y$-axis, and it is shown that this method may serve as an error estimator under typical conditions.

Optimization tools for solving nonlinear ill-posed problems

Using the L- and a-curve, we consider how a nonlinear ill-posed Tikhonov regularized problem can be solved by a Gauss-Newton method. The solution to the problem is chosen from the point on the… 

Old and new parameter choice rules for discrete ill-posed problems

This paper studies the performance of known and new approaches to choosing a suitable value of the regularization parameter for the truncated singular value decomposition method and for the LSQR iterative Krylov subspace method in the situation when no accurate estimate of the norm of the error in the data is available.

L- and V-curves for optimal smoothing

The L-curve is a tool for the selection of the regularization parameter in ill-posed inverse problems. It is a parametric plot of the size of the residuals vs that of the penalty. The corner of the L… 
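The corner-selection idea in the snippet above can be sketched numerically. The following is a hypothetical illustration (not the algorithm of any paper listed here): locate the corner as the point of maximum discrete curvature of the parametric log-log curve.

```python
import numpy as np

# Hypothetical corner finder: pick the point of maximum discrete curvature
# on the parametric log-log L-curve (log residual norm, log solution norm).
def lcurve_corner(res_norms, sol_norms):
    x = np.log(np.asarray(res_norms, dtype=float))   # log residual norm
    y = np.log(np.asarray(sol_norms, dtype=float))   # log penalty norm
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return int(np.argmax(curvature))

# Toy data with an exact corner: a vertical leg (residual fixed, penalty
# norm falling) joined to a horizontal leg (penalty norm fixed).
logres = np.array([0.0] * 10 + list(range(11)), dtype=float)
logsol = np.array(list(range(10, 0, -1)) + [0] * 11, dtype=float)
corner = lcurve_corner(np.exp(logres), np.exp(logsol))
```

On real problems the curve is smooth rather than piecewise linear, and finite-difference curvature is noise-sensitive; published methods fit or parameterize the curve more carefully before locating the corner.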

Optimization tools for Tikhonov regularization of nonlinear equations using the L-curve and its dual

We consider the regularization of the finite dimensional nonlinear system of equations f(x)=0. The regularization is performed by formulating a Tikhonov problem with an unknown regularization… 
...

References

Showing 1-10 of 17 references

Limitations of the L-curve method in ill-posed problems

This paper considers the Tikhonov regularization method with the regularization parameter chosen by the so-called L-curve criterion. An infinite dimensional example is constructed for which the… 

The Use of the L-Curve in the Regularization of Discrete Ill-Posed Problems

A unifying characterization of various regularization methods is given and it is shown that the measurement of “size” is dependent on the particular regularization method chosen, and a new method is proposed for choosing the regularization parameter based on the L-curve.

Using the L--curve for determining optimal regularization parameters

Summary. The "L-curve" is a plot (on an ordinary or doubly logarithmic scale) of the norm of (Tikhonov-)regularized solutions of an ill-posed problem versus the norm of the residuals. We show… 

A Regularization Parameter in Discrete Ill-Posed Problems

An analysis of the shape of this plot is given, along with a theoretical justification for choosing the regularization parameter so that it corresponds to the "L-corner" of the plot considered on a logarithmic scale.

Analysis of Discrete Ill-Posed Problems by Means of the L-Curve

The main purpose of this paper is to advocate the use of the graph associated with Tikhonov regularization in the numerical treatment of discrete ill-posed problems, and to demonstrate several important relations between regularized solutions and the graph.

Convergence rates of approximate least squares solutions of linear integral and operator equations of the first kind

We consider approximations $\{x_n\}$ obtained by moment discretization to (i) the minimal $L^2$-norm solution of $\mathcal{K}x = y$, where $\mathcal{K}$ is a Hilbert-Schmidt integral operator on $L^2$, and to (ii) the least squares… 

Optimal choice of a truncation level for the truncated SVD solution of linear first kind integral equations when data are noisy

Given error-contaminated discrete data $z_i = \int_0^1 k(s_i, t) f(t)\,dt + \varepsilon_i$, $i = 1, \dots, n$, we apply the truncated singular value decomposition to find an approximate solution… 
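The truncated-SVD idea in the snippet above can be sketched in a few lines. This is a hypothetical illustration, not the paper's rule for choosing the truncation level: the test matrix, noise level, and the hand-picked cutoff k=6 are all assumptions.

```python
import numpy as np

# Truncated SVD solution of A x = b: keep only the k largest singular
# values when inverting, discarding the noise-dominated small ones.
def tsvd_solve(A, b, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ b)[:k] / s[:k]      # invert only the retained modes
    return Vt[:k].T @ coeffs

# Hypothetical discretized first-kind problem with noisy data.
n = 12
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0, np.pi, n))
rng = np.random.default_rng(1)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

# A moderate truncation level filters the noise amplification that the
# full pseudoinverse (k = n) suffers on the tiny singular values.
x_trunc = tsvd_solve(A, b, k=6)
x_full = tsvd_solve(A, b, k=n)
err_trunc = np.linalg.norm(x_trunc - x_true)
err_full = np.linalg.norm(x_full - x_true)
```

The cited paper's concern is precisely how to pick the truncation level from the noisy data; here k is fixed only to show the trade-off.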

Spline Models for Observational Data

Foreword 1. Background 2. More splines 3. Equivalence and perpendicularity, or, what's so special about splines? 4. Estimating the smoothing parameter 5. 'Confidence intervals' 6. Partial spline… 

Inverse Problems in the Mathematical Sciences

Contents: Introduction - Inverse problems modeled by integral equations of the first kind: Causation - Parameter estimation in differential equations: Model identification - Mathematical background

Practical Approximate Solutions to Linear Operator Equations When the Data are Noisy

  • G. Wahba
  • Mathematics, Computer Science
  • 1977
It is shown that the weighted cross-validation estimate $\hat\lambda$ estimates the value of $\lambda$ which minimizes $(1/n)\, E \sum_{j=1}^n \big[ (\mathcal{K} f_{n,\lambda})(t_j) - (\mathcal{K} f)(t_j) \big]^2$.