Elena Resmerita

For the approximate solution of ill-posed inverse problems, the formulation of a regularization functional involves two separate decisions: the choice of the residual minimizer and the choice of the regularizer. In this paper, the Kullback–Leibler functional is used for both. The resulting regularization method can solve problems for which the operator and …
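The Kullback–Leibler functional mentioned above can be sketched numerically. The snippet below is a minimal illustration, assuming the generalized (unnormalized) form of the KL functional that is standard in inverse problems with nonnegative data; the `eps` safeguard is an implementation convenience, not part of the definition.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Generalized Kullback-Leibler functional:
    KL(p, q) = sum_i [ p_i * log(p_i / q_i) - p_i + q_i ].
    Nonnegative, and zero exactly when p == q (componentwise)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # eps guards the logarithm against zero entries.
    return float(np.sum(p * np.log((p + eps) / (q + eps)) - p + q))
```

For example, `kl_divergence(p, p)` is 0 for any nonnegative `p`, while distinct arguments give a strictly positive value.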
The function D_f, called the Bregman distance associated with f, is always well defined because ∂f(x) is nonempty and bounded for all x ∈ X (see, e.g., [22]), so that the infimum in (1.1) cannot be −∞. It is easy to check that D_f(x, y) ≥ 0 and that D_f(x, x) = 0 for all x, y ∈ X. If f is strictly convex, then D_f(x, y) = 0 only when x = y. For t ∈ [0, ∞) …
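For the differentiable case, where the subgradient ∂f(y) is the single gradient ∇f(y), the Bregman distance reduces to D_f(x, y) = f(x) − f(y) − ⟨∇f(y), x − y⟩, and the nonnegativity properties quoted above are easy to check numerically. A minimal sketch, assuming this smooth single-valued setting:

```python
import numpy as np

def bregman_distance(f, grad_f, x, y):
    """Bregman distance D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>
    for a differentiable convex f (single-valued subgradient case)."""
    return f(x) - f(y) - np.dot(grad_f(y), x - y)

# Example: f(x) = ||x||^2, whose Bregman distance is ||x - y||^2.
f = lambda x: np.dot(x, x)
grad_f = lambda x: 2.0 * x
x = np.array([1.0, 2.0])
y = np.array([0.0, 1.0])
# bregman_distance(f, grad_f, x, y) equals ||x - y||^2 = 2.0
```

Convexity of f gives D_f(x, y) ≥ 0, and D_f(x, x) = 0 holds identically, matching the properties stated in the text.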
A convergent iterative regularization procedure based on the square of a dual norm is introduced for image restoration models with general (quadratic or non-quadratic) convex fidelity terms. Iterative regularization methods have previously been employed for image deblurring or denoising in the presence of Gaussian noise, which use L² (Tadmor et al. in …
In this paper we establish criteria for the stability of the proximal mapping Prox^f_φ = (∂φ + ∂f)⁻¹ associated to the proper lower semicontinuous convex functions φ and f on a reflexive Banach space X. We prove that, under certain conditions, if the convex functions φ_n converge in the sense of Mosco to φ and if ξ_n converges to ξ, then Prox^f_{φ_n}(ξ_n) converges to …
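A concrete special case of such a proximal mapping: in a Hilbert space with f = ½‖·‖² and φ(x) = λ‖x‖₁, the map (∂φ + ∂f)⁻¹ reduces to the classical proximal operator of φ, which is componentwise soft-thresholding. This is only an illustration of the mapping in the simplest setting, not the paper's general Banach-space framework.

```python
import numpy as np

def soft_threshold(xi, lam):
    """Proximal mapping of phi(x) = lam * ||x||_1 when f = (1/2)||.||^2:
    prox(xi) = argmin_x  phi(x) + (1/2) * ||x - xi||^2,
    which solves componentwise to shrinkage toward zero by lam."""
    xi = np.asarray(xi, dtype=float)
    return np.sign(xi) * np.maximum(np.abs(xi) - lam, 0.0)
```

For instance, `soft_threshold([3.0, -0.5, 1.0], 1.0)` shrinks each entry toward zero by 1, mapping the small entries to exactly zero; continuity of this map in its argument is the Hilbert-space shadow of the stability studied in the paper.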
This work extends the existing convergence analysis for discrete approximations of minimizers of convex regularization functionals. In particular, some solution concepts are generalized, namely the standard minimum norm solutions for squared norm regularizers and the R-minimizing solutions for general convex regularizers, respectively. A central part of the …