Audrey Repetti

We consider the minimization of a function G defined on R^N, which is the sum of a (not necessarily convex) differentiable function and a (not necessarily differentiable) convex function. Moreover, we assume that G satisfies the Kurdyka-Łojasiewicz property. Such a problem can be solved with the Forward-Backward algorithm. However, the latter algorithm may …
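To make the scheme concrete, here is a minimal sketch of the forward-backward (proximal gradient) iteration, applied for illustration to a convex LASSO instance; the function names and the test problem are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, step, n_iter=200):
    """Forward-backward iterations: x_{k+1} = prox_{step*g}(x_k - step*grad_f(x_k))."""
    x = x0
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Illustrative instance: min_x 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)                 # gradient of the smooth term
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - lam * t, 0.0)  # prox of t*lam*||.||_1
step = 1.0 / np.linalg.norm(A, 2) ** 2               # step below 2/L, L = Lipschitz const of grad_f
x_hat = forward_backward(grad_f, soft, np.zeros(50), step)
```

The proximity step reduces to componentwise soft-thresholding here because the nonsmooth term is separable.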
A number of recent works have emphasized the prominent role played by the Kurdyka-Łojasiewicz inequality in proving the convergence of iterative algorithms solving possibly nonsmooth/nonconvex optimization problems. In this work, we consider the minimization of an objective function satisfying this property, which is the sum of a not necessarily convex …
The l1/l2 ratio regularization function has shown good performance for retrieving sparse signals in a number of recent works, in the context of blind deconvolution. Indeed, it benefits from a scale-invariance property that is highly desirable in the blind context. However, the l1/l2 function raises some difficulties when solving the nonconvex and nonsmooth …
Based on a preconditioned version of the randomized block-coordinate forward-backward algorithm recently proposed in [23], several variants of block-coordinate primal-dual algorithms are designed in order to solve a wide array of monotone inclusion problems. These methods rely on a sweep of blocks of variables which are activated at each iteration according …
In this paper we determine the proximity functions of the sum and the maximum of componentwise (reciprocal) quotients of positive vectors. For the sum of quotients, denoted by Q1, the proximity function is just a componentwise shrinkage function, which we call q-shrinkage. This is similar to the proximity function of the l1-norm, which is given by …
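For comparison, the familiar proximity operator of the l1-norm (componentwise soft-thresholding) can be written in a few lines; this is the standard shrinkage the abstract refers to, not the q-shrinkage introduced in the paper.

```python
import numpy as np

def prox_l1(v, t):
    """Proximity operator of u -> t*||u||_1:
    componentwise soft-thresholding, prox(v)_i = sign(v_i) * max(|v_i| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

v = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
shrunk = prox_l1(v, 1.0)   # components with |v_i| <= 1 are set to zero
```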
This paper addresses the problem of recovering an image degraded by a linear operator and corrupted with an additive Gaussian noise with a signal-dependent variance. The considered observation model arises in several digital imaging devices. To solve this problem, a variational approach is adopted relying on a weighted least squares criterion which is …
In the context of next generation radio telescopes, like the Square Kilometre Array, the efficient processing of large-scale datasets is extremely important. Convex optimisation tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger datasets. We focus …
Primal-dual proximal optimization methods have recently gained much interest for dealing with very large-scale data sets encountered in many application fields such as machine learning, computer vision and inverse problems [1-3]. In this work, we propose a novel random block-coordinate version of such algorithms allowing us to solve a wide array of convex …
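As a generic illustration of the random block activation idea (a sketch under simplifying assumptions, not the algorithm proposed in this work), a random block-coordinate forward-backward update can be written as:

```python
import numpy as np

def block_coordinate_fb(grad_f, prox_block, x0, step, n_blocks, p=None, n_iter=1000, seed=0):
    """Random block-coordinate forward-backward: at each iteration one block of
    coordinates is drawn at random (with probabilities p) and updated; the
    remaining coordinates are left unchanged."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    blocks = np.array_split(np.arange(x.size), n_blocks)
    for _ in range(n_iter):
        j = rng.choice(n_blocks, p=p)            # sample the active block index
        idx = blocks[j]
        g = grad_f(x)[idx]                       # gradient restricted to the active block
        x[idx] = prox_block(x[idx] - step * g, step)
    return x
```

On a toy problem min_x 0.5*||x - c||^2 (prox of the zero function is the identity), each block update lands exactly on the corresponding block of c, so the iterates converge once every block has been activated at least once.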
The solution of many applied problems relies on finding the minimizer of a sum of smooth and/or nonsmooth convex functions possibly involving linear operators. In recent years, primal-dual methods have shown their efficiency in solving such minimization problems, their main advantage being their ability to deal with linear operators with no need to invert …
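A minimal sketch of such a primal-dual scheme (here a Chambolle-Pock-type iteration for 1D total-variation denoising, chosen purely for illustration) shows that the linear operator and its adjoint are only ever applied, never inverted:

```python
import numpy as np

def chambolle_pock_tv(b, lam, n_iter=500):
    """Primal-dual iterations for min_x 0.5*||x - b||^2 + lam*||D x||_1,
    with D the 1D forward-difference operator. D and its adjoint D^T are
    only applied forward; no linear system is solved."""
    n = b.size
    D = lambda x: np.diff(x)                                         # R^n -> R^{n-1}
    Dt = lambda y: np.concatenate(([-y[0]], -np.diff(y), [y[-1]]))   # adjoint of D
    tau = sigma = 0.5                        # tau * sigma * ||D||^2 <= 1 since ||D||^2 < 4
    x = b.copy(); x_bar = x.copy(); y = np.zeros(n - 1)
    for _ in range(n_iter):
        # dual step: prox of sigma*g* with g = lam*||.||_1 is a projection (clipping)
        y = np.clip(y + sigma * D(x_bar), -lam, lam)
        # primal step: prox of tau*0.5*||. - b||^2 in closed form
        x_new = (x - tau * Dt(y) + tau * b) / (1 + tau)
        x_bar = 2 * x_new - x                 # extrapolation
        x = x_new
    return x

# Illustrative run on a noisy piecewise-constant signal
rng = np.random.default_rng(1)
b = np.concatenate([np.zeros(25), np.ones(25)]) + 0.1 * rng.standard_normal(50)
x_hat = chambolle_pock_tv(b, lam=0.5)
```

The dual update stays cheap because the Fenchel conjugate of lam*||.||_1 is the indicator of an infinity-norm ball, whose proximity operator is a simple clip.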