We propose a convergence analysis of accelerated forward-backward splitting methods for composite function minimization when the proximity operator is not available in closed form and can only be computed up to a certain precision. We prove that the 1/k^2 convergence rate for the function values can be achieved if the admissible errors are of a certain …
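As a hedged illustration of the kind of scheme this abstract refers to, here is a minimal accelerated forward-backward (FISTA-style) sketch where the prox is supplied as a callable that may be computed only approximately. The names (fista, soft_threshold) and the lasso demo are assumptions for illustration, not the paper's implementation.

```python
# Minimal FISTA sketch for min_x g(x) + h(x), with g smooth (L-Lipschitz
# gradient) and prox_h possibly computed only up to some precision.
import numpy as np

def soft_threshold(v, t):
    # Exact prox of t*||.||_1; in the inexact setting this would be
    # replaced by an inner iterative solver run to a given tolerance.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(grad_g, prox_h, L, x0, iters=200):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = prox_h(y - grad_g(y) / L, 1.0 / L)   # (possibly inexact) prox step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# Demo: lasso, min_x 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((50, 20)), rng.standard_normal(50), 0.5
L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant of the gradient
x = fista(lambda z: A.T @ (A @ z - b),
          lambda v, s: soft_threshold(v, lam * s), L, np.zeros(20))
```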
Methanogen populations and their domains are poorly understood; however, in recent years research on this topic has emerged. The relevance of this field has also been enhanced by the growing economic interest in methanogen capabilities, particularly the production of methane from organic substrates. Attention has turned to the anaerobic digestion of wastes …
We deal with the problem of variable selection when variables must be selected group-wise, with possibly overlapping groups defined a priori. In particular, we propose a new optimization procedure for solving the regularization algorithm presented in [12], where the group lasso penalty is generalized to overlapping groups of variables. While in [12] the …
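One standard way to reduce overlapping groups to the disjoint case is variable replication (the so-called latent formulation); the sketch below illustrates that device under stated assumptions and is not necessarily the procedure of [12].

```python
import numpy as np

def group_soft_threshold(v, groups, t):
    # Prox of t * sum_g ||v_g||_2 for disjoint groups: block soft-thresholding.
    out = v.copy()
    for g in groups:
        n = np.linalg.norm(v[g])
        out[g] = 0.0 if n <= t else (1.0 - t / n) * v[g]
    return out

def replicate_columns(A, groups):
    # Duplicate each design-matrix column once per group containing it,
    # so the replicated groups are disjoint and the prox above applies.
    cols = [j for g in groups for j in g]
    new_groups, start = [], 0
    for g in groups:
        new_groups.append(list(range(start, start + len(g))))
        start += len(g)
    return A[:, cols], new_groups, cols
```

These pieces can be plugged into a forward-backward loop such as the FISTA sketch above; a coefficient for each original variable is then recovered by summing its replicated copies.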
In this paper we propose a general framework to characterize and solve the optimization problems underlying a large class of sparsity-based regularization algorithms. More precisely, we study the minimization of learning functionals that are sums of a differentiable data term and a convex nondifferentiable penalty. The latter penalties have recently …
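As a hedged sketch of the update such a framework typically yields (generic symbols, not the paper's notation): for a functional $E(w) = F(w) + J(w)$ with $F$ differentiable with $L$-Lipschitz gradient and $J$ convex, the forward-backward iteration is

$$w^{k+1} = \operatorname{prox}_{\tau J}\!\big(w^k - \tau \nabla F(w^k)\big), \qquad 0 < \tau < 2/L,$$

whose fixed points are exactly the minimizers of $E$.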
Proximal methods have recently been shown to provide effective optimization procedures for solving the variational problems defining ℓ1 regularization algorithms. The goal of the paper is twofold. First, we discuss how proximal methods can be applied to solve a large class of machine learning algorithms that can be seen as extensions of ℓ1 regularization …
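To make the role of the prox concrete, here are hedged sketches of proximity operators for two penalties commonly treated as extensions of plain ℓ1 regularization; each evaluates argmin_x 0.5*||x - v||^2 + t*J(x) in closed form. Function names are illustrative, not from the paper.

```python
import numpy as np

def prox_l1(v, t):
    # J(x) = ||x||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_elastic_net(v, t, mu):
    # J(x) = ||x||_1 + (mu/2)*||x||_2^2: soft-threshold, then shrink.
    return prox_l1(v, t) / (1.0 + t * mu)
```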
Recently, considerable research effort has been devoted to the design of methods for learning overcomplete dictionaries for sparse coding from data. However, learned dictionaries require the solution of an optimization problem for coding new data. In order to overcome this drawback, we propose an algorithm aimed at learning both a dictionary and its dual: a …
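A minimal sketch of the idea, under stated assumptions: alternate sparse coding and dictionary updates, then fit a linear "dual" map so new data can be encoded by a single matrix product. This is a simplified alternating scheme for illustration, not the paper's algorithm; all names and the choice lam=0.1 are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def learn_dictionary_and_dual(X, n_atoms, lam=0.1, iters=20, code_steps=50):
    d, n = X.shape
    rng = np.random.default_rng(0)
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        # Sparse coding: min_U 0.5*||X - D U||_F^2 + lam*||U||_1 via ISTA.
        L = np.linalg.norm(D, 2) ** 2 + 1e-12
        U = np.zeros((n_atoms, n))
        for _ in range(code_steps):
            U = soft_threshold(U - D.T @ (D @ U - X) / L, lam / L)
        # Dictionary update: least squares given the current codes.
        D = X @ np.linalg.pinv(U)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    # Dual operator: linear encoder C minimizing ||U - C X||_F^2,
    # so new data x is coded cheaply as C @ x.
    C = U @ np.linalg.pinv(X)
    return D, C
```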
In this paper we consider a regularization approach to variable selection when the regression function depends nonlinearly on a few input variables. The proposed method is based on a regularized least squares estimator penalizing large values of the partial derivatives. An efficient iterative procedure is proposed to solve the underlying variational problem …
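One natural form of such an estimator, written with illustrative symbols (hypothesis space $\mathcal{H}$, parameter $\tau$, empirical norm $\|\cdot\|_n$) that are assumptions rather than the paper's exact notation, is

$$\hat f = \arg\min_{f \in \mathcal{H}} \; \frac{1}{n}\sum_{i=1}^n \big(y_i - f(x_i)\big)^2 + \tau \sum_{a=1}^d \left\| \frac{\partial f}{\partial x^a} \right\|_n,$$

so that a variable $x^a$ is deemed irrelevant when the corresponding partial derivative is negligible over the sample.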
An extension of the Gauss-Newton algorithm is proposed to find local minimizers of penalized nonlinear least squares problems, under generalized Lipschitz assumptions. Convergence results of local type are obtained, as well as an estimate of the radius of the convergence ball. Some applications to solving constrained nonlinear equations are discussed and …
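For intuition, here is a minimal sketch of a Gauss-Newton iteration for a penalized problem, using a smooth quadratic (ridge) penalty so each step has a closed-form solve; the paper's setting and penalty class are more general, and all names here are illustrative.

```python
import numpy as np

def gauss_newton_ridge(r, J, x0, mu=1e-2, iters=50, tol=1e-10):
    # Gauss-Newton for min_x 0.5*||r(x)||^2 + 0.5*mu*||x||^2:
    # linearize r at x and solve the regularized normal equations.
    x = x0.astype(float).copy()
    for _ in range(iters):
        R, Jx = r(x), J(x)
        g = Jx.T @ R + mu * x                  # gradient of the penalized objective
        H = Jx.T @ Jx + mu * np.eye(len(x))    # Gauss-Newton model Hessian
        step = np.linalg.solve(H, g)
        x -= step
        if np.linalg.norm(step) < tol:
            break
    return x

# Demo: fit y ~ x0 * exp(x1 * t) with a small ridge penalty.
t = np.linspace(0, 1, 30)
y = 2.0 * np.exp(-1.5 * t)
r = lambda x: x[0] * np.exp(x[1] * t) - y
J = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
x_hat = gauss_newton_ridge(r, J, np.array([1.0, -1.0]))
```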
In this work we are interested in the problems of supervised learning and variable selection when the input-output dependence is described by a nonlinear function depending on a few variables. Our goal is to consider a sparse nonparametric model, hence avoiding linear or additive models. The key idea is to measure the importance of each variable in the …
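A self-contained sketch of a derivative-based importance measure in this spirit: approximate E[(∂f/∂x_a)^2] by central finite differences over a sample and rank variables by the result. The function below is a hypothetical stand-in, not the paper's estimator.

```python
import numpy as np

def derivative_importance(f, X, eps=1e-4):
    # Mean squared partial derivative of f along each coordinate,
    # estimated by central finite differences over the sample X.
    n, d = X.shape
    scores = np.zeros(d)
    for a in range(d):
        e = np.zeros(d)
        e[a] = eps
        diffs = (f(X + e) - f(X - e)) / (2 * eps)
        scores[a] = np.mean(diffs ** 2)
    return scores

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(300, 5))
f = lambda Z: np.sin(np.pi * Z[:, 0]) + Z[:, 2] ** 2   # truly depends on x1 and x3
print(np.argsort(derivative_importance(f, X))[::-1])   # variables 0 and 2 rank first
```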