Nonlinear residual minimization by iteratively reweighted least squares

  • Juliane Sigl
  • Published 26 April 2015
  • Mathematics, Computer Science
  • Computational Optimization and Applications
In this paper we address the numerical solution of minimal norm residuals of nonlinear equations in finite dimensions. We take particular inspiration from the problem of finding a sparse vector solution of phase retrieval problems by using greedy algorithms based on iterative residual minimizations in the $\ell_p$-norm, for $1 \le p \le 2$. Due to the mild smoothness of the problem, especially for $p \rightarrow 1$, we develop and analyze a generalized version of iteratively…
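The IRLS idea the abstract refers to can be sketched for a linear residual. This is a minimal illustration, not the paper's algorithm: the function name, the iteration count, and the damping constant `eps` that keeps the weights finite as $p \rightarrow 1$ are all assumptions made for the sketch.

```python
import numpy as np

def irls_lp_residual(A, y, p=1.0, iters=50, eps=1e-8):
    """Minimize ||A x - y||_p (1 <= p <= 2) by iteratively reweighted
    least squares: each step solves a weighted l2 problem whose weights
    w_i = (r_i^2 + eps)^((p-2)/2) come from the current residual."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # start from the l2 solution
    for _ in range(iters):
        r = A @ x - y
        w = (r**2 + eps) ** ((p - 2) / 2)      # eps guards the p -> 1 limit
        # weighted normal equations: A^T W A x = A^T W y
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return x
```

For $p = 2$ the weights are constant and every step reproduces ordinary least squares; as $p$ approaches 1 the scheme downweights large residuals, which is what makes it attractive for heavy-tailed noise.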
An Analysis of Sketched IRLS for Accelerated Sparse Residual Regression
It is shown that one of the most popular solution methods, iteratively reweighted least squares (IRLS), can be significantly accelerated by matrix sketching; its efficiency is demonstrated on a range of computer vision applications.
Inexact alternating optimization for phase retrieval with outliers
The Cramér–Rao bound is derived for the measurement model considered, under both Laplacian and Gaussian noise, and simulations show that the proposed approach outperforms state-of-the-art algorithms in heavy-tailed noise.
Inexact Alternating Optimization for Phase Retrieval in the Presence of Outliers
Simulations demonstrate that the proposed algorithms approach the CRB and outperform state-of-the-art algorithms in heavy-tailed noise.
Most of the available phase retrieval algorithms were explicitly or implicitly developed under a Gaussian noise model, using least squares (LS) formulations. However, in some applications of phase…
A Weak Selection Stochastic Gradient Matching Pursuit Algorithm
A weak-selection threshold method is proposed to select the atoms that best match the original signal; it shows better reconstruction performance than stochastic gradient algorithms when the original signal is a one-dimensional sparse signal, a two-dimensional image signal, or a low-rank matrix signal.
An algorithm for real and complex rational minimax approximation
A method to solve a wide range of problems on arbitrary domains in a fraction of a second of laptop time by a procedure consisting of two steps, the AAA–Lawson algorithm, available in Chebfun.
Robustness by Reweighting for Kernel Estimators: An Overview
Using least squares techniques, there is an awareness of the dangers posed by the occurrence of outliers present in the data. In general, outliers may totally spoil an ordinary least squares…
Extended OMP algorithm for sparse phase retrieval
A novel algorithm for sparse phase retrieval, and a modified version with a high recovery rate, are proposed; the quartic coherence of the measurement matrix is introduced to analyze the recovery condition.
A novel dictionary learning method based on total least squares approach with application in high dimensional biological data
This study proposes a novel robust dictionary learning algorithm, based on total least squares, that accounts for the inexactness of the data in modeling; results indicate that the method outperforms other dictionary learning methods on high-dimensional data.
Online and Batch Supervised Background Estimation Via L1 Regression
A surprisingly simple model to estimate supervised video backgrounds based on L1 regression, which matches or outperforms other state-of-the-art online and batch methods, both supervised and unsupervised, in virtually all quantitative and qualitative measures, in a fraction of their execution time.


On Iteratively Reweighted Algorithms for Nonsmooth Nonconvex Optimization in Computer Vision
The proposed algorithm sequentially optimizes suitably constructed convex majorizers, and convergence to a critical point is proved when the Kurdyka–Łojasiewicz property and additional mild restrictions hold for the objective function.
Iteratively reweighted least squares minimization for sparse recovery
Under certain conditions (known as the restricted isometry property, or RIP) on the $m \times N$ matrix $\Phi$ (where $m < N$), vectors $x \in \mathbb{R}^N$ that are sparse (i.e., have most of their entries equal to 0) can be…
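The IRLS variant for sparse recovery reweights the coefficients rather than the residuals: each step solves a weighted minimum-norm problem subject to the measurement constraint. A minimal sketch under the usual assumptions (underdetermined system, consistent measurements); the function name and the simple geometric epsilon decrease are illustrative, not the exact rule used in the cited paper.

```python
import numpy as np

def irls_sparse(A, y, iters=60, eps=1.0):
    """Approximate the minimum-l1 solution of the underdetermined system
    A x = y by iteratively reweighted least squares: each step solves
    min sum_i w_i x_i^2  s.t.  A x = y  in closed form, with weights
    w_i = 1 / sqrt(x_i^2 + eps) built from the previous iterate."""
    x = np.linalg.pinv(A) @ y                 # minimum-l2 starting point
    for _ in range(iters):
        D = np.diag(np.sqrt(x**2 + eps))      # D = W^{-1}, diagonal
        # closed-form weighted minimum-norm solution: x = D A^T (A D A^T)^{-1} y
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, y)
        eps = max(eps * 0.5, 1e-10)           # illustrative epsilon decrease
    return x
```

Every iterate satisfies the measurement constraint exactly; the decreasing weights progressively concentrate the mass of $x$ on few coordinates, mimicking $\ell_1$ minimization.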
Low-rank Matrix Recovery via Iteratively Reweighted Least Squares Minimization
An efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements designed for the simultaneous promotion of both a minimal nuclear norm and an approximately low-rank solution is presented.
Iteratively Re-weighted Least Squares minimization: Proof of faster than linear rate for sparse recovery
A specific recipe for updating weights is given that avoids technical shortcomings in other approaches, and for which it is shown that whenever the solution at a given iteration is sufficiently close to the limit, then the remaining steps of the algorithm converge exponentially fast.
Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm
A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided and Mathematical results on conditions for uniqueness of sparse solutions are also given.
The standard method for solving least squares problems which lead to non-linear normal equations depends upon a reduction of the residuals to linear form by first order Taylor approximations taken…
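The linearization described above is the core of the Gauss–Newton iteration: replace $r(x+d) \approx r(x) + J(x)\,d$ and solve the resulting linear least-squares problem for the step. A minimal sketch, with illustrative function names:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Minimize ||residual(x)||_2 by repeatedly linearizing the residual
    with a first-order Taylor approximation and solving the resulting
    linear least-squares problem for the step d."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)          # current residual vector
        J = jacobian(x)          # its Jacobian at x
        d = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + d
    return x
```

For an already-linear residual $r(x) = Ax - y$ the very first step lands on the least-squares solution; for genuinely nonlinear residuals the undamped step can overshoot far from the solution, which is what the Levenberg–Marquardt modifications address.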
Algorithms for Nonlinear Least-Squares Problems
Abstract: This paper addresses the nonlinear least-squares problem, which arises most often in data-fitting applications. Much research has focused on the development of specialized algorithms that…
Algorithms for the Solution of the Nonlinear Least-Squares Problem
This paper describes a modification to the Gauss–Newton method for the solution of nonlinear least-squares problems. The new method seeks to avoid the deficiencies in the Gauss–Newton method by…
Linearly Constrained Nonsmooth and Nonconvex Minimization
A novel algorithm to perform nonsmooth and nonconvex minimizations with linear constraints in Euclidean spaces is introduced and it is shown how this algorithm is actually a natural generalization of the well-known nonstationary augmented Lagrangian method for convex optimization.
Nonlinear total variation based noise removal algorithms
A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of…
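The total-variation minimization described above can be sketched in its unconstrained (Lagrangian) form by smoothed gradient descent on a 1-D signal. This is an illustrative toy, not the paper's constrained projection scheme: the fidelity weight `lam`, the smoothing constant `eps`, and the step size are assumed values.

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, eps=1e-2, step=0.05, iters=500):
    """Minimize 0.5*||u - f||^2 + lam * sum_i sqrt((u_{i+1}-u_i)^2 + eps)
    by gradient descent; eps smooths the non-differentiable TV term."""
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)                    # forward differences of u
        g = du / np.sqrt(du**2 + eps)      # derivative of smoothed |du|
        grad = u - f                       # gradient of the fidelity term
        grad[:-1] -= lam * g               # adjoint of np.diff, applied to g
        grad[1:] += lam * g
        u -= step * grad
    return u
```

The smoothed objective is convex, so with a small enough step the descent is monotone; smaller `eps` approximates true total variation more closely at the price of a stiffer gradient.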