The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization problem. Although the latter can be cast as a semidefinite programming problem, such an approach is computationally expensive for large matrices.
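For reference, a standard way to write the relaxation (notation assumed here: \mathcal{A} a linear map on matrices, b the data vector, and the nuclear norm the sum of singular values):

```latex
\min_{X}\ \operatorname{rank}(X)\ \ \text{s.t.}\ \ \mathcal{A}(X) = b
\qquad\longrightarrow\qquad
\min_{X}\ \|X\|_{*}\ \ \text{s.t.}\ \ \mathcal{A}(X) = b
```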
We introduce a new iterative regularization procedure for inverse problems based on the use of Bregman distances, with particular focus on problems arising in image processing. We are motivated by the problem of restoring noisy and blurry images via variational methods using total variation regularization. We obtain rigorous convergence results and effective stopping criteria for the general procedure.
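A minimal sketch of a Bregman-distance iteration of the kind described, assuming J is the regularizer (e.g. total variation) and H(u, f) a smooth fidelity term:

```latex
D_J^{p}(u, v) = J(u) - J(v) - \langle p,\, u - v \rangle, \quad p \in \partial J(v),
\qquad
u^{k+1} \in \arg\min_u\; D_J^{p^k}(u, u^k) + H(u, f), \quad
p^{k+1} = p^k - \nabla_u H(u^{k+1}, f).
```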
An efficient and numerically stable dual algorithm for positive definite quadratic programming is described which takes advantage of the fact that the unconstrained minimum of the objective function can be used as a starting point. Its implementation utilizes the Cholesky and QR factorizations and procedures for updating them. The performance of the dual algorithm is compared with that of primal algorithms on randomly generated test problems.
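As an illustration of the starting point the abstract mentions (a sketch only; the full dual active-set method and its QR-factorization updates are omitted), assuming the QP min ½xᵀGx + aᵀx subject to Cᵀx ≥ b with G positive definite:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def dual_qp_start(G, a, C, b):
    """Starting point for a dual QP method: the unconstrained minimum of
    1/2 x^T G x + a^T x, obtained via a Cholesky factorization of G.
    (Hypothetical helper; the active-set updates themselves are omitted.)"""
    L = cho_factor(G)                     # requires G positive definite
    x = cho_solve(L, -a)                  # unconstrained minimizer
    violated = np.where(C.T @ x < b)[0]   # constraints C^T x >= b not yet satisfied
    return x, violated
```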
We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖₁ : Au = f, u ∈ ℝⁿ}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u∈ℝⁿ} μ‖u‖₁ + (1/2)‖Au − f‖₂².
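A minimal NumPy sketch of this scheme, assuming ISTA as the inner solver for the unconstrained subproblem (the paper uses other subproblem solvers; the outer update below is the equivalent "add back the residual" form of Bregman iteration):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, f, mu, n_iter=500):
    """Solve min_u mu*||u||_1 + 0.5*||Au - f||^2 by ISTA (one possible inner solver)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = soft_threshold(u - A.T @ (A @ u - f) / L, mu / L)
    return u

def bregman_basis_pursuit(A, f, mu, n_outer=20):
    """Bregman iteration for min ||u||_1 s.t. Au = f: repeatedly solve the
    unconstrained subproblem and add the residual back to the data."""
    u, fk = np.zeros(A.shape[1]), f.copy()
    for _ in range(n_outer):
        u = ista(A, fk, mu)
        fk = fk + (f - A @ u)                  # "add back the residual" update
    return u
```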
The matrix rank minimization problem has applications in many fields such as system identification, optimal control and low-dimensional embedding. As this problem is NP-hard in general, its convex relaxation, the nuclear norm minimization problem, is often solved instead. Recently, Ma, Goldfarb and Chen proposed a fixed-point continuation algorithm for solving the nuclear norm minimization problem.
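A sketch of one such fixed-point iteration for the matrix-completion instance, assuming the sampling operator is an entrywise 0/1 mask (continuation, i.e. gradually decreasing μ, is omitted; svt and fixed_point_nnm are illustrative names):

```python
import numpy as np

def svt(Y, tau):
    """Singular value shrinkage: the matrix analogue of soft thresholding."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def fixed_point_nnm(M_obs, mask, mu, tau=1.0, n_iter=200):
    """Fixed-point iteration for min mu*||X||_* + 0.5*||P(X) - P(M)||_F^2,
    where P keeps only the observed entries (a sketch of the scheme the
    abstract attributes to Ma, Goldfarb and Chen, without continuation)."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        G = mask * (X - M_obs)           # gradient of the data-fit term
        X = svt(X - tau * G, tau * mu)   # gradient step followed by shrinkage
    return X
```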
We present an alternating direction method based on an augmented Lagrangian framework for solving semidefinite programming (SDP) problems in standard form. At each iteration, the algorithm, also known as a two-splitting scheme, minimizes the dual augmented Lagrangian function sequentially with respect to the Lagrange multipliers corresponding to the linear constraints and then with respect to the dual slack variables, while keeping the other variables fixed.
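In standard notation (assumed here: \mathcal{A} the constraint map with adjoint \mathcal{A}^*, μ > 0 a penalty parameter), the primal SDP and the dual augmented Lagrangian that is minimized alternately are:

```latex
\min_{X \succeq 0}\ \langle C, X \rangle \ \ \text{s.t.}\ \ \mathcal{A}(X) = b,
\qquad
\mathcal{L}_\mu(y, S, X) = -b^\top y + \langle X,\, \mathcal{A}^*(y) + S - C \rangle
  + \tfrac{1}{2\mu}\,\|\mathcal{A}^*(y) + S - C\|_F^2 .
```

The minimization over y is a linear least-squares problem, while the minimization over S ⪰ 0 amounts to a projection onto the positive semidefinite cone computed via a spectral decomposition.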
The Armijo and Goldstein step-size rules are modified to allow steps along a curvilinear path of the form x(α) = x + αs + α²d, where x is the current estimate of the minimum, s is a descent direction and d is a nonascent direction of negative curvature. By using directions of negative curvature when they exist, we are able to prove, under fairly mild assumptions, convergence to points satisfying second-order necessary conditions for optimality.
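A minimal sketch of backtracking along such a curvilinear path, assuming a plain Armijo-style sufficient-decrease test on the first-order term only (the modified rules in the paper also account for the curvature contribution of d):

```python
import numpy as np

def curvilinear_backtrack(f, x, s, d, g, sigma=1e-4, beta=0.5,
                          alpha=1.0, max_halvings=50):
    """Backtracking along x(a) = x + a*s + a^2*d, where s is a descent
    direction (g @ s < 0) and d a direction of negative curvature.
    The acceptance test below is a simplification: near saddle points,
    where g @ s is tiny, the paper's modified rules are needed."""
    fx = f(x)
    for _ in range(max_halvings):
        if f(x + alpha * s + alpha**2 * d) <= fx + sigma * alpha * (g @ s):
            break
        alpha *= beta
    return alpha
```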
Gaussian graphical models are of great interest in statistical learning. Because the conditional independencies between different nodes correspond to zero entries in the inverse covariance matrix of the Gaussian distribution, one can learn the structure of the graph by estimating a sparse inverse covariance matrix from sample data, by solving a convex maximum-likelihood problem with an ℓ1-regularization term.
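The convex problem in question is the ℓ1-penalized maximum-likelihood estimate min_{X≻0} ⟨Σ̂, X⟩ − log det X + ρ‖X‖₁. As an illustration of the same formulation (not the algorithm proposed here), scikit-learn's graphical lasso can be used:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
samples = rng.standard_normal((200, 10))   # 200 samples, 10 variables

# alpha is the l1 penalty weight; larger values give a sparser precision matrix
model = GraphicalLasso(alpha=0.2).fit(samples)
precision = model.precision_               # estimated sparse inverse covariance
print(np.sum(np.abs(precision) > 1e-8), "nonzero entries")
```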