The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast as a semidefinite programming problem, such an approach is …
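For context, the relaxation this abstract refers to is standard and can be written compactly (a generic linear map $\mathcal{A}$ and data $b$ are assumed here for illustration):

\[
\min_{X \in \mathbb{R}^{m \times n}} \operatorname{rank}(X) \ \ \text{s.t.}\ \mathcal{A}(X) = b
\quad\longrightarrow\quad
\min_{X \in \mathbb{R}^{m \times n}} \|X\|_* \ \ \text{s.t.}\ \mathcal{A}(X) = b,
\]

where $\|X\|_* = \sum_i \sigma_i(X)$, the nuclear norm (sum of singular values), is the convex envelope of $\operatorname{rank}(X)$ on the unit ball of the spectral norm, which is what makes it the tightest convex relaxation.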
We introduce a new iterative regularization procedure for inverse problems based on the use of Bregman distances, with particular focus on problems arising in image processing. We are motivated by the problem of restoring noisy and blurry images via variational methods, by using total variation regularization. We obtain rigorous convergence results, and …
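As a sketch of the general procedure (notation assumed here, not taken verbatim from the paper): given a convex regularizer $J$, e.g. total variation, and a fidelity term $H(u, f) = \tfrac{\lambda}{2}\|Ku - f\|_2^2$ with blur operator $K$, each Bregman step solves

\[
u^{k+1} = \arg\min_u \; D_J^{p^k}(u, u^k) + H(u, f), \qquad
p^{k+1} = p^k - \nabla_u H(u^{k+1}, f) \in \partial J(u^{k+1}),
\]

where $D_J^{p}(u, v) = J(u) - J(v) - \langle p, u - v \rangle$ is the Bregman distance; iterating restores fine-scale detail lost to the regularizer while the Bregman distance keeps the iterates regularized.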
We propose simple and extremely efficient methods for solving the basis pursuit problem $\min\{\|u\|_1 : Au = f,\ u \in \mathbb{R}^n\}$, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem $\min_{u \in \mathbb{R}^n} \mu\|u\|_1 + \tfrac{1}{2}\|Au - f\|_2^2$ …
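A minimal Python sketch of the two-level scheme (the inner solver here is plain ISTA, a stand-in chosen for brevity; function names and parameter values are illustrative, not the authors' code). For basis pursuit, Bregman iteration reduces to "adding back the residual" between outer solves:

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise shrinkage: the prox of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, f, mu, u0, n_iter=500):
    """Solve min_u mu*||u||_1 + 0.5*||Au - f||^2 by proximal gradient
    with step size 1/||A||^2 (a simple stand-in for the inner solver)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    u = u0.copy()
    for _ in range(n_iter):
        u = soft_threshold(u - step * A.T @ (A @ u - f), step * mu)
    return u

def bregman_basis_pursuit(A, f, mu=1.0, n_outer=20):
    """Bregman iteration for min ||u||_1 s.t. Au = f: each outer pass
    adds the residual back to the data, then re-solves the subproblem."""
    u = np.zeros(A.shape[1])
    fk = np.zeros_like(f)
    for _ in range(n_outer):
        fk = fk + (f - A @ u)      # residual add-back
        u = ista(A, fk, mu, u)     # inner unconstrained subproblem
    return u
```

Each outer pass drives $Au$ toward $f$, so in practice only a handful of inner solves is needed, which is the source of the "very small number of instances" claim.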
An efficient and numerically stable dual algorithm for positive definite quadratic programming is described which takes advantage of the fact that the unconstrained minimum of the objective function can be used as a starting point. Its implementation utilizes the Cholesky and QR factorizations and procedures for updating them. The performance of the dual …
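To illustrate just the starting point mentioned above (a sketch under stated assumptions; the full dual active-set method with QR updating is not reproduced here): for $\min_x \tfrac12 x^\top G x + a^\top x$ with $G$ positive definite, the unconstrained minimum solves $Gx = -a$ and is obtained stably from the Cholesky factorization.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def unconstrained_qp_minimum(G, a):
    """Starting point for a dual QP method: the unconstrained minimizer
    of 0.5*x'Gx + a'x, found by solving G x = -a with G = L L'."""
    c, low = cho_factor(G)           # Cholesky factorization of G
    return cho_solve((c, low), -a)   # stable triangular solves
```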
The matrix rank minimization problem has applications in many fields such as system identification, optimal control, low-dimensional embedding, etc. As this problem is NP-hard in general, its convex relaxation, the nuclear norm minimization problem, is often solved instead. Recently, Ma, Goldfarb and Chen proposed a fixed-point continuation algorithm for …
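The basic building block of such fixed-point schemes is the matrix shrinkage operator, i.e. the prox of the nuclear norm; a minimal sketch (the name `svt` is illustrative):

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the prox of tau*||.||_* at Y.
    Shrinks every singular value by tau, zeroing out the small ones."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt   # rescale columns of U, recombine
```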
We propose two line search primal-dual interior-point methods that approximately solve a sequence of equality constrained barrier subproblems. To solve each subproblem, our methods apply a modified Newton method and use an $\ell_2$-exact penalty function to attain feasibility. Our methods have strong global convergence properties under standard assumptions. …
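Schematically (generic notation assumed here, not the paper's), each barrier subproblem has the form

\[
\min_{x,\; s > 0} \; f(x) - \mu \sum_i \ln s_i \quad \text{s.t.} \quad c(x) + s = 0,
\]

and the $\ell_2$-exact penalty moves the constraint into the objective as a non-squared term $\nu\,\|c(x) + s\|_2$; because the penalty is non-smooth rather than quadratic, its minimizers are exactly feasible once $\nu$ is sufficiently large.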
Gaussian graphical models are of great interest in statistical learning. Because the conditional independencies between different nodes correspond to zero entries in the inverse covariance matrix of the Gaussian distribution, one can learn the structure of the graph by estimating a sparse inverse covariance matrix from sample data, by solving a convex …
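The convex problem in question is typically the $\ell_1$-penalized Gaussian log-likelihood, $\max_{X \succ 0}\ \log\det X - \operatorname{tr}(SX) - \rho\|X\|_1$ with sample covariance $S$ (the graphical lasso); a short sketch using scikit-learn's off-the-shelf estimator (the synthetic data and the value standing in for $\rho$ are illustrative):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
samples = rng.standard_normal((200, 10))        # 200 observations of 10 variables

model = GraphicalLasso(alpha=0.1).fit(samples)  # alpha plays the role of rho
precision = model.precision_                    # estimated sparse inverse covariance
print((np.abs(precision) > 1e-8).sum())         # nonzero entries <-> graph edges
```

Zero entries of `precision` correspond exactly to the conditional independencies described in the abstract, which is why sparsity in the estimate recovers the graph structure.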
In February 1979 a note by L. G. Khachiyan indicated how an ellipsoid method for linear programming can be implemented in polynomial time. This result has caused great excitement and stimulated a flood of technical papers. Ordinarily there would be no need for a survey of work so recent, but the current circumstances are obviously exceptional. Word of …
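For reference, the central-cut ellipsoid update underlying Khachiyan's result (standard notation, not drawn from the abstract): given $E_k = \{x : (x - x_k)^\top B_k^{-1}(x - x_k) \le 1\}$ in $\mathbb{R}^n$ and a violated inequality $a^\top x \le \beta$, the minimum-volume ellipsoid containing the half of $E_k$ on the feasible side of $a^\top x \le a^\top x_k$ is

\[
x_{k+1} = x_k - \frac{1}{n+1}\,\frac{B_k a}{\sqrt{a^\top B_k a}}, \qquad
B_{k+1} = \frac{n^2}{n^2 - 1}\left(B_k - \frac{2}{n+1}\,\frac{B_k a\, a^\top B_k}{a^\top B_k a}\right),
\]

and the guaranteed per-step volume shrinkage by a factor of roughly $e^{-1/(2(n+1))}$ is what yields the polynomial bound.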