# Optimization Algorithms for Data Analysis 1

@inproceedings{Wright2011OptimizationAF, title={Optimization Algorithms for Data Analysis 1}, author={Stephen J. Wright}, year={2011} }

where f is as in (1.0.1), ψ : Rn → R is a function that is usually convex and usually nonsmooth, and λ > 0 is a regularization parameter. We refer to (1.0.2) as a regularized minimization problem because the presence of the term involving ψ induces certain structural properties on the solution that make it more desirable or plausible in the context of the application. We describe iterative algorithms that generate a sequence {x^k}_{k=0,1,2,...} of points that, in the case of convex…
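
As a concrete instance of such an iterative scheme, here is a minimal proximal-gradient (ISTA) sketch for one common regularized problem, taking f(x) = ½‖Ax − b‖² and ψ(x) = ‖x‖₁ as illustrative choices (these specific choices are an assumption for the example, not taken from the abstract):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient(A, b, lam, steps=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1.

    Generates the iterate sequence {x^k} via
      x^{k+1} = prox_{alpha*lam*psi}(x^k - alpha * grad f(x^k)).
    """
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # step 1/L, with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)              # gradient of the smooth term f
        x = soft_threshold(x - alpha * grad, alpha * lam)
    return x
```

The ℓ1 term induces the structural property mentioned above: many components of the solution are driven exactly to zero.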


## References

Showing 1-10 of 37 references

### Exact matrix completion via convex optimization

- Computer Science, Mathematics, CACM
- 2012

It is proved that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries, and that objects other than signals and images can be perfectly reconstructed from very limited information.

### Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results

- Mathematics, Math. Program.
- 2011

An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for unconstrained optimization, generalizing at the same time an unpublished method due to Griewank, an algorithm by Nesterov and Polyak, and a proposal by Weiser et al.

### Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers

- Computer Science, Found. Trends Mach. Learn.
- 2011

It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
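
To make the method concrete, here is a minimal ADMM sketch for one of the statistical problems it is well suited to, the lasso: min ½‖Ax − b‖² + λ‖z‖₁ subject to x = z (the lasso instance and the fixed penalty ρ are illustrative assumptions, not specifics from this reference):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, steps=200):
    """Scaled ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    # The x-update is a ridge system; factor it once since rho is fixed.
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(steps):
        # x-update: (A^T A + rho I) x = A^T b + rho (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: prox of (lam/rho)*||.||_1 (soft-thresholding)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        # dual update on the scaled multiplier
        u = u + x - z
    return z
```

The split into a smooth x-step and a separable z-step is what makes the method attractive for distributed settings: each step can be solved in closed form, and the z-step decomposes componentwise.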

### Robust principal component analysis?

- Computer Science, JACM
- 2011

It is proved that, under suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit, which among all feasible decompositions minimizes a weighted combination of the nuclear norm and the ℓ1 norm; this suggests the possibility of a principled approach to robust principal component analysis.

### Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization

- Computer Science, Mathematics, SIAM Rev.
- 2010

It is shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
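
The computational workhorse inside most nuclear-norm minimization algorithms is singular value thresholding, the proximal operator of τ‖·‖₊ (a standard building block, sketched here for illustration rather than as the specific algorithm of this reference):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau*||.||_* at M.

    Shrinks each singular value by tau and drops those below tau,
    which lowers the rank of the result.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Iterating this operator inside a proximal-gradient or ADMM loop yields low-rank solutions, in the same way componentwise soft-thresholding yields sparse ones.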

### Introductory Lectures on Convex Optimization - A Basic Course

- Computer Science, Applied Optimization
- 2004

In the middle of the 1980s, the seminal paper by Karmarkar opened a new epoch in nonlinear optimization, and it became increasingly common for new methods to be accompanied by a complexity analysis, which was considered a better justification of their efficiency than computational experiments.

### A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization

- Computer Science, Math. Program.
- 2003

A nonlinear programming algorithm is proposed for solving semidefinite programs (SDPs) in standard form that replaces the symmetric, positive semidefinite variable X with a rectangular variable R according to the factorization X = RRᵀ.
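
The core idea of the factorization can be sketched on a toy problem: fit a low-rank PSD matrix X = RRᵀ to a symmetric target M by running gradient descent on the unconstrained factor R (this toy objective and the fixed step size are illustrative assumptions, not the SDP algorithm of the reference itself):

```python
import numpy as np

def low_rank_psd_fit(M, r, lr=0.01, steps=2000, seed=0):
    """Fit X = R @ R.T (PSD, rank <= r) to symmetric M by minimizing
    ||R R^T - M||_F^2 over the factor R (Burer-Monteiro style)."""
    rng = np.random.default_rng(seed)
    R = 0.1 * rng.standard_normal((M.shape[0], r))  # small random init
    for _ in range(steps):
        G = 4.0 * (R @ R.T - M) @ R   # gradient w.r.t. R
        R -= lr * G
    return R @ R.T
```

Positive semidefiniteness holds by construction for any R, so the PSD constraint of the SDP disappears, at the price of a nonconvex objective in R.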

### A Direct Formulation for Sparse PCA Using Semidefinite Programming

- Computer Science, Mathematics, SIAM Rev.
- 2007

A modification of the classical variational representation of the largest eigenvalue of a symmetric matrix, in which cardinality is constrained, is used to derive a semidefinite programming-based relaxation for the problem.

### Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity

- Computer Science, Math. Program.
- 2011

The approach is more general in that it allows the cubic model to be solved only approximately and may employ approximate Hessians, and the orders of these bounds match those proved for Algorithm 3.3 of Nesterov and Polyak, which minimizes the cubic model globally on each iteration.

### An Introduction to Optimization

- Computer Science, IEEE Antennas and Propagation Magazine
- 1996

An Introduction to Optimization, Second Edition helps students build a solid working knowledge of the field, including unconstrained optimization, linear programming, and constrained optimization.