Complementarity and nondegeneracy in semidefinite programming

@article{Alizadeh1997ComplementarityAN,
  title={Complementarity and nondegeneracy in semidefinite programming},
  author={Farid Alizadeh and Jean Pierre Haeberly and Michael L. Overton},
  journal={Mathematical Programming},
  year={1997},
  volume={77},
  pages={111-128}
}
Primal and dual nondegeneracy conditions are defined for semidefinite programming. Given the existence of primal and dual solutions, it is shown that primal nondegeneracy implies a unique dual solution and that dual nondegeneracy implies a unique primal solution. The converses hold if strict complementarity is assumed. Primal and dual nondegeneracy assumptions do not imply strict complementarity, as they do in LP. The primal and dual nondegeneracy assumptions imply a range of possible ranks for… 
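For context, a minimal sketch of the standard primal-dual SDP pair and the complementarity notions the abstract refers to (notation is the common one and is assumed here, not taken from the paper):

\[
\begin{aligned}
\text{(P)}\quad & \min_{X \succeq 0} \ \langle C, X\rangle \quad \text{s.t.}\ \langle A_i, X\rangle = b_i,\ i = 1,\dots,m,\\
\text{(D)}\quad & \max_{y,\ Z \succeq 0} \ b^\top y \quad \text{s.t.}\ \textstyle\sum_{i=1}^m y_i A_i + Z = C.
\end{aligned}
\]

When both problems are solvable with zero duality gap, optimal $X^*$ and $Z^*$ satisfy $X^* Z^* = 0$; the pair is strictly complementary if, in addition, $\operatorname{rank}(X^*) + \operatorname{rank}(Z^*) = n$.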
Constraint Nondegeneracy, Strong Regularity, and Nonsingularity in Semidefinite Programming
TLDR
This paper proves the equivalence between each of these conditions and the nonsingularity of Clarke's generalized Jacobian of the smoothed counterpart of this nonsmooth system used in several globally convergent smoothing Newton methods.
Equivalence of Two Nondegeneracy Conditions for Semidefinite Programs
Nondegeneracy assumptions are often needed in order to prove the local fast convergence of suitable algorithms as well as in the sensitivity analysis for semidefinite programs. One of the
Primal-Dual Affine-Scaling Algorithms Fail for Semidefinite Programming
TLDR
It is shown that the primal-dual affine-scaling algorithm using the NT direction for the same semidefinite programming problem always generates a sequence converging to the optimal solution.
Analyticity of weighted central paths and error bounds for semidefinite programming
TLDR
It is shown that every Cholesky-based weighted central path for semidefinite programming is analytic under strict complementarity, and this result is applied to homogeneous cone programming to show that the central paths defined by the known class of optimal self-concordant barriers are analytic in the presence of strictly complementary solutions.
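For reference, the (unweighted) central path assumed in this line of work consists of the solutions $(X(\mu), y(\mu), Z(\mu))$, $\mu > 0$, of the perturbed optimality system below; weighted variants, including the Cholesky-based ones studied in the paper, replace the right-hand side $\mu I$ by a weighting whose exact form is not reproduced here:

\[
\langle A_i, X\rangle = b_i,\quad \sum_{i=1}^m y_i A_i + Z = C,\quad XZ = \mu I,\quad X \succ 0,\ Z \succ 0.
\]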
Primal-Dual Affine-Scaling Algorithms Fail for Semidefinite Programming
In this paper, we give an example of a semidefinite programming problem in which primal-dual affine-scaling algorithms using the HRVW/KSH/M, MT, and AHO directions fail. We prove that each of these
Sensitivity of Solutions to Semidefinite Programs
Differential sensitivity of solutions to semidefinite programs is obtained by applying the implicit function theorem to a system of equations satisfied by the solutions. A certain Jacobian matrix
Universal duality in conic convex optimization
TLDR
The fact that the feasible sets of a primal convex program and its dual cannot both be bounded, unless they are both empty, is related to universal duality.
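A sketch of the conic primal-dual pair this boundedness statement concerns, in standard notation assumed here (with $K$ a closed convex cone and $K^*$ its dual cone):

\[
\inf_x \{\, \langle c, x\rangle : Ax = b,\ x \in K \,\}
\qquad\text{and}\qquad
\sup_y \{\, \langle b, y\rangle : c - A^{*} y \in K^{*} \,\}.
\]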
A COMPARISON OF THREE NONDEGENERACY CONDITIONS FOR SEMIDEFINITE PROGRAMS
Nondegeneracy assumptions are often needed in order to prove local fast convergence of suitable algorithms as well as in the sensitivity analysis for semidefinite programs. Here we investigate the
Local Duality of Nonlinear Semidefinite Programming
  • H. Qi
  • Mathematics
    Math. Oper. Res.
  • 2009
TLDR
This paper introduces the dual SSOSC at a Karush-Kuhn-Tucker triple of NSDP and studies its various characterizations and relationships to the primal nondegeneracy, and reveals that the nearest correlation matrix problem satisfies not only the primal and dual SSOSC but also the primal and dual nondegeneracy at its solution, suggesting that it is a well-conditioned QSDP.
Optimization with Semidefinite, Quadratic and Linear Constraints
We consider optimization problems where variables have either linear, or convex quadratic or semidefinite constraints. First, we define and characterize primal and dual nondegeneracy and strict

References

SHOWING 1-10 OF 20 REFERENCES
Primal-Dual Interior-Point Methods for Semidefinite Programming: Convergence Rates, Stability and Numerical Results
TLDR
The XZ+ZX method is more robust with respect to its ability to step close to the boundary, converges more rapidly, and achieves higher accuracy than the other methods considered, including Mehrotra predictor-corrector variants; issues of numerical stability are also addressed.
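As commonly presented (and assumed here rather than quoted from the paper), the XZ+ZX method symmetrizes the centrality condition as $XZ + ZX = 2\mu I$ and linearizes it, so that the Newton step $(\Delta X, \Delta y, \Delta Z)$ satisfies, together with the linearized feasibility equations,

\[
X \Delta Z + \Delta X\, Z + Z \Delta X + \Delta Z\, X \;=\; 2\mu I - (XZ + ZX).
\]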
On the Rank of Extreme Matrices in Semidefinite Programs and the Multiplicity of Optimal Eigenvalues
  • G. Pataki
  • Mathematics, Computer Science
    Math. Oper. Res.
  • 1998
TLDR
It is proved that clustering must occur at extreme points of the set of optimal solutions if the number of variables is sufficiently large, and a lower bound on the multiplicity of the critical eigenvalue is given.
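A frequently cited consequence, stated here in its standard form under the assumption of $m$ linear constraints: every extreme point $X$ of the feasible set $\{X \succeq 0 : \langle A_i, X\rangle = b_i,\ i = 1,\dots,m\}$ has rank $r$ satisfying

\[
\frac{r(r+1)}{2} \;\le\; m,
\]

so an optimal solution of such rank exists whenever the optimal set is nonempty and has an extreme point.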
First and second order analysis of nonlinear semidefinite programs
TLDR
Convexity, duality and first-order optimality conditions for nonlinear semidefinite programming problems are presented and sensitivity analysis of such programs is discussed.
Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices
TLDR
The sum of the largest eigenvalues of a symmetric matrix is a nonsmooth convex function of the matrix elements; a concise characterization of its subdifferential is given in terms of a dual matrix.
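For context, a standard way to make this precise (Ky Fan's variational characterization, given here as background rather than quoted from the paper): for a symmetric $n \times n$ matrix $A$ and $1 \le k \le n$,

\[
\sum_{i=1}^{k} \lambda_i(A) \;=\; \max\{\, \langle A, U\rangle : 0 \preceq U \preceq I,\ \operatorname{tr} U = k \,\},
\]

so the sum of the $k$ largest eigenvalues is a support function, and its subdifferential at $A$ is the set of maximizing dual matrices $U$.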
First and Second Order Analysis of Nonlinear Semidefinite Programs
In this paper we study nonlinear semidefinite programming problems. Convexity, duality and first-order optimality conditions for such problems are presented. A second-order analysis is also given.
Interior-point polynomial algorithms in convex programming
TLDR
This book presents the first unified theory of polynomial-time interior-point methods; several of the new algorithms described, e.g., the projective method, have been implemented, tested on "real world" problems, and found to be extremely efficient in practice.
On Eigenvalue Optimization
TLDR
A general framework for a smooth (differentiable) approach to optimization problems involving eigenvalues of symmetric matrices is presented, based on the concept of transversality borrowed from differential geometry.
Second Derivatives for Optimizing Eigenvalues of Symmetric Matrices
TLDR
The main idea is to minimize the maximum eigenvalue subject to a constraint that this eigenvalue has a certain multiplicity; the manifold $\Omega$ of matrices with such multiple eigenvalues is parameterized using a matrix exponential representation, leading to the definition of an appropriate Lagrangian function.
ON MATRICES DEPENDING ON PARAMETERS
1. Classical results (Apollonius, Descartes, Newton, Harnack). Hilbert's sixteenth problem.