A Support Function Based Algorithm for Optimization with Eigenvalue Constraints

  • Emre Mengi
  • Published 22 February 2017
  • Mathematics
  • SIAM J. Optim.
Optimization of convex functions subject to eigenvalue constraints is intriguing because of the peculiar analytical properties of eigenvalue functions, and is of practical interest because of a wide range of applications in fields such as structural design and control theory. Here we focus on the optimization of a linear objective subject to a constraint on the smallest eigenvalue of an analytic and Hermitian matrix-valued function. We propose a numerical approach based on quadratic support… 
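The quadratic support functions mentioned in the abstract can be illustrated with a minimal one-parameter sketch (all names below are hypothetical, and this is a simplification of the paper's constrained algorithm, not the method itself): given a bound γ on the curvature of the smallest eigenvalue function, each iterate contributes a global quadratic under-estimator, and the next iterate minimizes the piecewise maximum of these models over the interval.

```python
import numpy as np

def smallest_eig(A):
    """Smallest eigenvalue of a Hermitian matrix and a unit eigenvector."""
    w, V = np.linalg.eigh(A)
    return w[0], V[:, 0]

def minimize_smallest_eig(A0, A1, lo, hi, gamma, tol=1e-8, max_iter=100):
    """Minimize lambda_min(A0 + w*A1) over [lo, hi] via quadratic support
    functions -- a 1-D sketch of the support-function idea only.

    At each iterate w_k the model
        q_k(w) = lam(w_k) + lam'(w_k)*(w - w_k) - (gamma/2)*(w - w_k)**2
    globally under-estimates lambda_min when -gamma lower-bounds its
    curvature; the next iterate minimizes max_k q_k(w) over [lo, hi].
    """
    pts = []          # collected (w_k, lam_k, deriv_k) triples
    w = 0.5 * (lo + hi)
    best = np.inf     # best (smallest) eigenvalue observed so far
    for _ in range(max_iter):
        lam, v = smallest_eig(A0 + w * A1)
        deriv = np.real(v.conj() @ (A1 @ v))  # analytic eigenvalue derivative
        best = min(best, lam)
        pts.append((w, lam, deriv))
        # minimize the piecewise-quadratic model on a fine grid
        # (crude but sufficient for a sketch)
        grid = np.linspace(lo, hi, 2001)
        model = np.max([l + d * (grid - wk) - 0.5 * gamma * (grid - wk) ** 2
                        for wk, l, d in pts], axis=0)
        j = int(np.argmin(model))
        if best - model[j] <= tol:   # model minimum is a lower bound
            break
        w = grid[j]
    return best
```

For instance, with A0 = diag(1, 2) and A1 the 2×2 exchange matrix, the smallest eigenvalue of A0 + w·A1 is (3 − √(1+4w²))/2, whose minimum over [−1, 1] is (3 − √5)/2 ≈ 0.382, attained at the endpoints; the sketch above recovers this with gamma = 2.5.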


Nonsmooth Rate-of-Convergence Analyses of Algorithms for Eigenvalue Optimization
This work considers two recent algorithms for minimizing the largest eigenvalue of a Hermitian matrix dependent on one parameter, both previously proven globally convergent despite nonsmoothness, and proves that both algorithms converge rapidly.
Nonsmooth algorithms for minimizing the largest eigenvalue with applications to inner numerical radius
Two recent algorithms to minimize the largest eigenvalue of a Hermitian matrix dependent on one parameter are considered, both proven globally convergent despite nonsmoothness.
Computation of pseudospectral abscissa for large-scale nonlinear eigenvalue problems
In contrast to existing iterative approaches based on constructing low-rank perturbations and rightmost eigenvalue computations, the algorithm relies on computing only singular values of complex matrices, thereby further increasing efficiency and reliability.
Large-Scale and Global Maximization of the Distance to Instability
  • E. Mengi
  • Mathematics, Computer Science
    SIAM J. Matrix Anal. Appl.
  • 2018
This work considers the maximization of the distance to instability of a matrix dependent on several parameters, a nonconvex optimization problem that is likely to be nonsmooth, and proposes a globally convergent algorithm for the case when the matrix is of small size and depends on a few parameters.
Efficient Low-Rank Solution of Large-Scale Matrix Equations
Improved low-rank ADI methods for Lyapunov and Sylvester equations are used in Newton type methods for finding approximate solutions of quadratic matrix equations in the form of symmetric, continuous-time, but also more general nonsymmetric, algebraic Riccati equations.
Approximate residual-minimizing shift parameters for the low-rank ADI iteration
  • Patrick Kurschner
  • Computer Science
    ETNA - Electronic Transactions on Numerical Analysis
  • 2019
This article investigates self-generating shift parameters based on a minimization principle for the Lyapunov residual norm which outperform existing precomputed and dynamic shift parameter selection techniques, although their generation is more involved.


Numerical Optimization of Eigenvalues of Hermitian Matrix Functions
The global convergence of the algorithm is proved and it is shown that it can be effectively used for the minimization of extreme eigenvalues, e.g., the largest eigenvalue or the sum of the largest specified number of eigenvalues.
Structural Topology Optimization with Eigenvalues
The paper discusses interrelations of the problems and shows how solutions of one problem can be derived from solutions of the others, and presents equivalent reformulations as semidefinite programming problems with the property that, for the minimum volume and minimum compliance problem, each local optimizer of these problems is also a global one.
A Redistributed Proximal Bundle Method for Nonconvex Optimization
This work proposes an approach based on generating cutting-plane models, not of the objective function as most bundle methods do but of a local convexification of the objective function, which opens the way to create workable nonconvex algorithms based on nonconvex $\mathcal{VU}$ theory.
Algorithms for the computation of the pseudospectral radius and the numerical radius of a matrix
Two useful measures of the robust stability of the discrete-time dynamical system x_{k+1} = A x_k are the ε-pseudospectral radius and the numerical radius of A. The ε-pseudospectral radius of A is the largest of the moduli of the points in the ε-pseudospectrum of A.
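The ε-pseudospectral radius can be estimated, very crudely, by sampling the smallest singular value of zI − A on a polar grid, since z lies in the ε-pseudospectrum exactly when σ_min(zI − A) ≤ ε. This is a sketch only (the function name and grid sizes are illustrative assumptions); the criss-cross-style algorithms of the cited papers are far more efficient and accurate.

```python
import numpy as np

def pseudospectral_radius_grid(A, eps, r_max, n_r=101, n_theta=180):
    """Crude grid estimate of the eps-pseudospectral radius of A:
    the largest |z| with sigma_min(z*I - A) <= eps, searched over a
    polar grid of radius up to r_max. Accuracy is limited by the grid."""
    n = A.shape[0]
    best = 0.0
    for r in np.linspace(0.0, r_max, n_r):
        for t in np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False):
            z = r * np.exp(1j * t)
            # smallest singular value of z*I - A measures distance to
            # the set of matrices with eigenvalue z
            if np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1] <= eps:
                best = max(best, r)
    return best
```

As a sanity check: for a normal matrix, σ_min(zI − A) is the distance from z to the spectrum, so the ε-pseudospectral radius equals the spectral radius plus ε; e.g. for A = diag(0.5, 0.3) and ε = 0.1 the estimate should be close to 0.6.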
Newton's method for convex programming and Tchebycheff approximation
The rationale of Newton's method is exploited here in order to develop effective algorithms for solving the following general problem: given a convex continuous function F defined on a closed convex subset K of E_n, obtain a point x of K such that F(x) ≤ F(y) for all y in K.
Fast Algorithms for the Approximation of the Pseudospectral Abscissa and Pseudospectral Radius of a Matrix
New algorithms based on computing only the spectral abscissa or radius of a sequence of matrices are presented, generating a sequence of lower bounds for the pseudospectral abscissa or radius, proving a locally linear rate of convergence for $\varepsilon$ sufficiently small.
Robust stability and a criss‐cross algorithm for pseudospectra
An algorithm is presented for computing the 'pseudospectral abscissa', the largest real part of any point in the pseudospectrum, which measures the robust stability of A, and its global convergence and local quadratic convergence are proved.
Survey of Bundle Methods for Nonsmooth Optimization
An overview of the development and history of the bundle methods from the seventies to the present is given, which focuses on the convex unconstrained case with a single objective function.
Methods of Descent for Nondifferentiable Optimization
Methods with subgradient locality measures for minimizing nonconvex functions, and methods of feasible directions for nonconvex constrained minimization problems, are described.
Matrix analysis
This new edition of the acclaimed text presents results of both classic and recent matrix analyses using canonical forms as a unifying theme, and demonstrates their importance in a variety of applications.