The Euclidean distance degree of orthogonally invariant matrix varieties

@article{Drusvyatskiy2016TheED,
  title={The {E}uclidean distance degree of orthogonally invariant matrix varieties},
  author={Dmitriy Drusvyatskiy and Hon-Leung Lee and Giorgio Ottaviani and Rekha R. Thomas},
  journal={Israel Journal of Mathematics},
  year={2017},
  volume={221},
  pages={291--316}
}
The Euclidean distance degree of a real variety is an important invariant arising in distance minimization problems. We show that the Euclidean distance degree of an orthogonally invariant matrix variety equals the Euclidean distance degree of its restriction to diagonal matrices. We illustrate how this result can greatly simplify calculations in concrete circumstances. 
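As a concrete illustration of the transfer principle stated in the abstract (a numerical sketch of our own, assuming NumPy; the example variety, dimensions, and seed are illustrative choices, not taken from the paper): for the orthogonally invariant variety of m×n matrices of rank at most r, the critical points of the squared Frobenius distance from a generic matrix A are obtained by keeping r of its singular triples, so there are binom(min(m,n), r) of them — the same count one would get from the diagonal restriction, where one chooses which r diagonal entries to keep.

```python
import numpy as np
from itertools import combinations
from math import comb

rng = np.random.default_rng(0)
m, n, r = 3, 3, 1
A = rng.standard_normal((m, n))  # generic data matrix

U, s, Vt = np.linalg.svd(A)

# Critical points of X -> ||A - X||_F^2 on the rank-<=r variety:
# one for each choice of r singular triples (Eckart-Young-Mirsky).
critical = []
for S in combinations(range(min(m, n)), r):
    idx = list(S)
    X = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in idx)
    # Criticality check: the residual A - X is orthogonal to the selected
    # singular subspaces, hence to the tangent space of the variety at X.
    R = A - X
    assert np.allclose(U[:, idx].T @ R, 0)
    assert np.allclose(R @ Vt[idx, :].T, 0)
    critical.append(X)

# The count of critical points matches binom(min(m, n), r) = binom(3, 1).
print(len(critical))  # 3
```

For a generic 3×3 matrix and r = 1 this recovers the familiar fact that the nearest rank-one matrix is the leading singular triple, with the other two triples giving the remaining critical points.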
Real symmetric matrices with partitioned eigenvalues
On the Geometry of the Set of Symmetric Matrices with Repeated Eigenvalues
The volume of its intersection with the sphere is explicitly computed, and an Eckart–Young–Mirsky-type theorem for the distance function from a generic matrix to points in Δ is proved.
Explicit Global Minimization of the Symmetrized Euclidean Distance by a Characterization of Real Matrices with Symmetric Square
The optimal orthogonal matrices are determined which minimize the symmetrized Euclidean distance W(R; D) = ‖sym(RD − 1)‖.
A note on ED degrees of group-stable subvarieties in polar representations
In a recent paper, Drusvyatskiy, Lee, Ottaviani, and Thomas establish a “transfer principle” by means of which the Euclidean distance degree of an orthogonally-stable matrix variety can be computed
Algebraic degree of optimization over a variety with an application to $p$-norm distance degree
Studies an optimization problem whose feasible set is a real algebraic variety X and whose parametric objective function f_u is gradient-solvable with respect to the parametric data u, leading to the notion of the algebraic degree of optimization on X.
Exact solutions in low-rank approximation with zeros
The maximum likelihood degree of sparse polynomial systems
We consider statistical models arising from the common set of solutions to a sparse polynomial system with general coefficients. The maximum likelihood degree counts the number of critical points of
Tensors with eigenvectors in a given subspace
This work considers the Kalman variety of tensors having singular t-tuples with the first component in a given linear subspace and proves analogous results, which are new even in the case of matrices, using Chern classes for enumerative computations.
The Critical Space for Orthogonally Invariant Varieties
Let q be a nondegenerate quadratic form on V. Let X ⊂ V be invariant for the action of a Lie group G contained in SO(V, q). For any f ∈ V consider the function d_f from X to ℂ defined by

References

Showing 1–10 of 39 references.
The Euclidean Distance Degree of an Algebraic Variety
A theory of such nearest point maps of a real algebraic variety with respect to Euclidean distance from the perspective of computational algebraic geometry is developed.
Counting Real Critical Points of the Distance to Orthogonally Invariant Matrix Sets
This paper provides a general framework to compute and count the real smooth critical points of a data matrix on an orthogonally invariant set of matrices and compares the results to the recently introduced notion of Euclidean distance degree of an algebraic variety.
A geometric perspective on the Singular Value Decomposition
This is an introductory survey, from a geometric perspective, on the Singular Value Decomposition (SVD) for real matrices, focusing on the role of the Terracini Lemma. We extend this point of view
Locally symmetric submanifolds lift to spectral manifolds
In this work we prove that every locally symmetric smooth submanifold gives rise to a naturally defined smooth submanifold of the space of symmetric matrices, called spectral manifold, consisting of
Orthogonal Invariance and Identifiability
This work proves the analogous result for the property of “identifiability,” a notion central to many active-set-type optimization algorithms, for permutation-invariant convex functions of the eigenvalues of a symmetric matrix, leading to the wide applicability of semidefinite programming algorithms.
Derivatives of Spectral Functions
A spectral function of a Hermitian matrix X is a function which depends only on the eigenvalues of X; it is differentiable at X if and only if the underlying symmetric function f is differentiable at the eigenvalue vector λ(X), and a formula for the derivative is given.
Differentiability properties of isotropic functions
1. Introduction. Let Sym denote the linear space of all symmetric second-order tensors on an n-dimensional real vector space Vect with scalar product. (If Vect is identified with ℝⁿ, then Sym may
An analog of the singular value decomposition for complex orthogonal equivalence
We show that a complex m-by-n matrix A can be factored as A = PΛQᵀ, where P and Q are complex orthogonal matrices and Λ = [λᵢⱼ] is an m-by-n generalized diagonal matrix (λᵢⱼ = 0 if i ≠ j), if and only
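Several references above, notably "Derivatives of Spectral Functions," rest on the formula ∇(f∘λ)(X) = U diag(∇f(λ(X))) Uᵀ, where X = U diag(λ(X)) Uᵀ is an eigendecomposition of the symmetric matrix X. A minimal numerical sanity check, assuming NumPy (the test function f(x) = Σᵢ exp(xᵢ) and the dimensions are our illustrative choices, not from any of the papers listed):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n))
X = (B + B.T) / 2  # a generic symmetric matrix

# Spectral function F(X) = f(lambda(X)) with f(x) = sum(exp(x_i)),
# a symmetric function of the eigenvalues of X.
def F(X):
    return np.sum(np.exp(np.linalg.eigvalsh(X)))

# Derivative formula for spectral functions:
# grad F(X) = U diag(f'(lambda(X))) U^T, here f'(x) = exp(x).
lam, U = np.linalg.eigh(X)
grad = U @ np.diag(np.exp(lam)) @ U.T

# Finite-difference check of the directional derivative along a
# random symmetric direction H.
H = rng.standard_normal((n, n))
H = (H + H.T) / 2
t = 1e-6
fd = (F(X + t * H) - F(X - t * H)) / (2 * t)
assert abs(fd - np.sum(grad * H)) < 1e-6
```

The agreement between the finite-difference quotient and the trace inner product ⟨grad, H⟩ confirms the derivative formula for this choice of f.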