A general scheme for trust-region methods on Riemannian manifolds is proposed. A truncated conjugate-gradient algorithm is used to solve the trust-region subproblems. The method is illustrated on several problems from numerical linear algebra; in particular, for computing an extreme eigenspace of a symmetric/positive-definite matrix pencil, the method …
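A minimal sketch of the two ingredients described above, in the simplest concrete setting one might assume: the unit sphere as the manifold and the Rayleigh quotient x^T A x as the cost. The function names, stopping rules, and trust-region parameters are illustrative, not the paper's implementation.

```python
import numpy as np

def tau_to_boundary(eta, d, delta):
    # positive root of ||eta + tau d||^2 = delta^2
    ed, dd, ee = eta @ d, d @ d, eta @ eta
    return (-ed + np.sqrt(ed**2 + dd * (delta**2 - ee))) / dd

def truncated_cg(g, hess, delta, kappa=0.1, theta=1.0, max_iter=50):
    """Steihaug-Toint truncated CG for the trust-region subproblem
    min <g, eta> + 0.5 <eta, H eta>  subject to  ||eta|| <= delta."""
    eta, r, d = np.zeros_like(g), g.copy(), -g
    r0 = np.linalg.norm(g)
    for _ in range(max_iter):
        Hd = hess(d)
        if d @ Hd <= 0:                      # negative curvature: go to the boundary
            return eta + tau_to_boundary(eta, d, delta) * d
        alpha = (r @ r) / (d @ Hd)
        if np.linalg.norm(eta + alpha * d) >= delta:
            return eta + tau_to_boundary(eta, d, delta) * d
        eta, r_new = eta + alpha * d, r + alpha * Hd
        if np.linalg.norm(r_new) <= r0 * min(r0**theta, kappa):
            return eta                       # standard inner stopping rule
        d = -r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return eta

def rtr_rayleigh(A, x0, delta=0.5, max_outer=100, tol=1e-10):
    """Riemannian trust-region on the unit sphere, minimizing x^T A x
    (i.e., computing the eigenvector of the smallest eigenvalue)."""
    x = x0 / np.linalg.norm(x0)
    P = lambda x, v: v - (x @ v) * x         # projection onto the tangent space
    for _ in range(max_outer):
        f = x @ A @ x
        g = 2 * P(x, A @ x)                  # Riemannian gradient
        if np.linalg.norm(g) < tol:
            break
        H = lambda e: 2 * (P(x, A @ e) - f * e)   # Riemannian Hessian action
        eta = truncated_cg(g, H, delta)
        x_trial = (x + eta) / np.linalg.norm(x + eta)  # retraction
        ared = f - x_trial @ A @ x_trial               # actual reduction
        pred = -(g @ eta + 0.5 * (eta @ H(eta)))       # model reduction
        rho = ared / pred if pred > 0 else -1.0
        if rho < 0.25:
            delta *= 0.25
        elif rho > 0.75:
            delta = min(2 * delta, 2.0)      # cap is an arbitrary illustrative choice
        if rho > 0.1:
            x = x_trial
    return x
```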
We propose an algorithm for solving optimization problems defined on a subset of the cone of symmetric positive semidefinite matrices. This algorithm relies on the factorization X = YY^T, where the number of columns of Y fixes an upper bound on the rank of the positive semidefinite matrix X. It is thus very effective for solving problems that have a …
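A minimal sketch of the factorization idea on an assumed toy instance: fitting a rank-r positive semidefinite matrix X = YY^T to a given symmetric matrix C by gradient descent on the factor Y. The name, step size, and iteration count are illustrative, not the paper's algorithm.

```python
import numpy as np

def low_rank_psd_fit(C, r, steps=500, lr=1e-2, seed=0):
    """Minimize ||Y Y^T - C||_F^2 over n-by-r factors Y.
    X = Y Y^T is PSD by construction, with rank at most r."""
    rng = np.random.default_rng(seed)
    Y = 0.1 * rng.standard_normal((C.shape[0], r))
    for _ in range(steps):
        R = Y @ Y.T - C
        Y -= lr * 4 * R @ Y   # gradient of the cost w.r.t. Y (C symmetric)
    return Y
```

The rank constraint is thus enforced for free by the shape of Y, at the price of a nonconvex problem in Y.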
In the early eighties, Łojasiewicz [Loj84] proved that a bounded solution of the gradient flow of an analytic cost function converges to a well-defined limit point. In this paper, we show that the iterates of numerical descent algorithms, for an analytic cost function, share this convergence property if they satisfy certain natural descent conditions. …
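One standard way to formalize such descent conditions in symbols (the notation here is assumed for illustration, not quoted from the paper):

```latex
% There exists \sigma > 0 such that, for every k,
f(x_k) - f(x_{k+1}) \;\ge\; \sigma\,\|\nabla f(x_k)\|\,\|x_{k+1} - x_k\|,
\qquad \text{and} \qquad
x_{k+1} = x_k \;\Longrightarrow\; \nabla f(x_k) = 0.
```

Conditions of this kind let the Łojasiewicz gradient inequality be applied along the discrete iterate sequence, yielding convergence to a single limit point rather than a mere accumulation set.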
Optimization on manifolds is a rapidly developing branch of nonlinear optimization. Its focus is on problems where the smooth geometry of the search space can be leveraged to design efficient numerical algorithms. In particular, optimization on manifolds is well suited to dealing with rank and orthogonality constraints. Such structured constraints appear …
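As a concrete illustration of handling an orthogonality constraint geometrically, here is a textbook QR-based retraction onto the Stiefel manifold (matrices with orthonormal columns); this is a standard device in the field, not code from this particular source.

```python
import numpy as np

def qr_retraction(X, eta):
    """Map a step eta taken at X back onto {X : X^T X = I}
    via the Q factor of a QR decomposition."""
    Q, R = np.linalg.qr(X + eta)
    return Q * np.sign(np.diag(R))   # fix column signs for continuity in eta
```

An optimizer can then take unconstrained-looking steps in the tangent space and retract, instead of enforcing X^T X = I with explicit constraints.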
We consider large matrices of low rank and address the problem of recovering such a matrix when most of its entries are unknown. Matrix completion finds applications in recommender systems, where the rows of the matrix may correspond to items, the columns to users, and the known entries are the ratings given by users to some items. …
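A minimal sketch of low-rank matrix completion by gradient descent on a factorization; the mask, rank r, and step size are assumed illustrative parameters, and the abstract's own method may well differ (for instance, it may be a Riemannian algorithm).

```python
import numpy as np

def complete(M_obs, mask, r, steps=2000, lr=0.01, seed=0):
    """Recover a low-rank matrix from the entries where mask is True,
    by gradient descent on the factorization M ~ L @ R.T."""
    rng = np.random.default_rng(seed)
    m, n = M_obs.shape
    L = 0.1 * rng.standard_normal((m, r))
    R = 0.1 * rng.standard_normal((n, r))
    for _ in range(steps):
        E = mask * (L @ R.T - M_obs)            # residual on observed entries only
        L, R = L - lr * E @ R, R - lr * E.T @ L  # simultaneous factor updates
    return L @ R.T
```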
This paper studies the relations between the local minima of a cost function f and the stable equilibria of the gradient descent flow of f. In particular, it is shown that, under the assumption that f is real analytic, local minimality is necessary and sufficient for stability. Under the weaker assumption that f is indefinitely …
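In symbols (notation assumed here for illustration): the gradient descent flow of f is the ODE below, and the analytic case of the result says that stability of an equilibrium is equivalent to local minimality.

```latex
\dot{x}(t) = -\nabla f\big(x(t)\big), \qquad
f \text{ real analytic:}\quad
x^{\ast} \text{ is a stable equilibrium} \iff x^{\ast} \text{ is a local minimizer of } f.
```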
We propose a Newton-like iteration that evolves on the set of fixed-dimensional subspaces of R^n and converges locally cubically to the invariant subspaces of a symmetric matrix. This iteration is compared, in terms of numerical cost and global behavior, with three other methods that display the same property of cubic convergence. Moreover, we consider …
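For intuition, here is a sketch of one well-known cubically convergent iteration in this family, a block Rayleigh quotient iteration. It is a related method, not necessarily the paper's Newton iteration, and the names and iteration count are illustrative.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def block_rqi(A, Y0, iters=5):
    """Iterate toward a p-dimensional invariant subspace of symmetric A.
    Converges locally cubically near a solution."""
    Y = np.linalg.qr(Y0)[0]              # orthonormal basis of the current subspace
    for _ in range(iters):
        R = Y.T @ A @ Y                  # p-by-p Rayleigh quotient
        Z = solve_sylvester(A, -R, Y)    # solve the Sylvester equation A Z - Z R = Y
        Y = np.linalg.qr(Z)[0]           # re-orthonormalize
    return Y
```

The Sylvester solve becomes ill-conditioned exactly at convergence, as in scalar Rayleigh quotient iteration; in practice the basis is still accurate when that happens.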
Several blind source separation algorithms obtain a separating matrix by computing the congruence transformation that "best" diagonalizes a collection of covariance matrices. Recent methods avoid a pre-whitening phase and directly attempt to compute a non-orthogonal diagonalizing congruence. However, since the magnitude of the sources is unknown, there is …
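A common joint-diagonalization criterion of this kind is the sum of squared off-diagonal entries of the transformed covariance matrices, sketched below with assumed names; this is a generic criterion, not a specific paper's algorithm. Note that without a normalization constraint the cost can be driven to zero by shrinking B, a degeneracy tied to the unknown source magnitudes the abstract mentions.

```python
import numpy as np

def offdiag_cost(B, Cs):
    """Sum of squared off-diagonal entries of B @ C @ B.T over a
    collection of covariance matrices Cs."""
    total = 0.0
    for C in Cs:
        D = B @ C @ B.T
        total += np.sum(D**2) - np.sum(np.diag(D)**2)
    return total
```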
Newton's method for solving the matrix equation F(X) = AX - XX^T AX = 0 runs up against the fact that its zeros are not isolated, due to a symmetry of F under the action of the orthogonal group. We show how differential-geometric techniques can be exploited to remove this symmetry and obtain a "geometric" Newton algorithm that finds the zeros …
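The map F is easy to evaluate and check directly: for X with orthonormal columns, F(X) = 0 exactly when the span of X is an invariant subspace of the symmetric matrix A. A short sketch (the variable names are illustrative):

```python
import numpy as np

def F(A, X):
    """F(X) = A X - X X^T A X; vanishes when span(X) is invariant under A."""
    return A @ X - X @ (X.T @ A @ X)

# quick check: eigenvector columns of a random symmetric A give F(X) = 0
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)); A = A + A.T
_, V = np.linalg.eigh(A)
X = V[:, :2]                       # orthonormal basis of an invariant subspace
print(np.linalg.norm(F(A, X)))     # machine-precision level
```

The orthogonal-group symmetry is visible here too: replacing X by XQ for any orthogonal Q gives F(XQ) = F(X)Q, so zeros come in continuous families rather than isolated points.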