Corpus ID: 245117877

From the simplex to the sphere: Faster constrained optimization using the Hadamard parametrization

@inproceedings{Li2021FromTS,
  title={From the simplex to the sphere: Faster constrained optimization using the Hadamard parametrization},
  author={Qiuwei Li and Daniel Mckenzie and Wotao Yin},
  year={2021}
}
The standard simplex in R^n, also known as the probability simplex, is the set of nonnegative vectors whose entries sum to 1. Simplex constraints frequently appear in optimization problems arising in machine learning, statistics, data science, operations research, and beyond. We convert the standard simplex to the unit sphere and thus transform the corresponding constrained optimization problem into an optimization problem on a simple, smooth manifold. We show that KKT points and strict…
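To make the construction concrete, the sketch below (a minimal illustration, not the authors' implementation) uses the Hadamard parametrization x = w ⊙ w: if w lies on the unit sphere, then x is nonnegative and sums to 1, so a simplex-constrained problem min f(x) can be handled as min f(w ⊙ w) over the sphere with plain Riemannian gradient descent. The objective, step size, and iteration count are illustrative.

```python
import numpy as np

def sphere_to_simplex(w):
    # Hadamard parametrization: if ||w||_2 = 1, then x = w * w is
    # nonnegative and sums to 1, i.e. x lies on the probability simplex.
    return w * w

def riemannian_grad_step(w, grad_f, step=0.05):
    # Euclidean gradient of g(w) = f(w * w) by the chain rule: 2 * w * grad_f(w * w).
    g = 2.0 * w * grad_f(w * w)
    # Project the gradient onto the tangent space of the unit sphere at w.
    g_tan = g - np.dot(g, w) * w
    # Take a step and retract back onto the sphere by normalization.
    w_new = w - step * g_tan
    return w_new / np.linalg.norm(w_new)

# Toy problem: minimize f(x) = ||x - c||^2 over the simplex; the minimizer is c itself.
c = np.array([0.7, 0.2, 0.1])
grad_f = lambda x: 2.0 * (x - c)

w = np.ones(3) / np.sqrt(3.0)   # start above the center of the simplex
for _ in range(2000):
    w = riemannian_grad_step(w, grad_f)
print(sphere_to_simplex(w))      # approximately [0.7, 0.2, 0.1]
```

The appeal of the reformulation is that the sphere admits a closed-form retraction (normalization), so no projection onto the simplex is needed inside the loop.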


The effect of smooth parametrizations on nonconvex optimization landscapes

TLDR
A general framework for studying parametrizations by their effect on optimization landscapes is introduced, yielding new guarantees for an array of problems, some of which were previously treated on a case-by-case basis in the literature.
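As a toy illustration of the kind of landscape effect such a framework studies (not taken from either paper), squaring a variable to encode a nonnegativity constraint can introduce an extra first-order critical point of the lifted objective, but that point is a strict saddle:

```latex
% Constraint x >= 0 encoded by the quadratic lift x = w^2 (a one-dimensional
% analogue of the Hadamard parametrization):
\min_{x \ge 0} (x-1)^2
\;\longrightarrow\;
\min_{w \in \mathbb{R}} g(w) = (w^2 - 1)^2,
\qquad g'(w) = 4w(w^2 - 1), \qquad g''(w) = 12w^2 - 4.
% g'(w) = 0 at w = -1, 0, 1. The points w = \pm 1 map to the constrained
% minimizer x = 1, while w = 0 maps to x = 0, which is not a KKT point
% (f'(0) = -2 < 0). Since g''(0) = -4 < 0, the spurious point w = 0 is a
% strict saddle, so second-order (saddle-escaping) methods avoid it.
```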

Doubly majorized algorithm for sparsity-inducing optimization problems with regularizer-compatible constraints

TLDR
By exploiting absolute-value symmetry and other properties of the sparsity-inducing regularizer, a new algorithm, the Doubly Majorized Algorithm (DMA), is proposed; it uses projections onto the constraint set after a coordinate transformation in each iteration, and these projections can be performed efficiently.
