Local Convex Conjugacy and Fenchel Duality

@article{Bertsekas1978LocalCC,
  title={Local Convex Conjugacy and Fenchel Duality},
  author={Dimitri P. Bertsekas},
  journal={IFAC Proceedings Volumes},
  year={1978},
  volume={11},
  pages={1079-1084}
}

Citations

Fenchel Duality Theory and a Primal-Dual Algorithm on Riemannian Manifolds
This paper introduces a new notion of a Fenchel conjugate, which generalizes the classical Fenchel conjugation to functions defined on Riemannian manifolds, and investigates its properties.
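For orientation, the classical Euclidean Fenchel conjugate that this line of work generalizes is recalled below, together with a hedged sketch of the manifold version; pairing a cotangent vector with the logarithmic map at a base point is an assumption of this sketch, not a quotation of the paper's exact definition.

```latex
% Classical Fenchel conjugate of f : R^n -> (-inf, +inf]:
\[
  f^{*}(\xi) \;=\; \sup_{x \in \mathbb{R}^n} \bigl\{ \langle \xi, x \rangle - f(x) \bigr\}.
\]
% Hedged sketch of a Riemannian analogue at a base point m of a manifold M,
% pairing cotangent vectors with the logarithmic map (assumption: log_m is
% well defined, e.g. on a Hadamard manifold):
\[
  f^{*}_{m}(\xi) \;=\; \sup_{x \in \mathcal{M}} \bigl\{ \langle \xi, \log_{m} x \rangle - f(x) \bigr\},
  \qquad \xi \in T_{m}^{*}\mathcal{M}.
\]
```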
Fenchel Duality for Convex Optimization and a Primal Dual Algorithm on Riemannian Manifolds
TLDR
A duality theory that generalizes classical Fenchel conjugation to functions defined on Riemannian manifolds is introduced, and a primal-dual algorithm is proposed whose convergence is proven on Hadamard manifolds under appropriate assumptions.
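The sketch below shows the Euclidean primal-dual hybrid gradient iteration that such Riemannian primal-dual algorithms generalize; the 1-D total-variation denoising problem, step sizes, and function names are illustrative assumptions, not the cited paper's setting.

```python
import numpy as np

# Hedged sketch: Euclidean primal-dual hybrid gradient iteration for
# min_x 0.5*||x - b||^2 + lam*||D x||_1, where D is the 1-D
# forward-difference operator. Everything here (problem, step sizes)
# is an illustrative assumption.

def forward_diff(x):
    return np.diff(x)                       # D x

def forward_diff_T(y, n):
    # Adjoint of the forward-difference operator (D^T y).
    out = np.zeros(n)
    out[:-1] -= y
    out[1:] += y
    return out

def primal_dual_tv(b, lam=1.0, tau=0.25, sigma=0.25, theta=1.0, iters=500):
    n = b.size
    x = b.copy()
    x_bar = x.copy()
    y = np.zeros(n - 1)
    for _ in range(iters):
        # Dual step: prox of the conjugate of lam*||.||_1 is the
        # projection onto {||y||_inf <= lam}.
        y = np.clip(y + sigma * forward_diff(x_bar), -lam, lam)
        # Primal step: prox of tau * 0.5*||. - b||^2 in closed form.
        x_new = (x - tau * forward_diff_T(y, n) + tau * b) / (1.0 + tau)
        # Extrapolation.
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x

# Usage: denoise a noisy step signal.
rng = np.random.default_rng(0)
b = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
x = primal_dual_tv(b, lam=0.5)
```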
Fenchel Duality and a Separation Theorem on Hadamard Manifolds
In this paper, we introduce a definition of the Fenchel conjugate and Fenchel biconjugate on Hadamard manifolds based on the tangent bundle.
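In the Euclidean setting these definitions mimic, the biconjugate and the Fenchel–Moreau theorem read as follows; this is stated only for orientation, since the tangent-bundle definitions of the cited paper differ in detail.

```latex
% Euclidean biconjugate:
\[
  f^{**}(x) \;=\; \sup_{\xi \in \mathbb{R}^n} \bigl\{ \langle \xi, x \rangle - f^{*}(\xi) \bigr\}.
\]
% Fenchel-Moreau theorem: f** = f exactly when f is proper, convex, and
% lower semicontinuous; in general f** is the largest such minorant of f.
```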
Randomized Block Cubic Newton Method
TLDR
RBCN is the first algorithm with these properties, generalizing several existing methods and matching the best known bounds in all special cases, and it outperforms the state-of-the-art on a variety of machine learning problems, including cubically regularized least-squares, logistic regression with constraints, and Poisson regression.
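As a concrete anchor, the sketch below implements the scalar cubically regularized Newton step, the basic subproblem that block methods of this kind solve per block; the closed form and the quartic test function are illustrative, not taken from the cited paper.

```python
import numpy as np

# Hedged sketch: one cubically regularized Newton step in one dimension.
# The step s minimizes g*s + 0.5*H*s**2 + (M/6)*|s|**3, which in the
# scalar case has the closed form below.

def cubic_newton_step(g, H, M):
    """Minimizer of g*s + 0.5*H*s**2 + (M/6)*|s|**3 (scalar closed form)."""
    if g == 0.0:
        return 0.0
    return -2.0 * g / (H + np.sqrt(H * H + 2.0 * M * abs(g)))

def minimize_cubic_newton(grad, hess, x0, M=10.0, iters=50):
    x = x0
    for _ in range(iters):
        x = x + cubic_newton_step(grad(x), hess(x), M)
    return x

# Usage: a nonconvex quartic f(x) = 0.25*x**4 - x**2 + 0.5*x,
# chosen purely for illustration.
grad = lambda x: x**3 - 2.0 * x + 0.5
hess = lambda x: 3.0 * x**2 - 2.0
x_star = minimize_cubic_newton(grad, hess, x0=2.0)
```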
Incremental Aggregated Proximal and Augmented Lagrangian Algorithms
TLDR
Dual versions of incremental proximal algorithms are considered; these are incremental augmented Lagrangian methods for separable equality-constrained optimization problems. A closely related linearly convergent method for minimizing large differentiable sums subject to an orthant constraint is also proposed, which may be viewed as an incremental aggregated version of the mirror descent method.
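A minimal sketch of the incremental aggregated idea behind these methods, for a finite sum min_x Σ_i f_i(x): refresh one component gradient per iteration and step along the stored aggregate. The quadratic components below are an illustrative assumption, not the cited paper's setting.

```python
import numpy as np

# Hedged sketch of an incremental aggregated gradient loop for
# min_x sum_i f_i(x): only one component gradient is refreshed per
# iteration, while the sum of stored (possibly stale) gradients
# drives the step. Components f_i(x) = 0.5*(a_i . x - b_i)^2 are
# chosen purely for illustration.

def iag(A, b, step=0.01, epochs=100):
    n_samples, dim = A.shape
    x = np.zeros(dim)
    grads = np.zeros((n_samples, dim))      # last evaluated gradient per component
    agg = grads.sum(axis=0)                 # running aggregate of stored gradients
    for _ in range(epochs):
        for i in range(n_samples):
            g_new = (A[i] @ x - b[i]) * A[i]   # grad of 0.5*(a_i.x - b_i)^2
            agg += g_new - grads[i]            # refresh the aggregate
            grads[i] = g_new
            x -= step * agg / n_samples        # step along the averaged aggregate
    return x

# Usage: least squares recovered from the component sum.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
x_hat = iag(A, A @ x_true)
```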
Bregman Monotone Operator Splitting
TLDR
The proposed Bregman monotone operator splitting (B-MOS) is applied to an example application, and it is shown that an appropriate design of the Bregman divergence leads to faster convergence than conventional splitting algorithms.
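For orientation, the Bregman divergence that such splittings are built on is the following; the remark about the Euclidean special case is a standard fact, not a quotation from the cited paper.

```latex
% Bregman divergence generated by a differentiable, strictly convex phi:
\[
  D_{\phi}(x, y) \;=\; \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x - y \rangle \;\ge\; 0.
\]
% phi(x) = 0.5*||x||^2 recovers the squared Euclidean distance, so Bregman
% splittings contain the classical (Euclidean) ones as a special case.
```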

References

Combined Primal–Dual and Penalty Methods for Convex Programming
TLDR
A class of combined primal–dual and penalty methods for constrained minimization, which generalizes the method of multipliers, is proposed and analyzed; it is shown that the rate of convergence may be linear or superlinear with arbitrary Q-order of convergence, depending on the problem at hand.
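To make the method-of-multipliers connection concrete: for min f(x) subject to h(x) = 0, the classical multiplier iteration that this reference generalizes reads as follows (a standard statement, not the paper's combined primal–dual and penalty scheme).

```latex
% Augmented Lagrangian with penalty parameter c > 0:
\[
  L_{c}(x, \lambda) \;=\; f(x) + \lambda^{\top} h(x) + \tfrac{c}{2}\,\|h(x)\|^{2}.
\]
% Method of multipliers: minimize in x, then update the multiplier:
\[
  x^{k} \in \arg\min_{x} L_{c_k}(x, \lambda^{k}),
  \qquad
  \lambda^{k+1} = \lambda^{k} + c_k\, h(x^{k}).
\]
```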
Convexification procedures and decomposition methods for nonconvex optimization problems
TLDR
TLDR
This paper considers a procedure by which a nonconvex problem is convexified and transformed into one that can be solved with primal-dual methods, while preserving the separability needed to apply decomposition algorithms.
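One textbook convexification device in this spirit adds a quadratic proximal term, which preserves separability because it decomposes across coordinates; this illustrates the idea and is not necessarily the paper's exact procedure.

```latex
% For a twice differentiable f whose Hessian is bounded below, the proximal
% term makes the subproblem convex for large enough c, and it separates
% across coordinates:
\[
  \min_{x} \; f(x) + \tfrac{c}{2}\,\|x - x^{k}\|^{2},
  \qquad
  \tfrac{c}{2}\,\|x - x^{k}\|^{2} \;=\; \sum_{i} \tfrac{c}{2}\,(x_{i} - x_{i}^{k})^{2}.
\]
```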