Revisiting the Hahn-Banach Theorem and Nonlinear Infinite Programming

P. Montiel López and Manuel Ruiz Galán, "Revisiting the Hahn-Banach Theorem and Nonlinear Infinite Programming", arXiv: Functional Analysis.
Infinite programming and theorems of the alternative
In this paper, we obtain optimal versions of the Karush–Kuhn–Tucker, Lagrange multiplier, and Fritz John theorems for a nonlinear infinite programming problem in which both the number of equality and inequality constraints…
From Hahn–Banach Type Theorems to the Markov Moment Problem, Sandwich Theorems and Further Applications
The aim of this review paper is to recall known solutions for two Markov moment problems, which can be formulated as Hahn–Banach extension theorems, in order to emphasize their relationship with the…
Polynomial Approximation on Unbounded Subsets, Markov Moment Problem and Other Applications
This paper starts by recalling the author's results on polynomial approximation over a Cartesian product A of closed unbounded intervals and its applications to solving Markov moment problems. Under…
A minimax approach for inverse variational inequalities
arXiv [math.NA], 20 Feb 2020
In this work, we characterize the existence of a solution for a certain variational inequality by means of a classical minimax theorem. In addition, we propose a numerical algorithm for the solution


Nonlinear Programming via König's Maximum Theorem
An equivalent version of that fundamental result for finite-dimensional spaces, which is a sharp generalization of König's maximum theorem, implies several optimal statements of the Lagrange multiplier, Karush–Kuhn–Tucker, and Fritz John type for nonlinear programs with an objective function subject to both equality and inequality constraints.
The Hahn–Banach–Lagrange theorem
This article is about a new version of the Hahn–Banach theorem, which we will call the "Hahn–Banach–Lagrange theorem", since it deals very effectively with certain problems of Lagrange type, as well…
In this paper, based on the extended versions of the Farkas lemma for convex systems introduced recently in [9], we establish an extended version of a so-called Hahn–Banach–Lagrange theorem.
An elementary proof of the Karush–Kuhn–Tucker theorem in normed linear spaces for problems with a finite number of inequality constraints
We present an elementary proof of the Karush–Kuhn–Tucker theorem for the problem with a finite number of nonlinear inequality constraints in normed linear spaces under the linear independence…
A sharp Lagrange multiplier theorem for nonlinear programs
It is shown that the duality between optimal solutions and saddle points for the corresponding Lagrangian is equivalent to the infsup-convexity, a not very restrictive generalization of convexity which arises naturally in minimax theory of a finite family of suitable functions.
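In standard notation (a sketch of the general pattern, not the paper's exact statement), the saddle-point side of such a Lagrange multiplier theorem says that a feasible point x* together with a multiplier λ* ≥ 0 is a saddle point of the Lagrangian L(x, λ) = f(x) + λ g(x) when

```latex
L(x^{*}, \lambda) \;\le\; L(x^{*}, \lambda^{*}) \;\le\; L(x, \lambda^{*})
\qquad \text{for all } x \in C,\ \lambda \ge 0,
```

and the cited result shows that, under infsup-convexity, the existence of such a saddle point is equivalent to x* solving the constrained minimization problem.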
Bootstrapping the Mazur–Orlicz–König theorem
In this paper, we give some extensions of König's extension of the Mazur–Orlicz theorem. These extensions include generalizations of a surprising recent result of Sun Chuanfeng, and generalizations…
Sublinear functionals and conical measures
The paper is devoted to the concept of conical measures, which is central to the Choquet theory of integral representation in its final version. The conical measures need not be continuous…
Karush-Kuhn-Tucker Conditions for Nonsmooth Mathematical Programming Problems in Function Spaces
Lagrange multiplier rules for abstract optimization problems with mixed smooth and convex terms in the cost are applied to minimum-norm control problems and to a class of optimal control problems with distributed state constraints and nonsmooth cost.
On a generalized sup-inf problem
In this paper, necessary and sufficient conditions for solvability of nonlinear inequality systems are given using certain generalized convexity concepts. Our results imply some theorems of…
The Kuhn-Tucker Theorem in Concave Programming
In order to solve problems of constrained extrema, it is customary in the calculus to use the method of the Lagrangian multiplier. Let us, for example, consider a problem: maximize f(x_1, …, x_n)…
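As a toy illustration of the classical Lagrangian multiplier method mentioned above (a hypothetical example, not taken from the paper): maximizing f(x, y) = -(x² + y²) subject to x + y = 1 reduces, via the stationarity conditions of the Lagrangian L = f - λ(x + y - 1), to a small linear system.

```python
# Hypothetical worked example: maximize f(x, y) = -(x^2 + y^2)
# subject to g(x, y) = x + y - 1 = 0, via L = f - lam * g.
# Stationarity gives:  -2x - lam = 0,  -2y - lam = 0,  x + y = 1.
import numpy as np

# Assemble and solve the 3x3 system for (x, y, lam).
A = np.array([[-2.0,  0.0, -1.0],
              [ 0.0, -2.0, -1.0],
              [ 1.0,  1.0,  0.0]])
b = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)  # optimizer x = y = 0.5, multiplier lam = -1.0
```

The same pattern — turning a constrained extremum into a stationarity system for the Lagrangian — is what the Kuhn–Tucker theorem generalizes to inequality constraints.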