Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. The Approximate KKT (AKKT) and Approximate Gradient Projection (AGP) conditions are analyzed in this work. These conditions are not necessarily equivalent; implications between the different conditions, as well as counterexamples, are shown. Algorithmic…
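For reference, a standard formulation of the AKKT condition for the problem of minimizing f(x) subject to h(x) = 0 and g(x) ≤ 0 is sketched below (this is the usual statement from the literature, not necessarily the exact variant analyzed in the paper):

```latex
% Approximate-KKT (AKKT): a feasible point x* satisfies AKKT if there exist
% sequences x^k -> x*, lambda^k in R^m, mu^k in R^p_+ such that
\[
\nabla f(x^k) + \sum_{i=1}^{m} \lambda_i^k \nabla h_i(x^k)
              + \sum_{j=1}^{p} \mu_j^k \nabla g_j(x^k) \;\longrightarrow\; 0,
\qquad
\min\{\,-g_j(x^k),\ \mu_j^k\,\} \;\longrightarrow\; 0 \quad \forall j.
\]
```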
Sequential optimality conditions have recently played an important role in the analysis of the global convergence of optimization algorithms towards first-order stationary points, justifying their stopping criteria. In this paper we introduce a sequential optimality condition that takes into account second-order information and that allows us to improve the…
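A second-order sequential condition in this spirit typically strengthens the first-order AKKT requirements by an asymptotic positive-semidefiniteness requirement on the Hessian of the Lagrangian; the sketch below is ours for illustration only (the precise subspace S_k and tolerances ε_k vary across formulations in the literature):

```latex
% In addition to the first-order AKKT requirements, one asks that
\[
d^{\top} \nabla^2_{xx} L(x^k, \lambda^k, \mu^k)\, d \;\ge\; -\varepsilon_k \|d\|^2
\quad \text{for all } d \in S_k, \qquad \varepsilon_k \downarrow 0,
\]
% where S_k is a suitable approximation of the critical subspace at x^k.
```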
In this work we introduce a relaxed version of the constant positive linear dependence constraint qualification (CPLD), which we call RCPLD. This development is inspired by a recent generalization, due to Minchenko and Stakhovski, of the constant rank constraint qualification, called RCR. We show that RCPLD is enough to ensure the convergence of an…
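For context, CPLD can be stated as follows (standard formulation; A(x*) denotes the set of active inequality constraints at a feasible point x*):

```latex
% CPLD at x*: for every I \subseteq \{1,\dots,m\} and J \subseteq A(x^*),
\[
\{\nabla h_i(x^*)\}_{i \in I} \cup \{\nabla g_j(x^*)\}_{j \in J}
\ \text{positively linearly dependent}
\;\Longrightarrow\;
\]
\[
\{\nabla h_i(x)\}_{i \in I} \cup \{\nabla g_j(x)\}_{j \in J}
\ \text{linearly dependent for every } x \text{ near } x^*.
\]
```

Roughly speaking, RCPLD relaxes this by requiring the implication only for suitable subsets of the equality-constraint gradients.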
We present two new constraint qualifications (CQs) that are weaker than the recently introduced Relaxed Constant Positive Linear Dependence (RCPLD) constraint qualification. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact…
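The key notion here, positive linear dependence, is the standard one:

```latex
% The family {v_i}_{i in I} (equality gradients) together with
% {w_j}_{j in J} (active inequality gradients) is positively linearly
% dependent if
\[
\sum_{i \in I} \lambda_i v_i + \sum_{j \in J} \mu_j w_j = 0
\quad \text{for some } (\lambda, \mu) \neq 0 \ \text{with } \mu_j \ge 0\ \forall j.
\]
```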
Carathéodory's lemma states that if we have a linear combination of vectors in ℝⁿ, we can rewrite this combination using a linearly independent subset. This lemma has been applied successfully in many contexts in nonlinear optimization. In this work we present a new version of this celebrated result, in which we obtain new bounds for the size of the…
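As a concrete illustration, the classical reduction behind Carathéodory's lemma can be implemented directly; the sketch below (ours, not the paper's new version) repeatedly removes one vector from the combination until the remaining ones are linearly independent:

```python
# Classical Caratheodory-style reduction: rewrite sum_i alpha_i * v_i
# over a linearly independent subset of the v_i, keeping the sum unchanged.
import numpy as np

def caratheodory_reduce(V, alpha, tol=1e-12):
    """V: (k, n) array of vectors; alpha: (k,) coefficients.
    Returns (indices, new_alpha) with V[indices] linearly independent
    and new_alpha[indices] @ V[indices] == alpha @ V (up to rounding)."""
    idx = [i for i in range(len(alpha)) if abs(alpha[i]) > tol]
    alpha = alpha.astype(float)
    while len(idx) > 0:
        A = V[idx].T                                  # n x r active vectors
        if np.linalg.matrix_rank(A, tol=1e-9) == len(idx):
            break                                     # already independent
        # Find beta != 0 with A @ beta = 0 (last right-singular vector).
        beta = np.linalg.svd(A)[2][-1]
        # Step t chosen so that one active coefficient becomes zero;
        # the sum is unchanged because sum_i beta_i v_i = 0.
        t = min((alpha[i] / b for i, b in zip(idx, beta) if abs(b) > tol),
                key=abs)
        for i, b in zip(idx, beta):
            alpha[i] -= t * b
        idx = [i for i in idx if abs(alpha[i]) > tol]
    return idx, alpha

# Example: four vectors in R^2 must reduce to at most two.
V = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
a = np.array([1.0, 2.0, 3.0, 1.0])
idx, new_a = caratheodory_reduce(V, a)
assert np.allclose(new_a[idx] @ V[idx], a @ V)
```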
We introduce a new flexible Inexact-Restoration (IR) algorithm and an application to Multiobjective Constrained Optimization Problems (MCOP) under the weighted-sum scalarization approach. In IR methods each iteration has two phases: in the first phase one aims to improve feasibility, and in the second phase one minimizes a suitable objective function. In…
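Schematically, one IR iteration can be sketched as follows (a loose skeleton under our own naming; actual IR methods specify the merit function, trust region, and acceptance tests precisely):

```python
# Hedged skeleton of one Inexact-Restoration iteration: a feasibility
# (restoration) phase followed by an optimality phase on a merit model.
def ir_iteration(x, restore, minimize_tangent, merit, theta=0.5):
    """x: current iterate.
    restore(x)          -> y with reduced infeasibility near x.
    minimize_tangent(y) -> z approximately minimizing the objective on a
                           linearization of the feasible set at y.
    merit(x, theta)     -> combination of objective and infeasibility."""
    y = restore(x)                    # Phase 1: improve feasibility
    z = minimize_tangent(y)           # Phase 2: improve optimality
    # Accept z only if the merit function decreases; otherwise one would
    # shrink a trust region / adjust theta and retry (omitted here).
    return z if merit(z, theta) < merit(x, theta) else x
```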
In this paper we deal with optimality conditions that can be verified by a nonlinear optimization algorithm, where only a single Lagrange multiplier is available. In particular, we deal with a conjecture formulated in [R. Andreani, J. M. Martínez, M. L. Schuverdt, "On second-order optimality conditions for nonlinear programming", Optimization, 56:529–542, 2007], which states that whenever a local…
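As background, the weak second-order necessary condition at issue in this line of work can be summarized as follows (our summary of the standard statement, not the paper's own formulation):

```latex
% Weak second-order necessary condition: at a local minimizer x* with
% some Lagrange multiplier (lambda, mu),
\[
d^{\top} \nabla^2_{xx} L(x^*, \lambda, \mu)\, d \;\ge\; 0
\quad \text{for all } d \text{ with }
\nabla h_i(x^*)^{\top} d = 0 \ \forall i, \quad
\nabla g_j(x^*)^{\top} d = 0 \ \forall j \in A(x^*).
\]
```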
Augmented Lagrangian methods with convergence to second-order stationary points, in which any constraint can be penalized or carried over to the subproblems, are considered in this work. Each subproblem can be solved by any numerical algorithm able to return approximate second-order stationary points. The developed global convergence theory is…
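A minimal sketch of the classical Powell-Hestenes-Rockafellar augmented Lagrangian iteration is given below (equality constraints only, with a generic inner solver standing in for the subproblem solver; all names are ours, and the second-order safeguards of the paper are omitted):

```python
# PHR augmented Lagrangian for: minimize f(x) s.t. h(x) = 0.
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, mu0=10.0, iters=20, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(h(x)))
    mu = mu0
    for _ in range(iters):
        # Inner subproblem: minimize the augmented Lagrangian.
        L = lambda z: f(z) + lam @ h(z) + 0.5 * mu * np.sum(h(z) ** 2)
        x = minimize(L, x, method="BFGS").x
        c = h(x)
        lam = lam + mu * c            # first-order multiplier update
        if np.linalg.norm(c) <= tol:  # feasible enough: stop
            break
        mu *= 10.0                    # crude penalty increase rule
    return x, lam

# Example: min x0^2 + x1^2  s.t.  x0 + x1 = 1; solution x* = (0.5, 0.5).
x, lam = augmented_lagrangian(lambda z: z @ z,
                              lambda z: np.array([z[0] + z[1] - 1.0]),
                              np.zeros(2))
```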