Gabriel Haeser

We present two new constraint qualifications (CQs) that are weaker than the recently introduced Relaxed Constant Positive Linear Dependence (RCPLD) constraint qualification. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact …
In this work we introduce a relaxed version of the constant positive linear dependence constraint qualification (CPLD), which we call RCPLD. This development is inspired by a recent generalization of the constant rank constraint qualification due to Minchenko and Stakhovski, called RCR. We show that RCPLD is enough to ensure the convergence of an …
Carathéodory’s lemma states that if we have a linear combination of vectors in R^n, we can rewrite this combination using a linearly independent subset. This lemma has been successfully applied in nonlinear optimization in many contexts. In this work we present a new version of this celebrated result, in which we obtain new bounds for the size of …
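As an illustration of the classical reduction behind the lemma (not the paper's new version or its bounds), the sketch below repeatedly eliminates one coefficient along a null-space direction until the vectors that remain are linearly independent. The function name and tolerance handling are our own; this is a minimal numerical sketch, not an implementation from the paper.

```python
import numpy as np

def caratheodory_reduce(V, alpha, tol=1e-10):
    """Rewrite x = sum_i alpha[i] * V[:, i] using a linearly
    independent subset of the columns of V.

    V     : (n, m) array whose columns are the vectors v_i
    alpha : (m,) coefficient vector
    Returns (idx, beta) with x = V[:, idx] @ beta and the
    columns V[:, idx] linearly independent.
    """
    idx = [i for i in range(V.shape[1]) if abs(alpha[i]) > tol]
    beta = np.asarray(alpha, float).copy()
    while idx:
        A = V[:, idx]
        # The active columns are independent iff the smallest
        # singular value is positive (and there are at most n of them).
        _, s, Vt = np.linalg.svd(A)
        if s.size == len(idx) and s[-1] > tol:
            break  # already linearly independent
        gamma = Vt[-1]                     # null-space direction: A @ gamma ~ 0
        j = int(np.argmax(np.abs(gamma)))  # coefficient to eliminate
        t = beta[idx[j]] / gamma[j]
        for k, i in enumerate(idx):
            beta[i] -= t * gamma[k]        # x is unchanged: A @ gamma = 0
        beta[idx[j]] = 0.0                 # exactly zero by construction
        idx = [i for i in idx if abs(beta[i]) > tol]
    return idx, beta[idx]

# Example: v3 = v1 + v2, so the representation can be shortened.
V = np.array([[1., 0., 1.],
              [0., 1., 1.]])
idx, beta = caratheodory_reduce(V, np.array([1., 2., 3.]))
# V[:, idx] @ beta still equals [4, 5], using at most 2 independent columns.
```

Each pass removes at least one vector, so the loop terminates after at most m iterations.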
In this paper we deal with optimality conditions that can be verified by a nonlinear optimization algorithm, where only a single Lagrange multiplier is available. In particular, we deal with a conjecture formulated in [R. Andreani, J. M. Martínez, M. L. Schuverdt, “On second-order optimality conditions for nonlinear programming”, Optimization, 56:529–542, 2007] …
Sequential optimality conditions have recently played an important role in the analysis of the global convergence of optimization algorithms towards first-order stationary points, justifying their stopping criteria. In this paper we introduce a sequential optimality condition that takes second-order information into account and allows us to improve the …
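For context (this is the standard first-order condition, not the paper's new second-order one, which we do not restate here), the Approximate-KKT (AKKT) condition for an inequality-constrained problem reads:

```latex
% First-order AKKT for  min f(x)  subject to  g_i(x) <= 0,  i = 1, ..., m:
% x* satisfies AKKT if there exist sequences x^k -> x* and lambda^k >= 0 with
\[
  \nabla f(x^k) + \sum_{i=1}^{m} \lambda_i^k \,\nabla g_i(x^k) \longrightarrow 0,
  \qquad
  \min\{-g_i(x^k),\ \lambda_i^k\} \longrightarrow 0, \quad i = 1, \dots, m.
\]
```

A second-order analogue additionally constrains the Hessian of the Lagrangian along the sequence; the precise form is the subject of the paper.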
Augmented Lagrangian methods with convergence to second-order stationary points, in which any constraint can be penalized or carried over to the subproblems, are considered in this work. Each subproblem can be solved by any numerical algorithm able to return approximate second-order stationary points. The global convergence theory developed is …
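To fix ideas, here is a minimal sketch of the outer loop of a Powell-Hestenes-Rockafellar augmented Lagrangian method for equality constraints, in the simple case where every constraint is penalized. It illustrates only the first-order framework; the convergence to second-order points studied in the paper requires subproblem solvers that return approximate second-order stationary points. All names and the penalty-update rule below are our own simplifications.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, rho=10.0, iters=20, tol=1e-8):
    """Sketch of an augmented Lagrangian loop for  min f(x)  s.t.  h(x) = 0.

    f : callable returning a scalar; h : callable returning an array
    of constraint values. Every constraint is penalized here; the
    framework in the paper also allows carrying constraints over to
    the subproblems instead.
    """
    x = np.asarray(x0, float)
    lam = np.zeros(len(h(x)))
    for _ in range(iters):
        # Augmented Lagrangian for the current multipliers and penalty.
        L = lambda z: f(z) + lam @ h(z) + 0.5 * rho * h(z) @ h(z)
        x = minimize(L, x).x            # approximate subproblem solve
        viol = h(x)
        if np.linalg.norm(viol) < tol:
            break
        lam = lam + rho * viol          # first-order multiplier update
        rho *= 2.0                      # simple penalty increase rule
    return x, lam

# Example: project (1, 2) onto the line x1 + x2 = 1; the solution is (0, 1).
x, lam = augmented_lagrangian(
    f=lambda z: (z[0] - 1)**2 + (z[1] - 2)**2,
    h=lambda z: np.array([z[0] + z[1] - 1.0]),
    x0=[0.0, 0.0])
```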
In this paper we investigate how to efficiently apply Approximate Karush-Kuhn-Tucker (AKKT) proximity measures as stopping criteria for optimization algorithms that do not generate approximations to Lagrange multipliers, in particular genetic algorithms. We prove that for a wide range of constrained optimization problems the KKT error measure tends to …
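One plausible way to compute such a proximity measure at a multiplier-free iterate (this is a hedged sketch under our own design choices, not the measure from the paper) is to estimate multipliers for the nearly active constraints by nonnegative least squares and combine the resulting stationarity residual with the feasibility violation:

```python
import numpy as np
from scipy.optimize import nnls

def kkt_error(grad_f, jac_g, g_val, x, eps=1e-6):
    """AKKT-style proximity measure for  min f(x)  s.t.  g_i(x) <= 0,
    at a point x produced by a multiplier-free method such as a
    genetic algorithm. All names and the aggregation rule are ours.
    """
    gf = grad_f(x)                      # (n,) gradient of the objective
    gv = g_val(x)                       # (m,) constraint values
    active = gv > -eps                  # nearly active inequalities
    if active.any():
        A = jac_g(x)[active].T          # columns: gradients of active g_i
        lam, res = nnls(A, -gf)         # min ||A @ lam + gf||  s.t.  lam >= 0
        stationarity = res
    else:
        stationarity = np.linalg.norm(gf)
    feasibility = np.linalg.norm(np.maximum(gv, 0.0))
    return max(stationarity, feasibility)
```

The value is zero exactly when x is feasible and the gradient of the objective is a nonnegative combination of the gradients of the active constraints, i.e., when the KKT conditions hold with the estimated multipliers.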