This work is concerned with the development and study of a class of limited memory preconditioners for the solution of sequences of linear systems. To this end, we consider linear systems with the same symmetric positive definite matrix and multiple right-hand sides available in sequence. We first propose a general class of preconditioners, called Limited Memory Preconditioners…
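As a rough illustration, the sketch below builds one standard limited memory preconditioner form from the literature, H = (I - S W SᵀA)(I - A S W Sᵀ) + S W Sᵀ with W = (SᵀA S)⁻¹ for a full-rank n-by-k matrix S, and reuses it across a sequence of right-hand sides in preconditioned CG. Whether this is the paper's exact definition is an assumption, and every name below (lmp, A, S) is illustrative.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def lmp(A, S):
    """Return a function applying the limited memory preconditioner
    H = (I - S W S^T A)(I - A S W S^T) + S W S^T,  W = (S^T A S)^{-1},
    built from a tall n-by-k matrix S of directions (for instance
    approximate eigenvectors, or directions from an earlier CG run)."""
    AS = A @ S
    W = np.linalg.inv(S.T @ AS)           # k-by-k, cheap for small k
    def apply(v):
        t = v - AS @ (W @ (S.T @ v))      # (I - A S W S^T) v
        t = t - S @ (W @ (AS.T @ t))      # (I - S W S^T A) t
        return t + S @ (W @ (S.T @ v))    # + S W S^T v
    return apply

n, k = 200, 5
rng = np.random.default_rng(0)
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)               # symmetric positive definite
S = rng.standard_normal((n, k))           # illustrative direction matrix
M = LinearOperator((n, n), matvec=lmp(A, S))
for b in rng.standard_normal((3, n)):     # right-hand sides in sequence
    x, info = cg(A, b, M=M)
    print(info, np.linalg.norm(A @ x - b))
```

Because the matrix A is the same for every system in the sequence, S, W, and AS can be formed once and the preconditioner reused unchanged for each new right-hand side.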
We consider the global and local convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems. In these methods, linear and more general constraints are handled in different ways. The general constraints are combined with the objective function in an augmented Lagrangian. The iteration consists of solving a…
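For reference, for equality constraints c_i(x) = 0 the augmented Lagrangian combining the general constraints with the objective f commonly takes the form

$$\Phi(x,\lambda;\mu) \;=\; f(x) \;+\; \sum_i \lambda_i\, c_i(x) \;+\; \frac{1}{2\mu} \sum_i c_i(x)^2,$$

with multiplier estimates λ and penalty parameter μ (standard notation, assumed here rather than quoted from this abstract); the linear constraints, handled separately, remain explicit constraints of the subproblem.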
A class of trust region based algorithms is presented for the solution of nonlinear optimization problems with a convex feasible set. At variance with previously published analyses of this type, the theory presented allows for the use of general norms. Furthermore, the proposed algorithms do not require the explicit computation of the projected gradient…
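The generic trust-region template that such methods refine is sketched below in its simplest unconstrained, Euclidean-norm form with a Cauchy-point step; the paper's algorithms additionally enforce a convex feasible set and admit general norms, and every name and parameter value here is an illustrative choice.

```python
import numpy as np

def trust_region(f, grad, hess, x, delta=1.0, delta_max=100.0,
                 eta=0.1, tol=1e-8, max_iter=500):
    """Basic trust-region loop: build a quadratic model, take an
    approximate model minimizer inside the region, then accept or
    reject the step and resize the region from the ratio of actual
    to predicted decrease."""
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        # Cauchy point: minimize the model along -g within the region.
        alpha = delta / np.linalg.norm(g)
        gHg = g @ H @ g
        if gHg > 0:
            alpha = min(alpha, (g @ g) / gHg)
        s = -alpha * g
        pred = -(g @ s + 0.5 * s @ H @ s)        # predicted decrease
        rho = (f(x) - f(x + s)) / pred           # agreement ratio
        if rho >= eta:                           # sufficient agreement:
            x = x + s                            # accept the step
        if rho < 0.25:
            delta *= 0.5                         # shrink the region
        elif rho > 0.75 and np.isclose(np.linalg.norm(s), delta):
            delta = min(2.0 * delta, delta_max)  # expand the region
    return x
```

For instance, trust_region(lambda x: x @ x, lambda x: 2 * x, lambda x: 2 * np.eye(x.size), np.ones(3)) recovers the origin in a few iterations.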
A class of trust-region methods is presented for solving unconstrained nonlinear and possibly nonconvex discretized optimization problems, like those arising in systems governed by partial differential equations. The algorithms in this class make use of the discretization level as a means of speeding up the computation of the step. This use is recursive…
We consider the local convergence properties of the class of augmented Lagrangian methods for solving nonlinear programming problems whose global convergence properties are analyzed by Conn et al. (1993a). In these methods, linear constraints are treated separately from more general constraints. These latter constraints are combined with the objective…
We consider the global convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems. In the proposed method, linear constraints are treated separately from more general constraints. Thus only the latter are combined with the objective function in an augmented Lagrangian. The sub-problem then consists of…
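The outer iteration of such a method can be sketched as follows, using the first-order multiplier update λ ← λ + c(x)/μ that matches the augmented Lagrangian form shown earlier. For brevity this sketch both updates the multipliers and decreases the penalty parameter at every outer iteration, whereas practical methods of this kind choose between the two based on measured constraint progress; all names and tolerances are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, lam0, mu0=1.0, tol=1e-8, max_outer=50):
    """Schematic outer loop: approximately minimize the augmented
    Lagrangian in x, then update the multiplier estimates and tighten
    the penalty parameter until the constraints are satisfied."""
    x, lam, mu = np.asarray(x0, float), np.asarray(lam0, float), mu0
    for _ in range(max_outer):
        phi = lambda y: f(y) + lam @ c(y) + 0.5 / mu * np.sum(c(y) ** 2)
        x = minimize(phi, x, method="BFGS").x   # inner subproblem
        if np.linalg.norm(c(x)) < tol:
            break
        lam = lam + c(x) / mu                   # first-order multiplier update
        mu *= 0.5                               # tighten the penalty
    return x, lam

# Minimize x0 + x1 subject to x0^2 + x1^2 - 2 = 0 (solution (-1, -1)).
x, lam = augmented_lagrangian(
    f=lambda y: y[0] + y[1],
    c=lambda y: np.array([y[0] ** 2 + y[1] ** 2 - 2.0]),
    x0=[0.5, -0.5], lam0=[0.0])
print(x, lam)
```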
It is well known that the norm of the gradient may be unreliable as a stopping test in unconstrained optimization, and that it often exhibits oscillations in the course of the optimization. In this paper we present results describing the properties of the gradient norm for the steepest descent method applied to quadratic objective functions. We also make…
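A small numerical illustration of this behavior, with an illustrative matrix and starting point rather than the paper's experiments:

```python
import numpy as np

# Steepest descent with exact line search on f(x) = 0.5 * x^T A x.
# The function values decrease monotonically, yet the gradient norm
# rises and falls, which is why it is unreliable as a stopping test.
A = np.diag([1.0, 100.0])          # ill-conditioned quadratic
x = np.array([1.0, 0.005])         # unbalanced gradient eigencomponents
for k in range(10):
    g = A @ x                      # gradient of the quadratic
    f = 0.5 * x @ A @ x
    print(f"iter {k}: f = {f:.3e}  ||g|| = {np.linalg.norm(g):.3e}")
    alpha = (g @ g) / (g @ A @ g)  # exact minimizing step length
    x = x - alpha * g
```

On this example the gradient norm oscillates up and down while f decreases at every iteration.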
Preface. Although their motivation and development originally occurred at different times, graph theory and optimization are fields of mathematics that nowadays have many connections. Early on, the use of graphs suggested intuitive approaches to both pure and applied problems. Optimization, and more precisely mathematical programming, have…
The convergence properties of trust-region methods for unconstrained nonconvex optimization are considered in the case where information on the objective function's local curvature is incomplete, in the sense that it may be restricted to a fixed set of "test directions" and may not be available at every iteration. It is shown that convergence to local "weak…