Positive Bases in Numerical Optimization

  • Ian D. Coope, C. J. Price
  • Published 1 February 2002
  • Mathematics, Computer Science
  • Computational Optimization and Applications
The theory of positive bases introduced by C. Davis in 1954 does not appear in most modern texts on linear algebra but has re-emerged in publications in optimization journals. In this paper some simple properties of this highly useful theory are highlighted and applied to both theoretical and practical aspects of the design and implementation of numerical algorithms for nonlinear optimization. 
A short proof on the cardinality of maximal positive bases
This note provides a simple proof, relying on a fundamental property of basic feasible solutions in linear programming, that the cardinality of every positive basis of R^n is bounded below by n + 1.
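To make the two abstracts above concrete, the following sketch (plain NumPy; the function names are my own) constructs the standard minimal positive basis of R^n, which attains the n + 1 lower bound, and checks that an arbitrary vector can be written as a nonnegative combination of its elements:

```python
import numpy as np

def minimal_positive_basis(n):
    """Minimal positive basis of R^n: the n coordinate vectors plus
    the negative of their sum, giving n + 1 vectors in total."""
    return np.hstack([np.eye(n), -np.ones((n, 1))])

def nonneg_coefficients(v):
    """Express v as a nonnegative combination of the columns above:
    v = sum_i (v_i + a) e_i + a * (-sum_i e_i) with a = max(0, -min_i v_i),
    so every coefficient is >= 0."""
    a = max(0.0, -v.min())
    return np.append(v + a, a)

n = 4
D = minimal_positive_basis(n)          # 4 x 5 matrix: n + 1 generators
rng = np.random.default_rng(0)
v = rng.standard_normal(n)
c = nonneg_coefficients(v)
assert (c >= 0).all()                  # nonnegative weights
assert np.allclose(D @ c, v)           # v is recovered exactly
```

Because any vector is reproducible with nonnegative weights, the n + 1 columns positively span R^n; the cited note shows no smaller set can.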
On the properties of positive spanning sets and positive bases
The concepts of positive span and positive basis are important in derivative-free optimization. In fact, a well-known result is that if the gradient of a continuously differentiable objective
Grid-based methods for linearly equality constrained optimization problems
This paper describes a direct search method for a class of linearly constrained optimization problems, which it shows can be treated as unconstrained optimization problems. And with the
A new quasi-Newton pattern search method based on symmetric rank-one update for unconstrained optimization
This paper proposes a new robust and quickly convergent pattern search method based on an implementation of the OCSSR1 (Optimal Conditioning Based Self-Scaling Symmetric Rank-One) algorithm that is competitive with some other derivative-free methods.
Two minimal positive bases based direct search conjugate gradient methods for computationally expensive functions
  • Qunfeng Liu
  • Mathematics, Computer Science
    Numerical Algorithms
  • 2011
Two direct search methods for computationally expensive functions are proposed, based on minimal positive bases, the Coope–Price frame-based direct search framework, and a recently developed descent conjugate gradient method to accelerate convergence.
A deterministic algorithm to compute the cosine measure of a finite positive spanning set
In this paper, a deterministic algorithm to compute the cosine measure of any positive basis or finite positive spanning set is provided and is proven to return the exact value of the cosine measure in finite time.
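The cosine measure cm(D) = min over unit vectors u of max over d in D of u·d/‖d‖ quantifies how well a set covers all directions. The sketch below estimates it by sampling unit vectors; this is only a brute-force approximation for illustration, not the exact deterministic algorithm of the cited paper:

```python
import numpy as np

def cosine_measure_estimate(D, n_samples=200_000, seed=0):
    """Monte Carlo estimate of cm(D) = min_{|u|=1} max_{d in D} u.d / |d|.
    An approximation from random unit vectors u; the cited paper computes
    the exact value deterministically."""
    rng = np.random.default_rng(seed)
    Dn = D / np.linalg.norm(D, axis=0)               # normalize columns of D
    U = rng.standard_normal((n_samples, D.shape[0]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)    # random unit directions
    return (U @ Dn).max(axis=1).min()

# Maximal positive basis [I, -I] in R^2: the exact cosine measure is
# 1/sqrt(2) ~= 0.7071, attained along the diagonal directions.
D = np.hstack([np.eye(2), -np.eye(2)])
est = cosine_measure_estimate(D)
print(round(est, 3))
```

Sampling only approaches the minimum from above, so the estimate slightly overshoots the true value; the paper's algorithm removes this gap.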
Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods
This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited, then turns to a broad class of methods for which the underlying principles allow generalization to handle bound constraints and linear constraints.
Uniform simplex of an arbitrary orientation
It is proved that a uniform simplex has the greatest normalized volume of any simplex and it is shown how to create a uniform minimal positive basis from a uniform simplex.
Nonlinear programming by mesh adaptive direct searches
This paper is intended not as a survey, but as an introduction to some ideas behind the class of mesh adaptive direct search (MADS) methods. Space limitations dictate a brief description of various
Introduction: Tools and Challenges in Derivative-Free and Blackbox Optimization
In this introductory chapter, a high-level description of optimization, blackbox optimization, and derivative-free optimization is presented, and some basic optimization notation used throughout this book is introduced.


Numerical Optimization (Springer Series in Operations Research and Financial Engineering)
Numerical Optimization is a graduate-level text on continuous optimization that treats the design and analysis of algorithms extensively, with attention to their practical performance.
A unified convergence theorem is given for a class of direct search techniques in nonlinear programming. This class of techniques is defined as the descent method with fixed step size. It has the
A direct search conjugate directions algorithm for unconstrained minimization
A direct search algorithm for unconstrained minimization of smooth functions is described. The algorithm minimizes the function over a sequence of successively finer grids. Each grid is defined by a
We present two new classes of pattern search algorithms for unconstrained minimization: the rank ordered and the positive basis pattern search methods. These algorithms can nearly halve the worst
On the Convergence of Grid-Based Methods for Unconstrained Optimization
The convergence of direct search methods for unconstrained minimization is examined in the case where the underlying method can be interpreted as a grid or pattern search over successively refined
Frame Based Methods for Unconstrained Optimization
This paper describes a wide class of direct search methods for unconstrained optimization, which make use of fragments of grids called frames. Convergence is shown under mild conditions which allow
Pattern Search Algorithms for Bound Constrained Minimization
This work proves global convergence despite the fact that pattern search methods do not have explicit information concerning the gradient and its projection onto the feasible region and consequently are unable to enforce explicitly a notion of sufficient feasible decrease.
A Convergent Variant of the Nelder–Mead Algorithm
The Nelder–Mead algorithm (1965) for unconstrained optimization has been used extensively to solve parameter estimation and other problems. Despite its age, it is still the method of choice for many
"Direct Search" Solution of Numerical and Statistical Problems
The phrase "direct search" is used to describe sequential examination of trial solutions involving comparison of each trial solution with the "best" obtained up to that time together with a strategy for determining (as a function of earlier results) what the next trial solution will be.
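The strategy described above can be sketched as a minimal coordinate (compass) search: poll the 2n trial points x ± step·e_i, move to the first improving one, and shrink the step when none improves. This is a simplified illustration in the spirit of the idea, not Hooke and Jeeves' exact pattern-move algorithm:

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, shrink=0.5):
    """Direct search over the maximal positive basis {+-e_i}: compare each
    trial point with the best found so far, accept the first improvement,
    and refine the step size when no trial improves."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = len(x)
    while step > tol:
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):   # poll directions
            trial = x + step * d
            ft = f(trial)
            if ft < fx:                                # first improvement
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= shrink                             # refine the mesh
    return x, fx

# Hypothetical test problem: a convex quadratic with minimizer (1, -2).
x, fx = compass_search(lambda v: (v[0] - 1) ** 2 + 4 * (v[1] + 2) ** 2,
                       [0.0, 0.0])
print(np.round(x, 3))   # converges to the minimizer [1, -2]
```

Because the poll directions form a positive spanning set, at least one of them is a descent direction whenever the gradient is nonzero, which is the observation underlying the convergence results surveyed above.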
Testing Unconstrained Optimization Software
A relatively large but easy-to-use collection of test functions is presented, together with guidelines for testing the reliability and robustness of unconstrained optimization software.