It is shown how standard iterative methods for solving linear and nonlinear equations can be approached from the point of view of control. Appropriate choices of control Liapunov functions lead to both continuous and discrete-time versions of the well-known Newton-Raphson and conjugate gradient algorithms as well as their common variants. Insights into …
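A minimal sketch of the control viewpoint described above: the classical Newton-Raphson iteration can be read as an Euler discretization of the continuous Newton flow, with the step size playing the role of a control gain. The function names, step size, and test problem below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def damped_newton(f, J, x0, alpha=1.0, tol=1e-10, max_iter=100):
    """Discrete-time Newton flow: x_{k+1} = x_k - alpha * J(x_k)^{-1} f(x_k).

    alpha = 1 recovers classical Newton-Raphson; alpha < 1 gives a damped
    variant, i.e. an Euler step on the continuous flow x' = -J(x)^{-1} f(x),
    with alpha acting as a (here constant) control gain.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - alpha * np.linalg.solve(J(x), fx)
    return x

# Illustrative problem: solve x^2 = 2, y^2 = 3 simultaneously.
f = lambda v: np.array([v[0] ** 2 - 2.0, v[1] ** 2 - 3.0])
J = lambda v: np.diag([2.0 * v[0], 2.0 * v[1]])
root = damped_newton(f, J, [1.0, 1.0])
```

With alpha = 1 this converges quadratically from the chosen starting point; smaller gains trade speed for a larger region of convergence, which is the kind of trade-off a control Liapunov function makes explicit.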
It is pointed out that the so-called momentum method, much used in the neural network literature as an acceleration of the backpropagation method, is a stationary version of the conjugate gradient method. Connections with the continuous optimization method known as heavy ball with friction are also made. In both cases, adaptive (dynamic) choices of the so…
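The stationarity claim above can be illustrated with a short sketch: the momentum (heavy-ball) update uses fixed coefficients where conjugate gradient would recompute them at every step. The learning rate, momentum value, and quadratic test problem are assumptions chosen for illustration.

```python
import numpy as np

def heavy_ball(grad, x0, lr=0.1, beta=0.9, iters=500):
    """Momentum / heavy-ball iteration:
        x_{k+1} = x_k - lr * grad(x_k) + beta * (x_k - x_{k-1}).

    lr and beta are held fixed ("stationary"); the conjugate gradient
    method instead adapts both coefficients at every iteration.
    """
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x, x_prev = x - lr * grad(x) + beta * (x - x_prev), x
    return x

# Minimize the quadratic 0.5 * x^T A x - b^T x (gradient is A x - b).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
xmin = heavy_ball(lambda x: A @ x - b, np.zeros(2))
```

On this quadratic the exact minimizer solves A x = b, i.e. x = (0.2, 0.4); the fixed-coefficient iteration reaches it, just without the finite-termination property CG gets from its adaptive coefficients.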
In the most basic application of Ant Colony Optimization (ACO), a set of artificial ants find the shortest path between a source and a destination. Ants deposit pheromone on paths they take, preferring paths that have more pheromone on them. Since shorter paths are traversed faster, more pheromone accumulates on them in a given time, attracting more ants …
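The positive-feedback mechanism described above can be sketched on the smallest possible instance: two parallel paths of different lengths. The deposit rule (1/length), evaporation rate, and ant count below are illustrative assumptions, not the paper's parameter choices.

```python
import random

def aco_two_paths(lengths=(1.0, 2.0), n_ants=100, n_iter=50, rho=0.1, seed=0):
    """Toy ACO on two parallel source-to-destination paths.

    Each ant picks a path with probability proportional to its pheromone
    level tau, then deposits 1/length on the path it took. Pheromone
    evaporates at rate rho each iteration. The shorter path receives a
    larger deposit, so its pheromone grows and attracts ever more ants.
    """
    random.seed(seed)
    tau = [1.0, 1.0]
    for _ in range(n_iter):
        deposit = [0.0, 0.0]
        for _ in range(n_ants):
            p0 = tau[0] / (tau[0] + tau[1])
            i = 0 if random.random() < p0 else 1
            deposit[i] += 1.0 / lengths[i]
        tau = [(1 - rho) * t + d for t, d in zip(tau, deposit)]
    return tau

tau = aco_two_paths()  # pheromone concentrates on the shorter path (index 0)
```

Evaporation (the rho term) is what keeps the scheme from locking in early random fluctuations: old pheromone decays, so the steady state reflects the deposit rates rather than the initial conditions.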
This paper formalizes a general technique to combine different methods in the solution of large systems of nonlinear equations using parallel asynchronous implementations on distributed-memory multiprocessor systems. Such combinations of methods, referred to as Team Algorithms, are evaluated as a way of obtaining desirable properties of different methods …
A gradient system with discontinuous right-hand side that solves an underdetermined system of linear equations in the L1 norm is presented. An upper bound estimate for finite-time convergence to a solution set of the system of linear equations is shown by means of the Persidskii form of the gradient system and the corresponding non-smooth diagonal-type …
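One common discontinuous gradient system for L1 problems is the sign-gradient flow x' = -Aᵀ sign(A x - b), a subgradient flow for ||A x - b||₁; the Euler discretization below is a sketch of that general idea under assumed step sizes and test data, not a reproduction of the paper's Persidskii-form analysis.

```python
import numpy as np

def l1_sign_flow(A, b, x0, step=1e-3, iters=20000):
    """Euler discretization of the discontinuous flow x' = -A^T sign(A x - b).

    sign(A x - b) is a subgradient of ||A x - b||_1, so the trajectory
    drives the L1 residual toward zero; with a fixed step the discrete
    iterate chatters in an O(step)-neighborhood of the solution set.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * A.T @ np.sign(A @ x - b)
    return x

# Underdetermined example: 2 equations, 3 unknowns (hypothetical data).
A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
x = l1_sign_flow(A, b, np.zeros(3))
residual = np.linalg.norm(A @ x - b, 1)
```

The discontinuity is the point: because the right-hand side has constant magnitude away from the solution set, the continuous-time flow reaches it in finite time, which is what the Liapunov argument in the abstract quantifies.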
It is shown that the assumption of D-stability of the interconnection matrix, together with the standard assumptions on the activation functions, guarantee the existence of a unique equilibrium under a synchronous mode of operation as well as a class of asynchronous modes. For the synchronous mode, these assumptions are also shown to imply local asymptotic …
The standard conjugate gradient (CG) method uses orthogonality of the residues to simplify the formulas for the parameters necessary for convergence. In adaptive filtering, the sample-by-sample update of the correlation matrix and the cross-correlation vector causes a loss of the residue orthogonality in a modified online algorithm, which, in turn, …
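For reference, the standard batch CG iteration whose coefficient formulas depend on residue orthogonality looks roughly as follows; the test matrix and tolerances are illustrative assumptions, and the online adaptive-filtering variant the abstract discusses is not reproduced here.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10):
    """Standard CG for A x = b, A symmetric positive definite.

    The simple formulas for alpha and beta below are valid because the
    residues r_k are mutually orthogonal. In an online setting where A
    (the correlation matrix) and b (the cross-correlation vector) change
    every sample, that orthogonality is lost, so these formulas no longer
    apply as-is.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p    # beta = rs_new / rs uses orthogonality
        rs = rs_new
    return x

# Small SPD example (hypothetical data): exact solution in <= 2 steps.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic CG terminates in at most n steps on an n-by-n system; the online modification the abstract refers to must recompute the step parameters without relying on that property.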