It is well known that high-gain observers exist for nonlinear systems that are uniformly observable and globally Lipschitz. Under the same conditions, we show that these systems admit semi-global, finite-time converging observers. This is achieved through the derivation of a new sufficient condition for local finite-time stability, in conjunction with …
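As background for this entry, a standard high-gain observer can be sketched on a toy two-state system (this illustrates the classical exponentially converging construction the abstract starts from, not the finite-time design it proposes; the plant, gains a1, a2, and the small parameter eps are illustrative choices):

```python
import math

def high_gain_observer_demo(eps=0.1, dt=1e-3, steps=5000):
    """Simulate x1' = x2, x2' = -sin(x1) (globally Lipschitz) with
    output y = x1, alongside a high-gain observer whose output-injection
    gains scale as 1/eps and 1/eps**2."""
    a1, a2 = 2.0, 1.0          # places the scaled error poles at -1, -1
    x1, x2 = 1.0, 0.0          # true state
    z1, z2 = 0.0, 0.0          # observer state
    for _ in range(steps):
        e = x1 - z1            # output injection error, y = x1
        dx1, dx2 = x2, -math.sin(x1)
        dz1 = z2 + (a1 / eps) * e
        dz2 = -math.sin(z1) + (a2 / eps**2) * e
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
        z1, z2 = z1 + dt * dz1, z2 + dt * dz2
    return abs(x1 - z1), abs(x2 - z2)

e1, e2 = high_gain_observer_demo()
print(e1, e2)   # both estimation errors are small after 5 s
```

Note that the error here decays only exponentially; shrinking eps speeds it up but never makes the convergence time finite, which is the gap the finite-time observers above address.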
In this paper, a global finite-time observer is designed for a class of nonlinear systems satisfying non-Lipschitz conditions. Compared with previous results, the observer designed here uses a new gain-update law. Two examples show that the proposed observer shortens the convergence time of the observation error.
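The convergence-time reduction that finite-time designs offer can be seen on a scalar toy comparison (this only illustrates the notion of finite-time versus exponential convergence, not the observer of the paper): the non-Lipschitz flow x' = -sign(x)|x|^(1/2) reaches zero exactly at t = 2*sqrt(x(0)), whereas the linear flow x' = -x only decays exponentially.

```python
def euler(f, x0, dt=1e-4, steps=25000):
    """Forward-Euler integration of x' = f(x) over 2.5 s."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

def sgn(x):
    return (x > 0) - (x < 0)

# Finite-time flow: hits zero at t = 2 and stays there.
x_ft = euler(lambda x: -sgn(x) * abs(x) ** 0.5, 1.0)
# Exponential flow: still at roughly exp(-2.5) ~ 0.082 at t = 2.5.
x_exp = euler(lambda x: -x, 1.0)

print(x_ft, x_exp)
```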
This paper presents global and local finite-time synchronization control laws for memristor-based neural networks. By utilizing the drive-response concept, differential inclusion theory, and the Lyapunov functional method, we establish several sufficient conditions for finite-time synchronization between the master and the corresponding slave memristor-based neural …
In this paper, finite-time dual neural networks with a new activation function are presented for solving quadratic programming problems. The activation function has two tunable parameters, which give more flexibility in designing the neural network. Finite-time stability of the proposed neural network model is derived via Lyapunov theory, and the actual …
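The authors' dual network is not reproduced here, but the general idea of solving a QP with a convergent ODE can be sketched with a simple projection-type network for a box-constrained QP (the problem data, step sizes, and projection dynamics below are illustrative assumptions, not the paper's model):

```python
def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def projection_network_qp(dt=0.01, alpha=0.1, steps=20000):
    """Solve min 0.5*x'Qx + c'x  s.t. 0 <= x_i <= 2 with the
    projection dynamics  x' = P(x - alpha*(Qx + c)) - x,
    where P clamps each coordinate onto the box."""
    Q = [[2.0, 0.0], [0.0, 2.0]]
    c = [-2.0, -6.0]          # unconstrained minimizer is (1, 3)
    x = [0.0, 0.0]
    for _ in range(steps):
        g = [Q[i][0] * x[0] + Q[i][1] * x[1] + c[i] for i in range(2)]
        p = [clamp(x[i] - alpha * g[i], 0.0, 2.0) for i in range(2)]
        x = [x[i] + dt * (p[i] - x[i]) for i in range(2)]
    return x

x = projection_network_qp()
print(x)   # approaches the constrained solution (1, 2)
```

At an equilibrium, x = P(x - alpha*(Qx + c)) is exactly the KKT condition for the box-constrained QP, which is why the network's fixed point is the optimizer; this variant converges only asymptotically, whereas the paper's activation function yields finite-time convergence.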
This paper investigates finite-time stability and its application to solving the time-varying Sylvester equation by a recurrent neural network. First, a new finite-time stability criterion is given, and a less conservative upper bound on the convergence time is derived. Second, a sign-bi-power activation function with a linear term is presented for the …
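A sign-bi-power activation with an added linear term can be illustrated on scalar error dynamics e' = -phi(e) (the exponent, gain, and step size below are illustrative; the paper applies the activation inside a recurrent network for the Sylvester equation):

```python
def sgn(e):
    return (e > 0) - (e < 0)

def sbp(e, r=0.5, k=1.0):
    """Sign-bi-power activation with a linear term:
    phi(e) = sign(e) * (|e|**r + |e|**(1/r)) + k*e."""
    return sgn(e) * (abs(e) ** r + abs(e) ** (1.0 / r)) + k * e

def settle(f, e0, dt=1e-4, steps=30000):
    """Forward-Euler integration of e' = -f(e) over 3 s."""
    e = e0
    for _ in range(steps):
        e -= dt * f(e)
    return e

# The |e|**(1/r) term crushes large errors, the |e|**r term finishes
# the job in finite time near zero; a purely linear activation from
# the same start is still at about 10*exp(-3) ~ 0.5 after 3 s.
e_sbp = settle(sbp, 10.0)
e_lin = settle(lambda e: e, 10.0)
print(e_sbp, e_lin)
```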
This paper presents a modified neural network structure with a tunable activation function and provides a new learning algorithm for training it. Simulation results on the XOR problem, the Feigenbaum function, and the Hénon map show that the new algorithm outperforms the back-propagation (BP) algorithm in terms of shorter convergence time …