Xiaoshuai Ding

This paper is concerned with the fixed-time synchronization of a class of complex-valued neural networks in the presence of discontinuous activation functions and parameter uncertainties. Fixed-time synchronization not only requires that the considered master-slave system achieve synchronization within a finite time, but also demands a uniform …
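For context, a standard formulation of fixed-time synchronization (a common definition, not necessarily the exact one used in the paper) requires the settling time of the master-slave error e(t) = y(t) − x(t) to be bounded uniformly in the initial conditions:

```latex
% Fixed-time synchronization of the error e(t) = y(t) - x(t):
% there exists T_max > 0, independent of e(0), such that
\lim_{t \to T(e(0))} \|e(t)\| = 0, \qquad e(t) \equiv 0 \ \text{for } t \ge T(e(0)),
\qquad T(e(0)) \le T_{\max} \ \text{for every } e(0).
```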
The analysis of finite-time stability for a class of fractional-order complex-valued neural networks with delays is considered in this paper. Utilizing the Gronwall inequality, the Cauchy-Schwarz inequality, and inequality scaling techniques, sufficient conditions guaranteeing the finite-time stability of the system are derived respectively under two …
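As a point of reference, the classical (integer-order) Gronwall inequality that underlies estimates of this type reads as follows; the paper presumably applies a generalized or fractional variant suited to the delayed system:

```latex
% Classical Gronwall inequality (constant bound a >= 0, kernel b(s) >= 0):
u(t) \le a + \int_{t_0}^{t} b(s)\, u(s)\, ds \ \ (t \ge t_0)
\quad \Longrightarrow \quad
u(t) \le a \exp\!\left( \int_{t_0}^{t} b(s)\, ds \right).
```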
Recurrent neural networks have been used for the analysis and prediction of time series. This paper is concerned with the convergence of the gradient descent algorithm for training diagonal recurrent neural networks. The existing convergence results consider the online gradient training algorithm based on the assumption that a very large number of (or …
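For readers unfamiliar with the architecture, a minimal sketch of a diagonal recurrent neural network is given below, assuming the common structure in which each hidden unit has only a self-recurrent connection (so the recurrent weight matrix is diagonal); the variable names and training details are illustrative and not taken from the paper.

```python
import numpy as np

def drnn_forward(x_seq, W_in, w_rec, W_out):
    """Forward pass of a diagonal recurrent neural network (DRNN).

    W_in  : (hidden, input) input weights
    w_rec : (hidden,) self-recurrent weights (the diagonal of the recurrent matrix)
    W_out : (output, hidden) output weights
    """
    h = np.zeros(w_rec.shape[0])
    outputs = []
    for x_t in x_seq:
        # Diagonal recurrence: the recurrent term is an elementwise product.
        h = np.tanh(W_in @ x_t + w_rec * h)
        outputs.append(W_out @ h)
    return np.array(outputs)

# Training minimizes a squared error over the sequence by gradient descent on
# (W_in, w_rec, W_out); a gradient iteration of this kind is the object of the
# paper's convergence analysis.
```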
In order to improve the accuracy of fault diagnosis based on SVMs, an improved support vector domain description algorithm (ISVDD) is proposed to preprocess the fault data. ISVDD constructs a recognizer of fault data by introducing an optimal sphere instead of the minimum sphere. The recognizer can sift out fault data belonging to new, unknown fault …
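ISVDD itself is the authors' method and is not reproduced here; as a rough reference point only, classical SVDD with an RBF kernel is closely related to the one-class SVM, so a baseline recognizer of known fault data could be sketched as follows (synthetic data, illustrative parameters):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Baseline sketch: a one-class SVM as a stand-in for classical SVDD.
# This is NOT the paper's ISVDD, which replaces the minimum enclosing
# sphere with an optimal sphere.
rng = np.random.default_rng(0)
known_fault_data = rng.normal(loc=0.0, scale=1.0, size=(200, 4))   # synthetic stand-in

recognizer = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
recognizer.fit(known_fault_data)

new_samples = rng.normal(loc=3.0, scale=1.0, size=(5, 4))           # possible new fault mode
print(recognizer.predict(new_samples))   # -1 marks samples outside the learned region
```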
This paper is concerned with the drive-response synchronization of a class of fractional-order bidirectional associative memory neural networks with time delays and discontinuous activation functions. The global existence of solutions in the sense of Filippov for such networks is first obtained based on the fixed-point …
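The fractional derivative in such models is typically taken in the Caputo sense; for orientation (and assuming an order 0 < α < 1, which may differ from the paper's setting), the standard definition is:

```latex
% Caputo fractional derivative of order alpha in (0, 1):
{}^{C}\!D^{\alpha} x(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{x'(s)}{(t-s)^{\alpha}}\, ds .
```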
This paper investigates a gradient descent algorithm with a penalty term for recurrent neural networks. The penalty considered here is a term proportional to the norm of the weights; its primary role in the method is to control the magnitude of the weights. After proving that all of the weights remain automatically bounded during the iteration process, we also …
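A minimal sketch of one such penalized gradient step is given below, assuming a squared-norm penalty (λ/2)‖w‖², whose gradient contribution is λw; the paper's exact penalty form and network structure may differ.

```python
import numpy as np

def penalized_step(w, grad_loss, lr=0.01, lam=1e-3):
    """One gradient-descent step with a squared-norm weight penalty.

    The penalty gradient lam * w pulls the weights toward zero, which is
    what keeps their magnitude bounded during the iteration.
    """
    return w - lr * (grad_loss + lam * w)

w = np.array([0.5, -1.2, 3.0])
grad_of_error_term = np.array([0.1, -0.2, 0.05])   # placeholder gradient of the error term
w = penalized_step(w, grad_of_error_term)
print(w)
```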