Paolo Campolucci

This paper focuses on on-line learning procedures for locally recurrent neural networks, with emphasis on the multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations, which include generalized output and activation feedback multilayer networks (MLNs). We propose a new gradient-based procedure called recursive backpropagation…
In this paper, a new complex-valued neural network based on adaptive activation functions is proposed. By varying the control points of a pair of Catmull–Rom cubic splines, which are used as an adaptable activation function, this new kind of neural network can be implemented as a very simple structure that is able to improve the generalization capabilities…
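The building block behind such an adaptable activation function can be sketched as follows. This is an illustrative evaluation of one uniform Catmull–Rom cubic span, not the paper's exact complex-valued formulation; the function name and argument layout are assumptions.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a uniform Catmull-Rom cubic between p1 and p2 at t in [0, 1].

    In an adaptive-spline activation function the control points p0..p3
    would be trainable parameters local to the spline span containing t.
    """
    return 0.5 * (
        2.0 * p1
        + (-p0 + p2) * t
        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
        + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3
    )
```

With equally spaced collinear control points the span reduces to linear interpolation between p1 and p2, which makes the formula easy to sanity-check; adapting the control points bends the curve away from that line.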
Linear recursive filters can be adapted on-line, but doing so raises instability problems. Stability-control techniques exist, but they are either computationally expensive or not robust. For the nonlinear case, e.g., locally recurrent neural networks, the stability of infinite impulse response (IIR) synapses is often a condition to be satisfied. This brief considers…
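For low-order IIR synapses a closed-form stability test exists. The sketch below checks the standard stability-triangle condition for a second-order recursion; it illustrates the kind of constraint at stake, and is not the brief's own stability-control technique.

```python
def is_stable_second_order(a1, a2):
    """Stability test for the feedback part y[n] = a1*y[n-1] + a2*y[n-2] + ...

    Both poles of 1 / (1 - a1*z^-1 - a2*z^-2) lie strictly inside the unit
    circle iff (a1, a2) lies inside the so-called stability triangle.
    """
    return abs(a2) < 1.0 and abs(a1) < 1.0 - a2
```

A cheap per-update check like this is what makes projecting adapted coefficients back into the stable region feasible on-line, at least for low filter orders.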
In this paper we derive two second-order algorithms, based on conjugate gradient, for on-line training of recurrent neural networks. These algorithms use two different techniques to extract second-order information on the Hessian matrix without calculating or storing it and without making numerical approximations. Several simulation results for non-linear…
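One common way to use curvature information without ever forming the Hessian is to work only with Hessian-vector products. The abstract states its techniques avoid numerical approximation, so the finite-difference sketch below is a simpler stand-in that conveys the idea; the function names and the quadratic example are mine.

```python
def hessian_vector_product(grad, w, v, eps=1e-6):
    """Approximate H(w) @ v as (grad(w + eps*v) - grad(w)) / eps.

    Only two gradient evaluations are needed; the Hessian matrix H is
    never built or stored, which is what makes conjugate-gradient-style
    second-order training feasible for large parameter vectors.
    """
    g0 = grad(w)
    g1 = grad([wi + eps * vi for wi, vi in zip(w, v)])
    return [(a - b) / eps for a, b in zip(g1, g0)]

def quad_grad(w):
    # Gradient of the test cost E(w) = w1^2 + 2*w2^2; its Hessian is diag(2, 4).
    return [2.0 * w[0], 4.0 * w[1]]

hv = hessian_vector_product(quad_grad, [0.3, -0.7], [1.0, 1.0])
```

For this quadratic cost the gradient is linear in w, so the finite difference recovers the exact product diag(2, 4) @ v up to rounding.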
This paper concerns dynamic neural networks for signal processing: architectural issues are considered, but the paper focuses on learning algorithms that work on-line. Locally recurrent neural networks, namely MLPs with IIR synapses and generalizations of Local Feedback MultiLayered Networks (LF MLN), are compared to more traditional neural networks, i.e.…
In this paper, making use of the Signal-Flow-Graph (SFG) representation and its known properties, we derive a new general method for backward gradient computation of a system output or cost function with respect to past (or present) system parameters. The system can be any causal, in general non-linear and time-variant, dynamic system represented by an SFG,…
This paper is focused on learning algorithms for dynamic multilayer perceptron neural networks in which each neuron synapse is modelled by an infinite impulse response (IIR) filter (IIR MLP). In particular, the Backpropagation Through Time (BPTT) algorithm and its less demanding approximated on-line versions are considered. In fact, it is known that the…
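The synapse model the abstract refers to can be sketched directly: instead of a single scalar weight, each connection runs a small recursive filter over the incoming signal. A minimal forward pass, assuming a direct form with feedforward taps b and feedback taps a (the function name and coefficient layout are mine):

```python
def iir_synapse(x, b, a):
    """Run one IIR synapse over an input sequence x:

        y[n] = sum_k b[k]*x[n-k] + sum_k a[k]*y[n-1-k]

    Each synapse of an IIR MLP neuron carries its own trainable (b, a).
    """
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc += sum(a[k] * y[n - 1 - k] for k in range(len(a)) if n - 1 - k >= 0)
        y.append(acc)
    return y
```

The feedback taps make the output depend on the entire input history, which is precisely why exact gradients require BPTT and why truncated on-line approximations of it are attractive.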
In this paper we propose a new learning algorithm for locally recurrent neural networks, called Truncated Recursive Back Propagation, which can be easily implemented on-line with good performance. Moreover, it generalises the algorithm proposed by Waibel et al. for TDNNs, and includes the Back and Tsoi algorithm as well as BPS and standard on-line Back…
In this paper, we derive a new general method for both on-line and off-line backward gradient computation of a system output, or cost function, with respect to system parameters, using a circuit theoretic approach. The system can be any causal, in general nonlinear and time-variant, dynamic system represented by a Signal Flow Graph, in particular any…
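For the simplest dynamic system, a first-order linear recursion, the backward gradient computation that such methods generalize can be shown in a few lines: run the system forward, then sweep an adjoint variable backward through time. This is an illustrative BPTT-style sketch under that assumed toy system, not the paper's general SFG construction.

```python
def forward(x, a):
    """Toy dynamic system y[n] = a*y[n-1] + x[n], with y[-1] = 0."""
    y, prev = [], 0.0
    for xn in x:
        prev = a * prev + xn
        y.append(prev)
    return y

def grad_output_wrt_a(x, a):
    """d y[N-1] / d a via a backward (adjoint) sweep.

    At each step the adjoint lam picks up y[n-1], the local derivative of
    a*y[n-1] with respect to a, and is scaled by a as it moves one step
    further back in time.
    """
    y = forward(x, a)
    lam, grad = 1.0, 0.0
    for n in range(len(x) - 1, -1, -1):
        y_prev = y[n - 1] if n > 0 else 0.0
        grad += lam * y_prev
        lam *= a
    return grad
```

The backward sweep costs the same order of work as the forward pass, which is the practical appeal of gradient computation on the (reversed) flow graph.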
In this paper, we study the properties of neural networks based on adaptive spline activation functions (ASNN). Using the results of regularization theory, we show how the proposed architecture is able to produce smooth approximations of unknown functions; to reduce hardware complexity a particular implementation of the kernels expected by the theory is…