Sultan Uddin Ahmed

Chaos appears in many natural and artificial systems; accordingly, we propose a method that injects chaos into a supervised feed-forward neural network (NN). The chaos is injected simultaneously into the learnable temperature coefficient of the sigmoid activation function and into the weights of the NN. This is functionally different from the idea of noise…
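A minimal sketch of chaos injection of this kind, assuming a logistic-map chaos source; the map parameter, the scaling factor 0.1, and all variable names below are illustrative, not the paper's actual formulation:

```python
import numpy as np

def logistic_map(x, r=3.9):
    # Logistic map in its chaotic regime (r near 4).
    return r * x * (1.0 - x)

def sigmoid(z, T=1.0):
    # Sigmoid with a temperature coefficient T.
    return 1.0 / (1.0 + np.exp(-z / T))

# Hypothetical update: perturb both the temperature and a weight
# with the same chaotic sequence at each training step.
x = 0.31          # seed of the chaotic sequence
T, w = 1.0, 0.5   # temperature and one example weight
for _ in range(10):
    x = logistic_map(x)
    T_chaotic = T + 0.1 * (x - 0.5)   # chaos in the temperature
    w_chaotic = w + 0.1 * (x - 0.5)   # chaos in the weight
```

The logistic map keeps the perturbation bounded while remaining aperiodic, which distinguishes it from injecting random noise.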
This paper presents a pruning method for artificial neural networks (ANNs) based on the 'Lempel-Ziv complexity' (LZC) measure. We call this method the 'silent pruning algorithm' (SPA). The term 'silent' is used in the sense that SPA prunes ANNs without causing much disturbance during network training. SPA prunes hidden units during the training process…
In this paper, a faster supervised algorithm (BPfast) for neural network training is proposed that maximizes the derivative of the sigmoid activation function during back-propagation (BP) training. BP adjusts the weights of a neural network by minimizing an error function. Due to the presence of derivative information in the weight update rule, BP goes to…
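The derivative in question is f'(z) = f(z)(1 − f(z)) for the sigmoid f: it peaks at z = 0 with value 0.25 and vanishes for large |z|, which is the saturation that slows plain BP. A small illustration of this shape (not the BPfast update rule itself):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_deriv(z):
    # f'(z) = f(z) * (1 - f(z))
    s = sigmoid(z)
    return s * (1.0 - s)

# The derivative is largest at z = 0 and nearly zero in the
# saturated tails, where weight updates stall.
peak = sigmoid_deriv(0.0)        # 0.25
tail = sigmoid_deriv(6.0)        # close to zero
```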
The multilayer feed-forward neural network is widely used and is trained by minimizing an error function. Back-propagation is a well-known training method for multilayer networks, but it often suffers from local minima and slow convergence. These problems arise from the gradient behavior of the commonly used sigmoid activation function (SAF)…
A neural representation of combinational logic circuits, called the ‘Logical Neural Network’ (LNN), is proposed. The LNN is a feed-forward neural network (NN) whose weights indicate the connections of the digital circuit. The logic operations of the circuit, such as AND, OR, and NOR, are performed by the neurons of the LNN. A modification of…
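One way such logic neurons can be realized is with threshold units whose weights and biases encode the gate; the particular weight and bias values below are hypothetical choices for illustration, not the paper's LNN construction:

```python
def neuron(inputs, weights, bias):
    # Step-activation neuron: fires when the weighted sum exceeds 0.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Hypothetical weight/bias settings realizing basic two-input gates.
AND = lambda a, b: neuron([a, b], [1, 1], -1.5)
OR  = lambda a, b: neuron([a, b], [1, 1], -0.5)
NOR = lambda a, b: neuron([a, b], [-1, -1], 0.5)
```

For example, `AND(1, 1)` fires because 1 + 1 − 1.5 > 0, while `AND(1, 0)` does not.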
It is important to study the neural network (NN) when it falls into chaos, because brain dynamics involve chaos. In this paper, several chaotic behaviors of supervised neural networks are studied using the Hurst exponent (H), fractal dimension (FD), and bifurcation diagrams. The update rule for an NN trained with the back-propagation (BP) algorithm absorbs the…
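A bifurcation diagram of the kind mentioned is typically built by iterating a map, discarding the transient, and recording the attractor for each parameter value. A sketch using the logistic map as a stand-in dynamical system (all parameters illustrative):

```python
def bifurcation_points(r, n_transient=200, n_keep=50, x0=0.4):
    # Iterate the logistic map, discard the transient, keep the attractor.
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    pts = []
    for _ in range(n_keep):
        x = r * x * (1.0 - x)
        pts.append(x)
    return pts

# For r = 2.8 the attractor collapses to a fixed point;
# near r = 3.9 the kept points spread over a wide chaotic band.
fixed = bifurcation_points(2.8)
chaotic = bifurcation_points(3.9)
```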
Local minima are an inherent problem in neural network (NN) training. To alleviate this problem, a modification of the standard backpropagation (BP) algorithm, called BPCL, is proposed for training NNs. When the training reaches a local minimum, the weights of the NN become idle. If a chaotic variation of the learning rate (LR) is included during training, the weights…
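A hedged sketch of a chaotically varied learning rate, again assuming a logistic-map source; the scaling `0.5 + x` and every constant here are illustrative, not the BPCL rule itself:

```python
def chaotic_lr(base_lr=0.1, r=3.9, x0=0.37, steps=20):
    # Vary the learning rate with a logistic-map sequence so the weights
    # keep moving even when gradient descent stalls in a local minimum.
    x, lrs = x0, []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        lrs.append(base_lr * (0.5 + x))   # hypothetical scaling
    return lrs
```

Because the map stays inside (0, 1), the rate remains bounded between 0.5× and 1.5× the base value while never settling into a fixed cycle.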
This paper presents a Lempel-Ziv complexity (LZC) based pruning algorithm, called the Silent Pruning Algorithm (SPA), for designing artificial neural networks (ANNs). This algorithm prunes hidden neurons during the training process of ANNs according to their ranks computed with LZC. LZC counts the number of unique patterns in a time sequence as a measure of…
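A common dictionary-based variant of the Lempel-Ziv complexity measure, counting unique patterns as the sequence is parsed left to right; this is a generic LZC sketch, not necessarily the exact variant SPA uses:

```python
def lempel_ziv_complexity(s):
    # Parse s left to right, extending the current phrase until it is
    # new, then store it; the count of stored phrases is the complexity.
    patterns = set()
    ind, inc = 0, 1
    while ind + inc <= len(s):
        sub = s[ind:ind + inc]
        if sub in patterns:
            inc += 1                 # phrase seen before: extend it
        else:
            patterns.add(sub)        # new phrase: record and restart
            ind += inc
            inc = 1
    return len(patterns)
```

A constant sequence such as `"0000"` parses into few phrases (low complexity), while an irregular sequence yields more unique phrases, which is the ranking signal the abstract describes.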