Corpus ID: 2405482

The Comparison and Combination of Genetic and Gradient Descent Learning in Recurrent Neural Networks: An Application to Speech Phoneme Classification

@inproceedings{Chandra2007TheCA,
  title={The Comparison and Combination of Genetic and Gradient Descent Learning in Recurrent Neural Networks: An Application to Speech Phoneme Classification},
  author={Rohitash Chandra and C. Omlin},
  booktitle={Artificial Intelligence and Pattern Recognition},
  year={2007}
}
We present a training approach for recurrent neural networks that combines evolutionary and gradient descent learning. We train the weights of the network using genetic algorithms. We then apply gradient descent learning to the knowledge acquired by genetic training in order to refine it further. For comparison, we also train the same network topology with genetic neural learning and with gradient descent learning alone. We apply these training methods to speech phoneme…
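As a rough illustration of the two-stage idea described in the abstract (not the authors' code), the sketch below evolves the flattened weight vector of a small Elman RNN with a basic real-coded genetic algorithm and then refines the best individual with gradient descent. The toy task, network size, operators, and all hyperparameters are assumptions made for the example.

```python
# Minimal sketch of hybrid training: GA search over RNN weights, then gradient refinement.
# All settings here are illustrative assumptions, not the paper's configuration.
import numpy as np

rng = np.random.default_rng(0)
I, H = 2, 6                            # input and hidden sizes (assumed)
n_params = H * (I + H + 1) + H + 1     # W_in, W_rec, b_h, w_out, b_out

def unpack(theta):
    i = 0
    W_in = theta[i:i + H * I].reshape(H, I); i += H * I
    W_rec = theta[i:i + H * H].reshape(H, H); i += H * H
    b_h = theta[i:i + H]; i += H
    w_out = theta[i:i + H]; i += H
    return W_in, W_rec, b_h, w_out, theta[i]

def forward(theta, X):
    """Run the Elman RNN over a sequence X (T x I); return final sigmoid output."""
    W_in, W_rec, b_h, w_out, b_out = unpack(theta)
    h = np.zeros(H)
    for x in X:
        h = np.tanh(W_in @ x + W_rec @ h + b_h)
    return 1.0 / (1.0 + np.exp(-(w_out @ h + b_out)))

def loss(theta, data):
    return np.mean([(forward(theta, X) - y) ** 2 for X, y in data])

# Toy two-class sequence task: is the mean of the first input channel positive?
data = [(X, float(X[:, 0].mean() > 0)) for X in rng.normal(size=(60, 8, I))]

# --- Stage 1: genetic algorithm on the flattened weight vector ---
pop = rng.normal(scale=0.5, size=(30, n_params))
for gen in range(40):
    fit = np.array([loss(ind, data) for ind in pop])
    parents = pop[np.argsort(fit)[:10]]              # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(n_params) < 0.5, a, b)          # uniform crossover
        child = child + rng.normal(scale=0.1, size=n_params) * (rng.random(n_params) < 0.1)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([loss(ind, data) for ind in pop])]

# --- Stage 2: gradient descent refinement of the GA solution ---
# Finite-difference gradients keep the sketch short; a real implementation would use BPTT.
def num_grad(theta, data, eps=1e-4):
    g = np.zeros_like(theta)
    for j in range(len(theta)):
        e = np.zeros_like(theta); e[j] = eps
        g[j] = (loss(theta + e, data) - loss(theta - e, data)) / (2 * eps)
    return g

theta = best.copy()
for step in range(100):
    theta -= 0.5 * num_grad(theta, data)

print("GA loss:", round(loss(best, data), 4),
      "-> after gradient refinement:", round(loss(theta, data), 4))
```

In this sketch the GA stage supplies a coarse solution and the gradient stage fine-tunes it; a fuller implementation would replace the finite-difference gradient with backpropagation through time.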
Problem Decomposition and Adaptation in Cooperative Neuro-evolution
TLDR
New forms of problem decomposition are suggested, based on a novel and intuitive choice of modularity, that improve performance in terms of optimization time, scalability and robustness, and are applied to training recurrent neural networks on chaotic time series problems.
Combining Dialectical Optimization and Gradient Descent Methods for Improving the Accuracy of Straight Line Segment Classifiers
TLDR
This paper proposes a method that combines the dialectical optimization method (DOM) with gradient descent for solving this optimization problem, and shows that the proposed algorithm achieves higher classification rates than the gradient descent method alone and than the combination of gradient descent with genetic algorithms.
Building Subcomponents in the Cooperative Coevolution Framework for Training Recurrent Neural Networks: Technical Report
TLDR
A new encoding scheme for building subcomponents, based on the functional properties of a neuron, is proposed and compared with the best encoding scheme from the literature; it is shown to learn from strings with lengths of up to 500 time lags.
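For illustration only (the report's exact encoding may differ), the sketch below groups an Elman RNN's parameters into one subcomponent per hidden neuron, following the general idea of a neuron-level decomposition for cooperative coevolution; the network sizes are assumptions.

```python
# Illustrative neuron-level decomposition: each subcomponent owns one hidden
# neuron's incoming input weights, incoming recurrent weights, and bias.
import numpy as np

def neuron_subcomponents(n_input, n_hidden):
    """Return index groups: subcomponent i owns hidden neuron i's incoming connections."""
    groups = []
    for i in range(n_hidden):
        groups.append({
            "input_weights": [(i, j) for j in range(n_input)],       # row i of W_in
            "recurrent_weights": [(i, j) for j in range(n_hidden)],  # row i of W_rec
            "bias": i,                                               # b_h[i]
        })
    return groups

for k, g in enumerate(neuron_subcomponents(n_input=2, n_hidden=3)):
    size = len(g["input_weights"]) + len(g["recurrent_weights"]) + 1
    print(f"subcomponent {k}: {size} parameters")
```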
COVID-19 sentiment analysis via deep learning during the rise of novel cases
TLDR
The results indicate that the majority of the tweets were positive, with high levels of optimism during the rise of the novel COVID-19 cases, and that the number of tweets dropped significantly towards the peak; the predictions generally indicate that although the majority were optimistic, a significant group of the population was annoyed by the way the pandemic was handled by the authorities.
A meta-heuristic paradigm for solving the forward kinematics of 6–6 general parallel manipulator
The forward kinematics of the general Gough platform, namely the 6–6 parallel manipulator, is solved using hybrid meta-heuristic techniques in which the simulated annealing algorithm replaces the mutation operator in a genetic algorithm.
Hybrid MetaHeuristic Paradigm for the Forward Kinematics of 6-6 General Parallel Manipulator
The forward kinematics of the 6-6 leg parallel manipulator is solved using hybrid meta-heuristic techniques in which the simulated annealing algorithm replaces the mutation operator in a genetic algorithm.
Solving the forward kinematics of the 3RPR planar parallel manipulator using a hybrid meta-heuristic paradigm
The forward kinematics of the 3-RPR parallel manipulator is solved using a hybrid meta-heuristic technique in which the simulated annealing algorithm replaces the mutation operator in a genetic algorithm.
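The three papers above share one algorithmic device: simulated annealing takes the place of the mutation operator inside a genetic algorithm. The generic sketch below shows that structure on a toy objective standing in for the manipulator's forward-kinematics error function; it is not taken from these papers, and all settings are assumptions.

```python
# Generic sketch: a GA whose mutation step is replaced by a short simulated-annealing pass.
import numpy as np

rng = np.random.default_rng(1)
DIM = 6

def objective(x):
    return float(np.sum(x ** 2))   # toy stand-in for the kinematic error function

def sa_mutate(x, steps=20, t0=1.0, cooling=0.9):
    """Simulated annealing used in place of mutation: perturb and accept by the Metropolis rule."""
    cur, f_cur, t = x.copy(), objective(x), t0
    for _ in range(steps):
        cand = cur + rng.normal(scale=0.1, size=DIM)
        f_cand = objective(cand)
        if f_cand < f_cur or rng.random() < np.exp((f_cur - f_cand) / t):
            cur, f_cur = cand, f_cand
        t *= cooling
    return cur

pop = rng.uniform(-5, 5, size=(20, DIM))
for gen in range(30):
    fitness = np.array([objective(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:8]]
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(8, size=2)]
        child = np.where(rng.random(DIM) < 0.5, a, b)   # uniform crossover
        children.append(sa_mutate(child))               # SA replaces the mutation operator
    pop = np.vstack([parents, children])

print("best objective:", min(objective(ind) for ind in pop))
```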

References

Showing 1–10 of 20 references
Pruning recurrent neural networks for improved generalization performance
TLDR
This work presents a simple pruning heuristic that significantly improves the generalization performance of trained recurrent networks and shows that rules extracted from networks trained with this heuristic are more consistent with the rules to be learned.
Learning long-term dependencies in NARX recurrent neural networks
TLDR
It is shown that the long-term dependencies problem is lessened for a class of architectures called nonlinear autoregressive models with exogenous inputs (NARX) recurrent neural networks, which have powerful representational capabilities.
Adding learning to cellular genetic algorithms for training recurrent neural networks
TLDR
A hybrid optimization algorithm is presented which combines local search and cellular genetic algorithms (GAs) for training recurrent neural networks (RNNs); it is concluded that learning should not be too extensive if the hybrid algorithm is to benefit from learning.
Improving the robustness of noisy MFCC features using minimal recurrent neural networks
  • I. Potamitis, N. Fakotakis, G. Kokkinakis
  • Computer Science
  • Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium
  • 2000
TLDR
A novel technique is described for improving speech recognition performance in the car environment for SNRs ranging from -10 to 20 dB; it results in neural networks with a much smaller total number of weights than previously reported and consequently in faster training and execution.
Learning long-term dependencies with gradient descent is difficult
TLDR
This work shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching onto information for long periods.
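A small numerical sketch (an illustration, not from the paper) of why this happens: the gradient of a later hidden state with respect to an earlier one is a product of per-step Jacobians, and with a contracting recurrent map its norm typically decays (or, with large weights, explodes) exponentially with the time gap. The network size and weight scale below are assumptions.

```python
# Numerical illustration of vanishing gradients in a simple RNN h_t = tanh(W h_{t-1} + x_t).
import numpy as np

rng = np.random.default_rng(2)
H = 8
W = rng.normal(scale=0.5 / np.sqrt(H), size=(H, H))   # small weights -> contracting dynamics

h = rng.normal(size=H)
J = np.eye(H)                                          # running product d h_t / d h_0
for t in range(1, 31):
    h = np.tanh(W @ h + rng.normal(size=H))
    J = np.diag(1.0 - h ** 2) @ W @ J                  # Jacobian of this step times the running product
    if t in (1, 5, 10, 20, 30):
        print(f"time gap {t:2d}: ||d h_t / d h_0|| ~ {np.linalg.norm(J, 2):.2e}")
```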
Maximum likelihood linear transformations for HMM-based speech recognition
  • M. Gales
  • Computer Science
  • Comput. Speech Lang.
  • 1998
TLDR
The paper compares the two possible forms of model-based transforms: unconstrained, where any combination of mean and variance transform may be used, and constrained, which requires the variance transform to have the same form as the mean transform.
First-Order Recurrent Neural Networks and Deterministic Finite State Automata
TLDR
The correspondence between first-order recurrent neural networks and deterministic finite state automata is examined in detail, showing two major stages in the learning process and a measure based on clustering that is correlated with the stability of the networks.
An application of recurrent nets to phone probability estimation
  • A. J. Robinson
  • Computer Science, Medicine
  • IEEE Trans. Neural Networks
  • 1994
TLDR
Recognition results are presented for the DARPA TIMIT and Resource Management tasks, and it is concluded that recurrent nets are competitive with traditional means for performing phone probability estimation.
Knowledge-Based Artificial Neural Networks
TLDR
These tests show that the networks created by KBANN generalize better than a wide variety of learning systems, as well as several techniques proposed by biologists.
Equivalence in knowledge representation: automata, recurrent neural networks, and dynamical fuzzy systems
TLDR
Various knowledge equivalence representations between neural and fuzzy systems and models of automata are proved, as is the stability of the fuzzy finite-state dynamics of the constructed neural networks for finite values of the network weights.