Circular backpropagation networks for classification

@article{Ridella1997CircularBN,
  title={Circular backpropagation networks for classification},
  author={Sandro Ridella and Stefano Rovetta and Rodolfo Zunino},
  journal={IEEE Transactions on Neural Networks},
  year={1997},
  volume={8},
  number={1},
  pages={84-97}
}
The class of mapping networks is a general family of tools for performing a wide variety of tasks. This paper presents a standardized, uniform representation for this class of networks, and introduces a simple modification of the multilayer perceptron with interesting practical properties, especially well suited to pattern classification tasks. The proposed model unifies the two main representation paradigms found in the class of mapping networks for classification, namely, the surface…
CBP networks as a generalized neural model
TLDR
The proposed model unifies the two main representation paradigms found in the class of mapping networks for classification, namely, the surface-based and the prototype-based schemes, while retaining the advantage of being trainable by back-propagation.
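The augmentation behind the CBP model is compact enough to sketch. The NumPy fragment below (a minimal illustration under assumed layer sizes and sigmoid activations, not the authors' implementation) appends the sum of squared inputs as one extra input to an otherwise standard one-hidden-layer perceptron, so that each hidden unit can realize either a planar or a spherical decision surface:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbp_forward(x, W1, b1, W2, b2):
    """Forward pass of a circular backpropagation (CBP) network.

    Identical to a standard MLP except that the input vector is
    augmented with one extra component, the sum of squares of the
    inputs. A hidden unit's weight on that component selects between
    a hyperplane (weight near 0, MLP-like) and a hypersphere
    (nonzero weight, RBF-like) decision surface.
    """
    x_aug = np.append(x, np.dot(x, x))  # extra input: sum_i x_i^2
    h = sigmoid(W1 @ x_aug + b1)        # hidden layer
    return sigmoid(W2 @ h + b2)         # output layer

# Hypothetical sizes: 2 inputs (+1 augmented), 4 hidden units, 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
print(cbp_forward(np.array([0.5, -1.0]), W1, b1, W2, b2))
```

Because the extra component is a fixed function of the input, ordinary backpropagation applies unchanged, which is what keeps the model trainable exactly like an MLP.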
Adaptive RBF neural networks for pattern classifications
  • G. Daqi, Yang Genxing
  • Computer Science
    Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290)
  • 2002
TLDR
A classification application shows that the proposed adaptive algorithm can optimally determine the structure and parameters of RBF-LBF networks according to the characteristics of the sample distribution, and achieves a higher convergence rate and classification precision, among other advantages, than feedforward two-layered LBF and RBF networks.
Classification, Association and Pattern Completion using Neural Similarity Based Methods
A framework for Similarity-Based Methods (SBMs) includes many classification models as special cases: neural networks of the Radial Basis Function type, Feature Space Mapping neurofuzzy…
Representation and generalization properties of class-entropy networks
TLDR
The paper proves several theoretical properties of CCE-based networks, considering both convergence during training and generalization ability at run time, and proposes analytical criteria and practical procedures to enhance the generalization performance of the trained networks.
Neural Networks from Similarity Based Perspective
TLDR
A framework for Similarity-Based Methods (SBMs) includes many neural network models as special cases, useful not only for classification and approximation, but also as associative memories, in problems requiring pattern completion, offering an efficient way to deal with missing values.
Enhancing the Generalization Ability of Backpropagation Algorithm through Controlling the Outputs of the Hidden Layers
TLDR
The proposed algorithm, which controls the outputs of the hidden layers, yields better generalization than both the basic backpropagation algorithm and conventional regularization methods such as the Laplace and Gaussian regularizers.
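As a rough illustration of the idea, a control term on the hidden-layer outputs can be folded into the training loss. The specific penalty below, which pulls sigmoid outputs away from saturation toward 0.5, is an assumed stand-in, not the paper's exact formulation:

```python
import numpy as np

def penalized_loss(y_true, y_pred, hidden_out, lam=0.01):
    """Binary cross-entropy plus a penalty on hidden-layer outputs.

    The penalty form is an assumption for illustration: it discourages
    saturated hidden units by pulling sigmoid outputs toward 0.5.
    """
    eps = 1e-12
    ce = -np.mean(y_true * np.log(y_pred + eps)
                  + (1 - y_true) * np.log(1 - y_pred + eps))
    return ce + lam * np.mean((hidden_out - 0.5) ** 2)
```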
From multilayer perceptrons to radial basis function networks: a comparative study
  • S. Ding, C. Xiang
  • Computer Science
    IEEE Conference on Cybernetics and Intelligent Systems, 2004.
  • 2004
A special additional input, the sum of the squares of the other inputs, is added to the standard multilayer perceptron, so that the multilayer perceptron works similarly to the radial basis…
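Why this single quadratic input bridges the two paradigms is a one-line computation. Writing w for the ordinary input weights, b for the bias, and w_{n+1} for the weight on the sum-of-squares input, the unit's zero-preactivation set is, completing the square (assuming w_{n+1} ≠ 0):

```latex
\mathbf{w}^{\top}\mathbf{x} + w_{n+1}\|\mathbf{x}\|^{2} + b = 0
\;\Longleftrightarrow\;
\left\| \mathbf{x} + \frac{\mathbf{w}}{2 w_{n+1}} \right\|^{2}
  = \frac{\|\mathbf{w}\|^{2}}{4 w_{n+1}^{2}} - \frac{b}{w_{n+1}}
```

a hypersphere, which degenerates to the usual hyperplane w·x + b = 0 as w_{n+1} → 0.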

References

SHOWING 1-10 OF 47 REFERENCES
Generalization and PAC learning: some new results for the class of generalized single-layer networks
TLDR
It is shown that the use of self-structuring techniques for GSLNs may reduce the number of training examples sufficient to guarantee good generalization performance, and an explanation for the fact that GSLNs can require a relatively large number of weights is provided.
On the Relationship between Generalization Error, Hypothesis Complexity, and Sample Complexity for Radial Basis Functions
TLDR
This article shows that the generalization error can be decomposed into two terms: the approximation error, due to the insufficient representational capacity of a finite sized network, and the estimation error, due to insufficient information about the target function because of the finite number of samples.
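Schematically, the decomposition has the following shape for a network with n basis functions trained on l examples in d dimensions (rates shown only up to constants and logarithmic details; the precise statement is in the paper):

```latex
\mathbb{E}\!\left[(f_0 - \hat{f}_{n,l})^{2}\right]
  \;\le\; \underbrace{O\!\left(\frac{1}{n}\right)}_{\text{approximation}}
  \;+\; \underbrace{O\!\left(\sqrt{\frac{n\,d\,\ln(n l)}{l}}\right)}_{\text{estimation}}
```

The two terms trade off in n, which is what turns the choice of network size into a capacity-control problem.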
Neural Networks for Pattern Recognition
TLDR
This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Bounds on the number of hidden neurons in multilayer perceptrons
TLDR
A least upper bound is derived for the number of hidden neurons needed to realize an arbitrary function which maps from a finite subset of E^n into E^d, and a nontrivial lower bound is obtained for realizations of injective functions.
Boosting the Performance of RBF Networks with Dynamic Decay Adjustment
TLDR
The Dynamic Decay Adjustment (DDA) algorithm is introduced, which combines the constructive nature of the P-RCE algorithm with independent adaptation of each prototype's decay factor; the adaptation is class dependent and distinguishes between different neighbours.
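A compressed sketch of one DDA training epoch conveys the mechanics (the 0.4/0.2 thresholds and Gaussian prototypes follow the usual description of the algorithm; the remaining details here are assumptions):

```python
import numpy as np

THETA_PLUS, THETA_MINUS = 0.4, 0.2  # commonly used default thresholds

def activation(proto, x):
    """Gaussian response of one prototype to input x."""
    return np.exp(-np.sum((x - proto["center"]) ** 2) / proto["sigma2"])

def dda_epoch(prototypes, X, y):
    """One epoch of Dynamic Decay Adjustment (illustrative sketch)."""
    for x, c in zip(X, y):
        # Commit: reinforce a same-class prototype that already covers x,
        # otherwise introduce a new prototype centered at x.
        covered = [p for p in prototypes
                   if p["cls"] == c and activation(p, x) >= THETA_PLUS]
        if covered:
            covered[0]["weight"] += 1.0
        else:
            prototypes.append({"center": x.copy(), "sigma2": 1e6,
                               "cls": c, "weight": 1.0})
        # Shrink: each conflicting-class prototype's decay factor is
        # reduced independently until its response at x is THETA_MINUS.
        for p in prototypes:
            if p["cls"] != c and activation(p, x) > THETA_MINUS:
                d2 = max(np.sum((x - p["center"]) ** 2), 1e-12)
                p["sigma2"] = d2 / -np.log(THETA_MINUS)
    return prototypes
```

Classification then scores each class by the weighted sum of its prototypes' activations.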
Automatic Capacity Tuning of Very Large VC-Dimension Classifiers
TLDR
It is shown that even high-order polynomial classifiers in high dimensional spaces can be trained with a small amount of training data and yet generalize better than classifiers with a smaller VC-dimension.
An introduction to computing with neural nets
TLDR
This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification and exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components.
Counting Function Theorem for Multi-Layer Networks
We show that a randomly selected N-tuple x⃗ of points of R^n with probability > 0 is such that any multilayer perceptron with the first hidden layer composed of h1 threshold logic units can…
Fast Learning in Networks of Locally-Tuned Processing Units
We propose a network architecture which uses a single internal layer of locally-tuned processing units to learn both classification tasks and real-valued function approximations (Moody and Darken…
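The two-phase scheme can be sketched in a few lines: place the centers without supervision (here they are simply passed in, e.g. from k-means or a subsample), then fit the linear output layer in closed form. The width heuristic and function names below are assumptions for illustration:

```python
import numpy as np

def fit_rbf_output(X, y, centers, width):
    """Fit the linear output weights of a Gaussian-RBF network.

    Second phase of the fast-learning scheme: with centers fixed,
    the output layer is a linear least-squares problem solved directly.
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    H = np.exp(-d2 / (2.0 * width ** 2))     # hidden-layer responses
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

def rbf_predict(X, centers, width, w):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2)) @ w
```

Avoiding gradient descent in both phases is what makes training fast relative to backpropagation.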