Supervised neural networks for the classification of structures

@article{Sperduti1997SupervisedNN,
  title={Supervised neural networks for the classification of structures},
  author={Alessandro Sperduti and Antonina Starita},
  journal={IEEE Transactions on Neural Networks},
  year={1997},
  volume={8},
  number={3},
  pages={714--735}
}
Standard neural networks and statistical methods are usually believed to be inadequate when dealing with complex structures because of their feature-based approach. By using generalized recursive neurons, all the supervised networks developed for the classification of sequences, such as backpropagation-through-time networks, real-time recurrent networks, simple recurrent networks, recurrent cascade correlation networks, and neural trees, can, on the whole, be generalized to structures. …
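A minimal sketch of the idea behind a generalized recursive neuron: the neuron's state at a tree node is computed bottom-up from the node's label and the recursively computed states of its children. All names and the scalar single-neuron formulation here are simplifying assumptions for illustration, not the paper's exact equations.

```python
import math

def recursive_neuron(tree, w_label, w_children, bias):
    """Compute a scalar state for a labeled ordered tree, bottom-up.

    tree: (label, [children]) with a numeric label per node.
    w_label: weight applied to the node's own label.
    w_children: one weight per child position (fixed maximum out-degree).
    """
    label, children = tree
    # Net input: bias + weighted label + weighted states of the subtrees.
    s = bias + w_label * label
    for w, child in zip(w_children, children):
        s += w * recursive_neuron(child, w_label, w_children, bias)
    return math.tanh(s)

# A depth-2 tree: root labeled 1.0 with two labeled leaves.
tree = (1.0, [(0.5, []), (-0.5, [])])
state = recursive_neuron(tree, 0.8, [0.3, 0.3], 0.0)
```

The root's state summarizes the whole structure, so a classifier can be trained on it; replacing the fixed child weights with learned matrices and the scalar state with a vector recovers the sequence-network generalizations the abstract describes.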

Citations

Neural Networks for Processing Data Structures
  • A. Sperduti
  • Computer Science
    Summer School on Neural Networks
  • 1997
TLDR
It is not difficult to figure out how to extract tree automata from a neural network for structures, which would allow the above scheme to work the other way around, with a neural module driven by a symbolic subsystem.
A General Framework for Self-Organizing Structure Processing Neural Networks
TLDR
A general recursive dynamic is defined which enables the recursive processing of complex data structures based on recursively computed internal representations of the respective context and allows the transfer of theoretical issues from the SOM literature to the structure processing case.
Artificial Neural Network Models
TLDR
The main models and developments in the broad field of artificial neural networks (ANN) are outlined, from the biological neuron that motivates the initial formal neuron model, the perceptron, to the basic principles of training the corresponding ANN models on an appropriate data collection.
Entropy-based generation of supervised neural networks for classification of structured patterns
TLDR
An entropy-based approach is proposed for constructing neural networks for the classification of acyclic structured patterns; results show that the networks constructed by this method can achieve better performance, with respect to network size, learning speed, or recognition accuracy, than networks obtained by other methods.
Theoretical properties of recursive neural networks with linear neurons
TLDR
Some theoretical results about linear recursive neural networks are presented that allow one to establish conditions on their dynamical properties and their capability to encode and classify structured information.
A Simple and Effective Neural Model for the Classification of Structured Patterns
TLDR
The idea is to describe a graph as an algebraic relation, i.e. as a subset of the Cartesian product, and the class-posterior probabilities given the relation are reduced to products of probabilistic quantities estimated using a multilayer perceptron.
A general framework for adaptive processing of data structures
TLDR
The framework described in this paper is an attempt to unify adaptive models like artificial neural nets and belief nets for the problem of processing structured information, in which relations between data variables are expressed by directed acyclic graphs and both numerical and categorical values coexist.
Inductive Learning in Symbolic Domains Using Structure-Driven Recurrent Neural Networks
TLDR
A connectionist architecture is presented together with a novel supervised learning scheme that is capable of solving inductive inference tasks on complex symbolic structures of arbitrary size, and first results are given from experiments with inductive learning tasks consisting in the classification of logical terms.
Learning Efficiently with Neural Networks: A Theoretical Comparison between Structured and Flat Representations
TLDR
The message of this paper is that, whenever structured representations are available, they should be preferred to "flat" (array based) representations because they are likely to simplify learning in terms of time complexity.
The loading problem for recursive neural networks

References

SHOWING 1-10 OF 47 REFERENCES
Neural trees: a new tool for classification
TLDR
A new classifier based on neural network techniques, inspired by a growth algorithm recently introduced for feedforward neural networks, which is easily and efficiently extended to classification in a multiclass problem.
Combining neural networks and decision trees
TLDR
Simulation results are presented on a speaker-independent vowel recognition task which show the superiority of the NTN approach over both MLPs and decision trees.
Entropy Nets: From Decision Trees to Neural Networks
TLDR
This paper shows how the mapping of decision trees into a multilayer neural network structure can be exploited for the systematic design of a class of layered neural networks, called entropy nets, that have far fewer connections.
Constructive learning of recurrent neural networks: limitations of recurrent cascade correlation and a simple solution
TLDR
This work proves that one method, recurrent cascade correlation, has fundamental limitations in representation and thus in its learning capabilities, and gives a "preliminary" approach on how to get around these limitations by devising a simple constructive training method.
The Cascade-Correlation Learning Architecture
TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Learning Distributed Representations for the Classification of Terms
TLDR
The intended applications of the approach described in this paper are hybrid (symbolic/connectionist) systems, where the connectionist part has to solve logic-oriented inductive learning tasks similar to the term-classification problems used in the experiments.
A Learning Algorithm for Continually Running Fully Recurrent Neural Networks
The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
A performance comparison of trained multilayer perceptrons and trained classification trees
  • L. Atlas, J. Connor, Y. Muthusamy
  • Computer Science
    Conference Proceedings., IEEE International Conference on Systems, Man and Cybernetics
  • 1989
TLDR
There is not enough theoretical basis for the clear-cut superiority of one technique over the other in terms of classification and prediction outside the training set, but the authors are confident that the univariate version of the trained classification trees does not perform as well as the multilayer perceptron.
Stability properties of labeling recursive auto-associative memory
  • A. Sperduti
  • Computer Science
    IEEE Trans. Neural Networks
  • 1995
TLDR
The authors give sufficient conditions under which the property of asymptotical stability of a fixed point in one particular constrained version of the recurrent network can be extended to related fixed points in different constrained versions of the network.
A soft-competitive splitting rule for adaptive tree-structured neural networks
  • M. Perrone
  • Computer Science
    [Proceedings 1992] IJCNN International Joint Conference on Neural Networks
  • 1992
TLDR
It is demonstrated that this algorithm grows robust, honest estimators and it is shown that the tree outperforms backpropagation on a 10-class, 240-dimensional optical character recognition classification task.