Large Patterns Make Great Symbols: An Example of Learning from Example

@inproceedings{Kanerva1998LargePM,
  title={Large Patterns Make Great Symbols: An Example of Learning from Example},
  author={Pentti Kanerva},
  booktitle={Hybrid Neural Systems},
  year={1998}
}
  • P. Kanerva
  • Published in Hybrid Neural Systems 4 December 1998
  • Mathematics, Computer Science
We look at a distributed representation of structure with variable binding that is natural for neural nets and that allows traditional symbolic representation and processing. The representation supports learning from example. This is demonstrated by taking several instances of the mother-of relation implying the parent-of relation, by encoding them into a mapping vector, and by showing that the mapping vector maps new instances of mother-of into parent-of. Possible implications for AI are…
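The machinery behind this demonstration is Kanerva's binary spatter coding: binding is componentwise XOR, bundling is a bitwise majority vote, and similarity is measured by normalized Hamming distance. The sketch below reconstructs the mother-of/parent-of experiment with those operations; the dimensionality, the role vectors (`agent`, `patient`), and the exact record encoding are illustrative assumptions, not necessarily the paper's verbatim setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # dimensionality of the binary spatter codes (assumed)

def rand_vec():
    """A random dense binary vector: the 'large pattern' used as a symbol."""
    return rng.integers(0, 2, N, dtype=np.uint8)

def bind(a, b):
    """Binding (its own inverse) is componentwise XOR."""
    return a ^ b

def bundle(vs):
    """Bundling is a bitwise majority vote; ties are broken randomly."""
    s = np.sum(vs, axis=0, dtype=np.int64)
    out = (2 * s > len(vs)).astype(np.uint8)
    ties = 2 * s == len(vs)
    out[ties] = rng.integers(0, 2, int(ties.sum()), dtype=np.uint8)
    return out

def dist(a, b):
    """Normalized Hamming distance: 0.5 is chance for unrelated vectors."""
    return np.count_nonzero(a != b) / N

# Relation and role vectors; a two-place relation r(x, y) is encoded
# holistically as a bundle of the relation name and role-filler bindings.
mother, parent, agent, patient = (rand_vec() for _ in range(4))

def mother_of(x, y):
    return bundle([mother, bind(agent, x), bind(patient, y)])

def parent_of(x, y):
    return bundle([parent, bind(agent, x), bind(patient, y)])

# Learn the mapping vector M from three example pairs: each example
# contributes the XOR of its mother-of and parent-of encodings.
people = {name: rand_vec() for name in "ABCDEFUV"}
pairs = [("A", "B"), ("C", "D"), ("E", "F")]
M = bundle([bind(mother_of(people[x], people[y]),
                 parent_of(people[x], people[y])) for x, y in pairs])

# Apply M to an unseen instance: mother-of(U, V) is mapped much closer to
# parent-of(U, V) than to the role-swapped encoding parent-of(V, U).
mapped = bind(M, mother_of(people["U"], people["V"]))
print(dist(mapped, parent_of(people["U"], people["V"])))  # small, ~0.25
print(dist(mapped, parent_of(people["V"], people["U"])))  # clearly larger
```

With 10,000-bit vectors the two printed distances should separate cleanly (on the order of 0.25 versus 0.4), which is the sense in which the mapping vector generalizes from three examples to an unseen pair.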

Citations

Analogical mapping and inference with binary spatter codes and sparse distributed memory
Studies a novel VSA network for the analogical mapping of compositional structures that integrates an associative memory known as sparse distributed memory (SDM); finds that non-commutative binding requires sparse activation of the SDM and that 10-20% concept-specific activation of neurons is optimal.
Holistic processing of hierarchical structures in connectionist networks
The ability to distinguish and perform a number of different structure-sensitive operations is one step towards a connectionist architecture that is capable of modelling complex high-level cognitive tasks such as natural language processing and logical inference.
Resonator networks for factoring distributed representations of data structures
Proposes an efficient solution to a hard combinatorial search problem that arises when decoding elements of a VSA data structure: the factorization of products of multiple code vectors, solved by a new type of recurrent neural network that interleaves VSA multiplication operations and pattern completion.
Analogical Mapping with Sparse Distributed Memory: A Simple Model that Learns to Generalize from Examples
The model can learn analogical mappings of generic two-place relationships; the calculated error probabilities for recall and generalization indicate that the optimal size of the memory scales with the number of different mapping examples learned and that the sparseness of the memory is important.
Some approaches to analogical mapping with structure-sensitive distributed representations
This paper presents some techniques for analogical mapping using associative-projective neural networks (APNNs) to encode both surface and structural similarity of analogical episodes.
Imitation of honey bees’ concept learning processes using Vector Symbolic Architectures
Shows that a class of simple artificial systems can reproduce the learning behaviors of certain living organisms without requiring computationally intensive cognitive architectures, and that it is possible in some cases to implement rather advanced cognitive behavior using simple techniques.
Resonator circuits: a neural network for efficiently solving factorization problems
For the brain to make sense of knowledge, it must not only be able to represent features of sensory inputs, but also be able to store and manipulate this information within data…
Hybrid Neural Systems
Presents an overview of hybrid neural systems: lessons from the past, current issues, and future research directions in extracting the knowledge embedded in artificial neural networks.
Resonator Networks, 1: An Efficient Solution for Factoring High-Dimensional, Distributed Representations of Data Structures
Proposes an efficient solution to a hard combinatorial search problem that arises when decoding elements of a VSA data structure: the factorization of products of multiple code vectors (a sketch of the factoring iteration follows this list).
Randomly connected sigma–pi neurons can form associator networks
  • T. Plate
  • Computer Science, Medicine
  • Network
  • 2000
A set of sigma–pi units randomly connected to two input vectors forms a type of hetero-associator related to convolution- and matrix-based associative memories; it encodes information in activation values rather than in weight values, which makes information about relationships accessible to further processing.
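Two of the entries above describe the resonator network, which recovers the factors of a bound composite by interleaving unbinding with cleanup against per-factor codebooks. A minimal sketch of that iteration, assuming bipolar (±1) vectors with Hadamard-product binding; the sizes `N` and `K` and the three-factor setup are illustrative choices, not the published experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 2000, 30  # vector dimension, entries per codebook (assumed sizes)

def codebook():
    return rng.choice([-1, 1], size=(K, N))

A, B, C = codebook(), codebook(), codebook()

# Composite to factor: the Hadamard product of one vector from each codebook.
ia, ib, ic = rng.integers(0, K, size=3)
s = A[ia] * B[ib] * C[ic]

def cleanup(x, book):
    """Project similarities through the codebook and re-binarize."""
    return np.where(book.T @ (book @ x) >= 0, 1, -1)

# Initialize each estimate to the superposition of its whole codebook.
a_hat = np.where(A.sum(0) >= 0, 1, -1)
b_hat = np.where(B.sum(0) >= 0, 1, -1)
c_hat = np.where(C.sum(0) >= 0, 1, -1)

# Resonator iterations: unbind the current estimates of the other two
# factors from the composite, then clean up against the codebook.
for _ in range(100):
    a_hat = cleanup(s * b_hat * c_hat, A)
    b_hat = cleanup(s * a_hat * c_hat, B)
    c_hat = cleanup(s * a_hat * b_hat, C)

# Read out the factor indices by nearest codebook entry.
print((A @ a_hat).argmax() == ia,
      (B @ b_hat).argmax() == ib,
      (C @ c_hat).argmax() == ic)  # should typically print True True True
```

With a search space of K³ = 27,000 combinations and N = 2000, the iteration should typically settle on the correct factors within a few dozen updates, though convergence is not guaranteed in general.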

References

Showing 1-10 of 16 references
Recursive Distributed Representations
This paper presents a connectionist architecture which automatically develops compact distributed representations for variable-sized recursive data structures, as well as efficient accessing mechanisms for them.
Features of distributed representations for tree-structures: A study of RAAM
An in-depth analysis of the properties of patterns generated by the Recursive Auto-Associative Memory, based on the idea that representational features can be detected by a classification network, shows that the structure supplied during training is maintained and is extractable from the generated pattern.
Syntactic Transformations on Distributed Representations
This paper explores the possibility of moving beyond implementation by exploiting holistic structure-sensitive operations on distributed representations using Pollack’s Recursive Auto-Associative Memory, demonstrating that the implicit structure present in these representations can be used for a kind of structure-sensitive processing unique to the connectionist domain.
Distributed representations and nested compositional structure
This thesis proposes a method for the distributed representation of nested structure in connectionist representations and shows that it is possible to use dot-product comparisons of HRRs for nested structures to estimate the analogical similarity of the structures.
Distributed representations of structure: A theory of analogical access and mapping.
Presents an integrated theory of analogical access and mapping, instantiated in a computational model called LISA (Learning and Inference with Schemas and Analogies), and suggests that the architecture of LISA can provide computational explanations of properties of the human cognitive architecture.
From simple associations to systematic reasoning: A connectionist representation of rules, variables and dynamic bindings using temporal synchrony
Human agents draw a variety of inferences effortlessly, spontaneously, and with remarkable efficiency – as though these inferences were a reflexive response of their cognitive apparatus. Furthermore, …
Mapping Part-Whole Hierarchies into Connectionist Networks
Three different ways of mapping part-whole hierarchies into connectionist networks are described, suggesting that neural networks have two quite different methods for performing inference.
Hybrid Approaches to Neural Network-based Language Processing
It is argued that the hybrid approach to artificial neural network-based language processing has a lot of potential to overcome the gap between a neural level and a symbolic conceptual level.
Binary Spatter-Coding of Ordered K-Tuples
This paper describes how spatter coding leads to binary HRRs, how the fields of a record are encoded into a long binary word that itself has no fields, and how they are extracted from such a word (see the sketch after this reference list).
A Common Framework for Distributed Representation Schemes for Compositional Structure
Over the last few years a number of schemes for encoding compositional structure in distributed representations have been proposed, e.g., Smolensky's tensor products, Pollack's RAAMs, Plate's HRRs, …
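As the Binary Spatter-Coding entry above notes, the fields of a record can be packed into one long binary word that itself has no fields, and a field can be extracted again by unbinding with its role and cleaning up against an item memory. A minimal sketch of that round trip, reusing the same XOR-bind and majority-bundle operations as the earlier sketch; all role and filler names are hypothetical examples.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000  # dimensionality of the binary word (assumed)

def rand_vec():
    return rng.integers(0, 2, N, dtype=np.uint8)

def bundle(vs):
    """Bitwise majority vote; ties broken randomly."""
    s = np.sum(vs, axis=0, dtype=np.int64)
    out = (2 * s > len(vs)).astype(np.uint8)
    ties = 2 * s == len(vs)
    out[ties] = rng.integers(0, 2, int(ties.sum()), dtype=np.uint8)
    return out

# Hypothetical roles and fillers, stored in an item memory for cleanup.
roles = {r: rand_vec() for r in ("name", "age", "city")}
items = {f: rand_vec() for f in ("alice", "thirtysix", "oslo", "bob", "paris")}

# Encode: bundle the XOR role-filler bindings into one fieldless word.
record = bundle([roles["name"] ^ items["alice"],
                 roles["age"] ^ items["thirtysix"],
                 roles["city"] ^ items["oslo"]])

# Extract a field: XOR with the role gives a noisy filler, which is cleaned
# up by nearest neighbor (Hamming distance) over the item memory.
noisy = record ^ roles["city"]
best = min(items, key=lambda f: np.count_nonzero(items[f] != noisy))
print(best)  # expected: oslo
```

The extracted vector agrees with the stored filler on roughly three quarters of its bits, while every other item-memory entry sits near chance, so the nearest-neighbor cleanup recovers the field reliably at this dimensionality.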