Corpus ID: 54446594

Modelling Identity Rules with Neural Networks

@article{Weyde2019ModellingIR,
  title={Modelling Identity Rules with Neural Networks},
  author={Tillman Weyde and Radha Manisha Kopparti},
  journal={ArXiv},
  year={2019},
  volume={abs/1812.02616}
}
In this paper, we show that standard feed-forward and recurrent neural networks fail to learn abstract patterns based on identity rules. We propose Relation Based Pattern (RBP) extensions to neural network structures that solve this problem and answer, as well as raise, questions about integrating structures for inductive bias into neural networks. Examples of abstract patterns are the sequence patterns ABA and ABB where A or B can be any object. These were introduced by Marcus et al. (1999) who…
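A brief illustration may help make the abstract pattern concrete. The following minimal Python sketch is not from the paper; the syllable vocabulary and the function names are invented for illustration. It instantiates the ABA and ABB patterns with arbitrary tokens and checks them with an explicit equality test between positions, which is the identity relation that, according to the paper, standard feed-forward and recurrent networks fail to learn from examples alone and that RBP structures supply as an inductive bias.

import random

# Arbitrary syllable vocabulary; A and B can be any object, so the rule must
# generalise to syllables never seen during training.
VOCAB = ["ga", "ti", "na", "li", "wo", "fe"]

def make_sequence(pattern):
    """Instantiate an abstract pattern such as 'ABA' or 'ABB' with two random, distinct syllables."""
    a, b = random.sample(VOCAB, 2)
    return [a if p == "A" else b for p in pattern]

def follows_rule(seq, pattern="ABA"):
    """Check a sequence against the abstract pattern using only identity relations:
    it follows the rule if it equals the pattern instantiated with its own first (A)
    and second (B) tokens, e.g. 'ABA' requires seq[2] == seq[0]."""
    a, b = seq[0], seq[1]
    return seq == [a if p == "A" else b for p in pattern]

if __name__ == "__main__":
    s = make_sequence("ABB")            # e.g. ['wo', 'ti', 'ti']
    print(s, follows_rule(s, "ABB"))    # True, regardless of which syllables were drawn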

Citations

Weight Priors for Learning Identity Relations
TLDR
This work extends RBP by realising it as a Bayesian prior on network weights to model identity relations, and shows that these Bayesian weight priors lead to perfect generalisation when learning identity-based relations without impeding general neural network learning.
Relational Weight Priors in Neural Networks for Abstract Pattern Learning and Language Modelling
TLDR
Embedded Relation Based Patterns are proposed as a novel way to create a relational inductive bias that encourages learning of equality and distance-based relations for abstract patterns; this consistently improves over RBP and over standard networks, showing that it enables abstract pattern learning, which contributes to performance in natural language tasks.
The Next Big Thing(s) in Unsupervised Machine Learning: Five Lessons from Infant Learning
TLDR
It is argued that developmental science of infant cognition might hold the key to unlocking the next generation of unsupervised learning approaches, and five crucial factors enabling infants' quality and speed of learning are identified.
Do graded representations support abstract thought?
Relational reasoning requires the reasoner to go beyond her/his specific experience, abstracting from items to make inferences about categories and kinds on the basis of structural or analogical…
Factors for the Generalisation of Identity Relations by Neural Networks
TLDR
Various factors in the neural network architecture and learning process are explored to determine whether they make a difference to the generalisation of equality detection by neural networks without and with DR units in early and mid fusion architectures.

References

Showing 1-10 of 51 references
Pre-Wiring and Pre-Training: What Does a Neural Network Need to Learn Truly General Identity Rules?
TLDR
It is argued that, in order to simulate human-like learning of grammatical rules, a neural network model should not be used as a tabula rasa, but rather, the initial wiring of the neural connections and the experience acquired prior to the actual task should be incorporated into the model.
Measuring abstract reasoning in neural networks
TLDR
A dataset and challenge designed to probe abstract reasoning, inspired by a well-known human IQ test, is proposed, and ways to both measure and induce stronger abstract reasoning in neural networks are introduced.
A Spatiotemporal Connectionist Model of Algebraic Rule-learning
Recent experiments by Marcus, Vijayan, Rao, and Vishton suggest that infants are capable of extracting and using abstract algebraic rules such as "the first item X is the same as the third item Y". Such…
Generalization in a Model of Infant Sensitivity to Syntactic Variation
Computer simulations show that an unstructured neural network model (Shultz & Bale, 2001) covers the essential features of infant differentiation of simple grammars in an artificial language, and…
Neural Network Simulation of Infant Familiarization to Artificial Sentences: Rule-Like Behavior Without Explicit Rules and Variables.
  • T. Shultz, Alan C. Bale
  • Medicine, Computer Science
  • Infancy: the official journal of the International Society on Infant Studies
  • 2001
TLDR
The evidence, from these and other simulations, supports the view that unstructured neural networks can account for the existing infant data, and a variety of predictions suggest the utility of the model in guiding future psychological work.
Neural network processing of natural language: I. Sensitivity to serial, temporal and abstract structure of language in the infant
Well before their first birthday, babies can acquire knowledge of serial order relations (Saffran et al., 1996a), as well as knowledge of more abstract rule-based structural relations (Marcus et al., …
Learning Inductive Biases with Simple Neural Networks
TLDR
It is found that simple neural networks develop a shape bias after seeing as few as 3 examples of 4 object categories, and the development of these biases predicts the onset of vocabulary acceleration in networks, consistent with the developmental process in children.
Finding Structure in Time
TLDR
A proposal along these lines, first described by Jordan (1986), which involves the use of recurrent links to provide networks with a dynamic memory, is developed, and a method for representing lexical categories and the type/token distinction is suggested.
Two Apparent ‘Counterexamples’ To Marcus: A Closer Look
TLDR
It is found that, at first blush, Shultz and Bale's model (2001) replicated the known infant data, but the model largely failed to learn the grammars; serious problems were also found with Altmann and Dienes' model (1999), which fell short of matching any of the infant results and of learning the syntactic structure of the input patterns.
Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks
TLDR
This paper introduces the SCAN domain, consisting of a set of simple compositional navigation commands paired with the corresponding action sequences, and tests the zero-shot generalization capabilities of a variety of recurrent neural networks trained on SCAN with sequence-to-sequence methods.