A symbolic-connectionist theory of relational inference and generalization.

@article{Hummel2003AST,
  title={A symbolic-connectionist theory of relational inference and generalization},
  author={John E. Hummel and Keith J. Holyoak},
  journal={Psychological Review},
  year={2003},
  volume={110},
  number={2},
  pages={220--264}
}
The authors present a theory of how relational inference and generalization can be accomplished within a cognitive architecture that is psychologically and neurally realistic. Their proposal is a form of symbolic connectionism: a connectionist system based on distributed representations of concept meanings, using temporal synchrony to bind fillers and roles into relational structures. The authors present a specific instantiation of their theory in the form of a computer simulation model… 
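The abstract's central mechanism, binding role and filler units by temporal synchrony, can be illustrated with a minimal sketch. This is an assumed toy illustration, not the authors' simulation model: each role-filler binding is assigned its own phase (time slice), so the units for a role and its filler fire together while other bindings fire out of phase, which keeps e.g. loves(John, Mary) distinct from loves(Mary, John) even though both use the same units.

```python
def bind_by_synchrony(bindings):
    """Assign each (role, filler) pair its own phase (time slice).

    Units listed together under a phase are 'firing in synchrony';
    units under different phases are desynchronized.
    """
    schedule = {}
    for phase, (role, filler) in enumerate(bindings):
        # role and filler units are active together on this time slice
        schedule[phase] = {role, filler}
    return schedule

# loves(John, Mary): lover+John fire on slice 0, beloved+Mary on slice 1
forward = bind_by_synchrony([("lover", "John"), ("beloved", "Mary")])
# loves(Mary, John): identical units, but a different synchrony pattern
reverse = bind_by_synchrony([("lover", "Mary"), ("beloved", "John")])
```

The point of the sketch is that the two propositions differ only in *which* units co-fire, not in which units exist, which is how a distributed representation can carry symbolic role bindings.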
A theory of the discovery and predication of relational concepts.
TLDR
The authors present a theory of how a psychologically and neurally plausible cognitive architecture can discover relational concepts from examples and represent them as explicit structures (predicates) that can be predicated of arguments.
Relational Reasoning in a Neurally Plausible Cognitive Architecture
TLDR
The LISA model of analogical reasoning represents both relations and objects as patterns of activation distributed over semantic units, integrating these representations into propositional structures using synchrony of firing to provide an a priori account of the limitations of human working memory.
A Symbolic-Connectionist Model of Relation Discovery
TLDR
This work presents a theory of relation discovery instantiated in a symbolic-connectionist model, which learns structured representations of attributes and relations from unstructured distributed representations of objects by a process of comparison, and subsequently refines these representations through a process of mapping-based schema induction.
Reasoning about relations.
TLDR
A novel model-based theory of relational reasoning founded on five principles; computer implementations of the theory are described, and experimental results corroborating its main principle are presented.
An emergent approach to analogical inference
TLDR
This work finds that analogical inference can emerge naturally and spontaneously from a relatively simple, error-driven learning mechanism without the need to posit any additional analogy-specific machinery.
A theory of relation learning and cross-domain generalization.
TLDR
The model's trajectory closely mirrors the trajectory of children as they learn about relations, accounting for phenomena from the literature on the development of children's reasoning and analogy making, and its ability to generalize between domains demonstrates the flexibility afforded by representing domains in terms of their underlying relational structure.
Learning Conceptual Hierarchies by Iterated Relational Consolidation
TLDR
A proposal for this sort of representation construction, founded on reinforcement learning to evaluate the predictive usefulness of higher-order relations, together with a mechanism of relational consolidation by which systems of relations (schemas) can be chunked into unitary entities.
Generative Inferences Based on Learned Relations.
TLDR
It is shown that a bottom-up model of relation learning, initially developed to discriminate between positive and negative examples of comparative relations, can be extended to make generative inferences and is able to make quasi-deductive transitive inferences.
Structural constraints and object similarity in analogical mapping and inference
Theories of analogical reasoning have viewed relational structure as the dominant determinant of analogical mapping and inference, while assigning lesser importance to similarity between individual objects.
Analogical and category-based inference: a theoretical integration with Bayesian causal models.
TLDR
This work proposes a computational theory in the framework of Bayesian inference and test its predictions in a series of experiments in which people were asked to assess the probabilities of various causal predictions and attributions about a target on the basis of source knowledge about generative and preventive causes.
...

References

Showing 1-10 of 178 references
Distributed representations of structure: A theory of analogical access and mapping.
TLDR
An integrated theory of analogical access and mapping, instantiated in a computational model called LISA (Learning and Inference with Schemas and Analogies), suggesting that the architecture of LISA can provide computational explanations of properties of the human cognitive architecture.
From simple associations to systematic reasoning: A connectionist representation of rules, variables and dynamic bindings using temporal synchrony
TLDR
A computational model is described that takes a step toward addressing the cognitive science challenge and resolving the artificial intelligence paradox and shows how a connectionist network can encode millions of facts and rules involving n-ary predicates and variables and perform a class of inferences in a few hundred milliseconds.
Linguistic processes in deductive reasoning.
TLDR
It is proposed that reasoning is accomplished mainly through certain very general linguistic processes, the same mental operations used to solve other types of reasoning problems.
Rethinking Eliminative Connectionism
TLDR
It is shown that the class of eliminative connectionist models that is currently popular cannot learn to extend universals outside the training space, and this limitation might be avoided through the use of an architecture that implements symbol manipulation.
The Proper Treatment of Symbols in a Connectionist Architecture
TLDR
Eliminative connectionism offers a direct challenge to the physical symbol system (PSS) hypothesis, transforming the latter from an axiom of cognitive science into a controversial theoretical position that has been vigorously debated. Regardless of whether models based on distributed representations provide genuine alternatives to physical symbol systems, it is apparent that they have attractive properties as possible algorithmic accounts of cognition.
Induction: Processes of Inference, Learning, and Discovery
TLDR
Induction is the first major effort to bring the ideas of several disciplines to bear on a subject that has been a topic of investigation since the time of Socrates and is included in the Computational Models of Cognition and Perception Series.
Constraints on Analogical Inference
TLDR
This work suggests that people prefer to make inferences of information connected to systematic correspondences between domains, and that violations of one-to-one mapping can lead to inconsistent object substitutions in inference.
The role of textual coherence in incremental analogical mapping
The Algebraic Mind: Integrating Connectionism and Cognitive Science
TLDR
Gary Marcus outlines a variety of ways in which neural systems could be organized so as to manipulate symbols, and he shows why such systems are more likely to provide an adequate substrate for language and cognition than neural systems that are inconsistent with the manipulation of symbols.
Structure-Mapping: A Theoretical Framework for Analogy
...