Corpus ID: 231846703

Symbolic Behaviour in Artificial Intelligence

@article{Santoro2021SymbolicBI,
  title={Symbolic Behaviour in Artificial Intelligence},
  author={Adam Santoro and Andrew Kyle Lampinen and Kory Wallace Mathewson and Timothy P. Lillicrap and David Raposo},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.03406}
}
The human ability to use symbols has yet to be replicated in machines. Bridging the gap requires considering how symbol meaning is established: if it is symbol users who agree upon symbol meaning, then symbol use comprises behaviours that navigate agreements about meaning. We leverage this insight to articulate graded symbolic behaviours, including constructing new symbols, altering prior ones, and introspecting about meaning and reasoning processes. We then evaluate contemporary AI methods…

Intensional Artificial Intelligence: From Symbol Emergence to Explainable and Empathetic AI

It is argued that an explainable artificial intelligence must possess a rationale for its decisions, be able to infer the purpose of observed behaviour, and be able to explain its decisions in the context of what its audience understands and intends. A theory of meaning is proposed in which an agent should model the world a language describes rather than the language itself.

Language and culture internalization for human-like autotelic AI

This work proposes Vygotskian autotelic agents — agents able to internalise their interactions with others and turn them into cognitive tools. It focuses on language, showing how its structure and informational content may support the development of new cognitive functions in artificial agents as it does in humans.

Explanatory Learning: Beyond Empiricism in Neural Networks

Odeen, a basic Explanatory Learning (EL) environment that simulates a small flatland-style universe full of phenomena to explain, is introduced, and it is shown how Critical Rationalist Networks (CRNs) outperform empiricist end-to-end approaches of similar size and architecture (Transformers) in discovering explanations for novel phenomena.

Meta-Referential Games to Learn Compositional Learning Behaviours

A novel benchmark is proposed to investigate agents’ abilities to exhibit compositional learning behaviours (CLBs) by solving a domain-agnostic version of the binding problem (BP). To build this benchmark, named the Symbolic Behaviour Benchmark (S2B), a meta-learning extension of referential games, entitled Meta-Referential Games, is proposed.

Meaning without reference in large language models

The widespread success of large language models (LLMs) has been met with skepticism that they possess anything like human concepts or meanings. Contrary to claims that LLMs possess no meaning…

Language models show human-like content effects on reasoning

This work hypothesized that language models would show human-like content effects on abstract reasoning problems, and explored this hypothesis across three logical reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task.

Relational reasoning and generalization using non-symbolic neural networks

This work finds neural networks are able to learn basic equality (mathematical identity), sequential equality problems, and a complex, hierarchical equality problem with only basic equality training instances ("zero-shot" generalization).

Symbol Emergence and The Solutions to Any Task

It is argued that an agent which always constructs what is called an Intensional Solution would qualify as artificial general intelligence, and how natural language may emerge and be acquired by such an agent, conferring the ability to model the intent of other individuals labouring under similar compulsions.

Towards Teachable Autonomous Agents

The purpose of this paper is to elucidate the key obstacles standing in the way of designing teachable and autonomous agents, focusing on autotelic agents, i.e. agents equipped with forms of intrinsic motivation that enable them to represent, self-generate, and pursue their own goals.

Tell me why! - Explanations support learning of relational and causal structure

It is shown that language can help agents learn challenging relational tasks, and an examination of which aspects of language contribute to its benefits suggests that language descriptions and explanations may be powerful tools for improving agent learning and generalization.

References

Showing 1–10 of 139 references

Situated Action: A Symbolic Interpretation

It is proposed that the goals set forth by the proponents of SA can be attained only within the framework of symbolic systems, and the main body of empirical evidence supporting this view resides in the numerous symbol systems constructed in the past 35 years that have successfully simulated broad areas of human cognition.

Symbol Emergence in Cognitive Developmental Systems: A Survey

The notion of a symbol in semiotics, drawn from the humanities, is introduced in order to move beyond the very narrow idea of symbols in symbolic AI, and the challenges facing the creation of cognitive systems that can be part of symbol emergence systems are surveyed.

Emergence in Cognitive Science

The study of human intelligence was once dominated by symbolic approaches, but over the last 30 years an alternative approach has arisen, and a wide range of constructs in cognitive science can be understood as emergents.

Analysing Mathematical Reasoning Abilities of Neural Models

This paper conducts a comprehensive analysis of models from two broad classes of the most powerful sequence-to-sequence architectures and finds notable differences in their ability to resolve mathematical problems and generalize their knowledge.

Placing language in an integrated understanding system: Next steps toward human-level performance in neural language models

This work describes the organization of the brain’s distributed understanding system, which includes a fast learning system that addresses the memory problem and sketches a framework for future models of understanding drawing equally on cognitive neuroscience and artificial intelligence and exploiting query-based attention.

Imitating Interactive Intelligence

The results in this virtual environment provide evidence that large-scale human behavioural imitation is a promising tool for creating intelligent, interactive agents, and that the challenge of reliably evaluating such agents can be surmounted.

Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning

This work seeks a lightweight, training-free means of improving existing System 1-like sequence models by adding System 2-inspired logical reasoning and shows that this approach can increase the coherence and accuracy of neurally-based generations.

Grounded Language Learning in a Simulated 3D World

An agent is presented that learns to interpret language in a simulated 3D environment where it is rewarded for the successful execution of written instructions and its comprehension of language extends beyond its prior experience, enabling it to apply familiar language to unfamiliar situations and to interpret entirely novel instructions.

Representation without symbol systems

A different approach to understanding psychological processes is explored, one that retains a commitment to the idea that the brain uses symbols to store and use information.

Convention-formation in iterated reference games

Results from a large-scale, multi-player replication of the classic tangrams task are presented, focusing on three foundational properties of conventions: arbitrariness, stability, and reduction of utterance length over time, which motivate a theory of convention-formation where agents assume others are using language with such knowledge.
...