Learning Fast and Slow: Levels of Learning in General Autonomous Intelligent Agents

@inproceedings{Laird2018LearningFA,
  title={Learning Fast and Slow: Levels of Learning in General Autonomous Intelligent Agents},
  author={John E. Laird and Shiwali Mohan},
  booktitle={AAAI},
  year={2018}
}
We propose two distinct levels of learning for general autonomous intelligent agents. Level 1 consists of fixed architectural learning mechanisms that are innate and automatic. Level 2 consists of deliberate learning strategies that are controlled by the agent's knowledge. We describe these levels and provide an example of their use in a task-learning agent. We also explore other potential levels and discuss the implications of this view of learning for the design of autonomous agents.  
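The abstract's distinction between the two levels can be illustrated with a minimal sketch. This is not the paper's implementation; all names here (`Agent`, `perceive`, `deliberate_rehearse`) are hypothetical, chosen only to contrast an always-on architectural mechanism with a strategy the agent chooses to apply:

```python
# Illustrative sketch, assuming a toy agent. Level 1 is an innate,
# automatic mechanism that runs on every experience; Level 2 is a
# deliberate strategy invoked only when the agent's knowledge says so.

class Agent:
    def __init__(self):
        self.counts = {}  # Level 1: automatic frequency statistics
        self.ltm = {}     # Level 2: deliberately rehearsed knowledge

    def perceive(self, item):
        # Level 1: fires unconditionally on every input, outside the
        # agent's control -- the agent cannot choose to skip it.
        self.counts[item] = self.counts.get(item, 0) + 1

    def deliberate_rehearse(self, item):
        # Level 2: a learning strategy the agent selects deliberately,
        # e.g. "rehearse items I expect to need later".
        self.ltm[item] = True

agent = Agent()
for obs in ["red", "blue", "red"]:
    agent.perceive(obs)                 # Level 1 runs every time
    if obs == "red":                    # knowledge-driven choice
        agent.deliberate_rehearse(obs)  # Level 2 runs only when chosen
```

After the loop, the automatic mechanism has recorded every observation (`counts == {"red": 2, "blue": 1}`), while the deliberate strategy stored only what the agent chose to rehearse (`ltm == {"red": True}`) -- the contrast the abstract draws between innate mechanisms and knowledge-controlled strategies.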