Corpus ID: 15641244

The Surprise-Based Learning Algorithm

@inproceedings{Ranasinghe2008TheSL,
  title={The Surprise-Based Learning Algorithm},
  author={N. Ranasinghe and Wei-Min Shen},
  year={2008}
}
This paper presents a learning algorithm known as surprise-based learning (SBL) capable of providing a physical robot the ability to autonomously learn and plan in an unknown environment without any prior knowledge of its actions or their impact on the environment. This is achieved by creating a model of the environment using prediction rules. A prediction rule describes the observations of the environment prior to the execution of an action and the forecasted or predicted observation of the…
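The prediction rules named in the abstract pair the observations seen before an action with the observation the action is forecast to produce, and a mismatch at run time constitutes a surprise. The sketch below is a minimal Python illustration of that idea; the class name, the feature-dictionary percept format, and the equality-based surprise test are assumptions for illustration, not the paper's implementation.

# Minimal sketch of a prediction rule: preconditions observed before an
# action, plus the observation the action is predicted to produce.
# Field names and the equality-based surprise test are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PredictionRule:
    condition: dict   # features observed before acting, e.g. {"front_sonar": "near"}
    action: str       # the action the rule is about, e.g. "forward"
    prediction: dict  # features the rule forecasts after the action

    def applies(self, observation: dict) -> bool:
        # The rule is applicable when every condition feature matches the current observation.
        return all(observation.get(k) == v for k, v in self.condition.items())

    def surprised(self, new_observation: dict) -> bool:
        # A surprise occurs when any predicted feature disagrees with what is actually observed.
        return any(new_observation.get(k) != v for k, v in self.prediction.items())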


Surprise-Based Learning for Developmental Robotics
  • N. Ranasinghe, Wei-Min Shen
  • Computer Science
    2008 ECSIS Symposium on Learning and Adaptive Behaviors for Robotic Systems (LAB-RS)
  • 2008
This paper presents a learning algorithm called surprise-based learning (SBL) capable of providing a physical robot the ability to autonomously learn and plan in an unknown environment without any prior knowledge of its actions or their impact on the environment.
Surprise-Based Learning for Autonomous Systems
Abstract: Dealing with unexpected situations is a key challenge faced by autonomous robots. This paper describes a promising solution to this challenge called Surprise-Based Learning (SBL). …
Surprise-based developmental learning and experimental results on robots
This paper describes a promising approach in which a learner robot engages in a cyclic learning process consisting of “prediction, action, observation, analysis (of surprise) and adaptation”.
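The cyclic process quoted in this summary (prediction, action, observation, analysis of surprise, adaptation) can be pictured as a simple control loop. The sketch below assumes the PredictionRule class from the earlier example; sense, choose_action, execute, and adapt_rules are hypothetical robot-specific callbacks, not functions from the paper.

# Illustrative surprise-based learning loop following the cycle named above:
# predict, act, observe, analyse surprises, adapt the rule set.
def sbl_loop(rules, sense, choose_action, execute, adapt_rules, steps=100):
    for _ in range(steps):
        observation = sense()                        # observe the environment
        action = choose_action(rules, observation)   # pick an action (e.g. exploratory)
        matching = [r for r in rules
                    if r.action == action and r.applies(observation)]
        execute(action)                              # act on the environment
        new_observation = sense()                    # observe the outcome
        surprises = [r for r in matching if r.surprised(new_observation)]
        if surprises:
            # adapt: refine or replace the rules whose predictions failed
            rules = adapt_rules(rules, surprises, observation, action, new_observation)
    return rules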
A modular robot architecture capable of learning to move and be automatically reconfigured
Results show that well-coordinated movements are important for sensor-based controllers to improve during adaptation, and play a major part in allowing a robot to adapt its movement to different environments.
Self-Reconfigurable Robots for Adaptive and Multifunctional Tasks
Abstract: Self-reconfigurable modular robots are metamorphic systems that can autonomously change their logical or physical configurations (such as shapes, sizes, or formations), as well as their …

References

Showing 1-10 of 23 references
Map Learning with Uninterpreted Sensors and Effectors
Evolving internal reinforcers for an intrinsically motivated reinforcement-learning robot
This paper proposes a hierarchical reinforcement-learning architecture that exploits evolutionary robotics techniques and uses neural networks, allowing the system to autonomously discover "salient events" and to cope with continuous states and noisy environments.
Rule Creation and Rule Learning Through Environmental Exploration
This paper reports an approach in which exploration, rule creation and rule learning are coordinated in a single framework, which creates STRIPS-like rules by noticing the changes in the environment when actions are taken, and later refines the rules by explaining the failures of their predictions.
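This summary mentions refining rules by explaining the failures of their predictions; one common way to sketch such refinement is to specialize a rule with a condition that separates a situation where it succeeded from one where it failed. The helper below is a hypothetical illustration of that idea, reusing the PredictionRule sketch from above; it is not the method of the cited paper.

# Hypothetical sketch of refining a STRIPS-like rule after a prediction failure:
# add the first feature that distinguishes a successful situation from the
# failing one as an extra precondition of the rule.
def refine_rule(rule, failing_observation, succeeding_observation):
    for feature, value in succeeding_observation.items():
        if failing_observation.get(feature) != value:
            refined_condition = dict(rule.condition)
            refined_condition[feature] = value
            return PredictionRule(refined_condition, rule.action, dict(rule.prediction))
    return rule  # no distinguishing feature found; leave the rule unchanged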
Discovery as Autonomous Learning from the Environment
Discovery involves collaboration among many intelligent activities. However, little is known about how and in what form such collaboration occurs. In this article, a framework is proposed for …
Towards autonomous sensor and actuator model induction on a mobile robot
This article presents ASAMI (autonomous sensor and actuator model induction), a novel methodology for a robot to autonomously induce models of its actions and sensors, and shows how a robot can induce these models without any well-calibrated feedback.
Intrinsically Motivated Reinforcement Learning: A Promising Framework for Developmental Robot Learning
This paper suggests that with its emphasis on task-general, self-motivated, and hierarchical learning, intrinsically motivated reinforcement learning is an obvious choice for organizing behavior in developmental robotics.
Novelty and Reinforcement Learning in the Value System of Developmental Robots
The value system of a developmental robot signals the occurrence of salient sensory inputs, modulates the mapping from sensory inputs to action outputs, and evaluates candidate actions. In the work …
Reinforcement Learning: A Survey
Central issues of reinforcement learning are discussed, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state.
Neo: learning conceptual knowledge by sensorimotor interaction with an environment
It is shown how classes (categories) can be abstracted from these representations, and how the representation might be extended to express physical schemas: general, domain-independent activities that could be the building blocks of concept formation.
Evolutionary robotics: The biology, intelligence, and technology of self-organizing machines