Corpus ID: 227305888

DeepSym: Deep Symbol Generation and Rule Learning from Unsupervised Continuous Robot Interaction for Planning

@article{Ahmetoglu2020DeepSymDS,
  title={DeepSym: Deep Symbol Generation and Rule Learning from Unsupervised Continuous Robot Interaction for Planning},
  author={Alper Ahmetoglu and M. Yunus Seker and Justus H. Piater and Erhan Oztop and Emre Ugur},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.02532}
}
Autonomous discovery of discrete symbols and rules from continuous interaction experience is a crucial building block of robot AI, but remains a challenging problem. Solving it will overcome the limitations in scalability, flexibility, and robustness of manually-designed symbols and rules, and will constitute a substantial advance towards autonomous robots that can learn and reason at abstract levels in open-ended environments. Towards this goal, we propose a novel and general method that finds… 
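
The abstract above is truncated, but the title points at the paper's core mechanism: a deep encoder whose discretized bottleneck yields object symbols, trained by predicting the effects of actions, with rules later extracted for planning. Purely as an illustrative sketch (the layer sizes, names, and the straight-through binarization below are assumptions, not the authors' exact design), such a symbol-generating effect predictor could look like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StraightThroughBinarize(nn.Module):
    """Threshold activations to {0, 1} in the forward pass; pass gradients straight through."""
    def forward(self, x):
        hard = (x > 0).float()
        return hard + x - x.detach()

class SymbolEffectPredictor(nn.Module):
    """Encoder with a binary bottleneck (the 'symbols') plus a decoder predicting action effects."""
    def __init__(self, obs_dim, n_actions, n_symbols, effect_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_symbols),
            StraightThroughBinarize(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_symbols + n_actions, 128), nn.ReLU(),
            nn.Linear(128, effect_dim),
        )

    def forward(self, obs, action_onehot):
        symbols = self.encoder(obs)                          # discrete per-object code
        effect = self.decoder(torch.cat([symbols, action_onehot], dim=-1))
        return symbols, effect

# Toy training step: regress predicted effects onto observed continuous effects.
model = SymbolEffectPredictor(obs_dim=32, n_actions=4, n_symbols=8, effect_dim=32)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
obs = torch.randn(64, 32)                                    # stand-in object features
actions = F.one_hot(torch.randint(0, 4, (64,)), 4).float()   # stand-in action ids
observed_effect = torch.randn(64, 32)                        # stand-in measured effects
_, predicted_effect = model(obs, actions)
loss = F.mse_loss(predicted_effect, observed_effect)
loss.backward()
optimizer.step()
```

The intent of such a design is that training end-to-end on effect prediction forces the binary code to capture exactly the object distinctions that matter for how actions change the world, which is what makes the resulting symbols usable as planning predicates.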
Classical Planning in Deep Latent Space
TLDR
The proposed LatPlan, an unsupervised architecture combining deep learning and classical planning, is evaluated using image-based versions of 6 planning domains: 8-puzzle, 15-puzzle, Blocksworld, Sokoban, and two variations of LightsOut.
Learning Neuro-Symbolic Relational Transition Models for Bilevel Planning
TLDR
Experiments show that NSRTs can be learned after only tens or hundreds of training episodes, and then used for fast planning in new tasks that require up to 60 actions to reach the goal and involve many more objects than were seen during training.
High-level Features for Resource Economy and Fast Learning in Skill Transfer
TLDR
This work considers two competing methods, one based on the information loss/maximum information compression principle and the other on the notion that abstract events tend to generate slowly changing signals, and applies them to the neural signals generated during task execution, exploiting neural response dynamics to form compact representations for skill transfer.
Automated Generation of Robotic Planning Domains from Observations
TLDR
This paper introduces a novel method for generating executable plans from a single demonstration, achieving a 92% success rate with one demonstration and 100% when information from all demonstrations is included, even for previously unseen stacking goals.

References

Showing 1-10 of 35 references
Bottom-up learning of object categories, action effects and logical rules: From continuous manipulative exploration to symbolic planning
  • Emre Ugur, J. Piater
  • Mathematics, Computer Science
    2015 IEEE International Conference on Robotics and Automation (ICRA)
  • 2015
TLDR
This work aims for bottom-up and autonomous development of symbolic planning operators from continuous interaction experience of a manipulator robot that explores the environment using its action repertoire, and learns categories and rules encoded in Planning Domain Definition Language (PDDL), enabling symbolic planning.
Refining discovered symbols with multi-step interaction experience
  • Emre Ugur, J. Piater
  • Computer Science
    2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)
  • 2015
TLDR
This paper enables the robot to progressively update the previously learned concepts and rules in order to better deal with novel situations that appear during multi-step action executions, and proposes a system that can infer categories of the novel objects based on previously learned rules.
Active Learning for Teaching a Robot Grounded Relational Symbols
TLDR
It is demonstrated that the learned symbols can be used by a robot in a relational RL framework to learn probabilistic relational rules and use them to solve object manipulation tasks in a goal-directed manner.
Classical Planning in Deep Latent Space: Bridging the Subsymbolic-Symbolic Boundary
TLDR
This paper proposes LatPlan, an unsupervised architecture combining deep learning and classical planning, and develops the Action Autoencoder / Discriminator, a neural architecture which jointly finds the action symbols and the implicit action models (preconditions/effects), and provides a successor function for the implicit graph search.
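
The central component of LatPlan's pipeline is a state autoencoder that maps raw images to discrete propositional codes a classical planner can operate on. Below is a minimal sketch of such a state autoencoder using a Gumbel-Softmax (binary-concrete) relaxation so the discrete code stays trainable by gradient descent; the dimensions, temperature, and class name are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryConcreteSAE(nn.Module):
    """State autoencoder: image -> n_props binary propositions -> reconstructed image."""
    def __init__(self, img_dim=28 * 28, n_props=36):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_props * 2))  # logits for {false, true} per proposition
        self.dec = nn.Sequential(nn.Linear(n_props, 256), nn.ReLU(),
                                 nn.Linear(256, img_dim), nn.Sigmoid())
        self.n_props = n_props

    def encode(self, x, temperature=1.0, hard=False):
        logits = self.enc(x).view(-1, self.n_props, 2)
        sample = F.gumbel_softmax(logits, tau=temperature, hard=hard)
        return sample[..., 1]            # assignment of each proposition being "true"

    def forward(self, x, temperature=1.0):
        z = self.encode(x, temperature)
        return z, self.dec(z)

# Usage sketch: reconstruct images while (in a full training loop) annealing the temperature.
sae = BinaryConcreteSAE()
x = torch.rand(16, 28 * 28)              # stand-in image batch
z, recon = sae(x, temperature=0.7)
loss = F.binary_cross_entropy(recon, x)
loss.backward()
```

Annealing the temperature toward zero during training pushes the relaxed samples toward hard 0/1 assignments, so each image ends up described by a fixed-length bit vector that can serve as a propositional state for search.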
Goal emulation and planning in perceptual space using learned affordances
TLDR
It is argued that the proposed learning system shares crucial elements with the development of infants of 7-10 months of age, who explore the environment and learn the dynamics of objects through goal-free exploration.
Representation and Integration: Combining Robot Control, High-Level Planning, and Action Learning
We describe an approach to integrated robot control, high-level planning, and action effect learning that attempts to overcome the representational difficulties that exist between these diverse…
From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning
TLDR
The results establish a principled link between high-level actions and abstract representations, a concrete theoretical foundation for constructing abstract representations with provable properties, and a practical mechanism for autonomously learning abstract high-level representations.
High-level representations through unconstrained sensorimotor learning
TLDR
This paper investigates whether a deep reinforcement learning system that learns a dynamic task would facilitate the formation of high-level neural representations that might be considered precursors of symbolic representation, which could be exploited by higher-level neural circuits for better control and planning.
Using Kernel Perceptrons to Learn Action Effects for Planning
TLDR
This work investigates the problem of learning action effects in STRIPS and ADL planning domains using a kernel perceptron learning model, with a compact vector representation as input to the learning mechanism and the resulting state changes produced as output.
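
As a point of reference for the kernel perceptron approach mentioned above, the sketch below implements a plain binary kernel perceptron; in an action-effect setting, one such classifier per effect literal would map a vector encoding of the pre-state to whether that literal changes. The RBF kernel and synthetic data are placeholders, not the paper's setup.

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    """Gaussian (RBF) kernel between two feature vectors."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

class KernelPerceptron:
    """Mistake-driven binary kernel perceptron; one classifier per effect literal."""
    def __init__(self, kernel=rbf_kernel):
        self.kernel = kernel
        self.support = []   # (label, stored example) pairs added on mistakes

    def predict(self, x):
        score = sum(label * self.kernel(sx, x) for label, sx in self.support)
        return 1 if score >= 0 else -1

    def fit(self, X, y, epochs=10):
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if self.predict(xi) != yi:      # store the example only when misclassified
                    self.support.append((yi, xi))

# Toy usage: predict whether a single effect literal becomes true after an action,
# from a compact vector encoding of the pre-state (synthetic placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
clf = KernelPerceptron()
clf.fit(X, y)
print(clf.predict(X[0]))
```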
Experiments in Subsymbolic Action Planning with Mobile Robots
TLDR
This work presents a subsymbolic planning mechanism that uses a non-symbolic representation of sensor-action space, learned through the agent’s autonomous interaction with the environment.