Inventing Relational State and Action Abstractions for Effective and Efficient Bilevel Planning

Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tomas Lozano-Perez, Leslie Pack Kaelbling, Joshua B. Tenenbaum
Effective and efficient planning in continuous state and action spaces is fundamentally hard, even when the transition model is deterministic and known. One way to alleviate this challenge is to perform bilevel planning with abstractions, where a high-level search for abstract plans is used to guide planning in the original transition space. In this paper, we develop a novel framework for learning state and action abstractions that are explicitly optimized for both effective (successful) and… 
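The bilevel structure described above can be sketched as two nested loops: an outer search over abstract plans and an inner attempt to refine each candidate into a low-level trajectory. The sketch below is purely illustrative; all names (`abstract_plans`, `refine`, `bilevel_plan`) and the toy domain are assumptions, not the paper's actual API.

```python
# Minimal, hypothetical sketch of bilevel planning: a high-level search over
# abstract plans guides low-level planning in the original space.
from itertools import islice

def abstract_plans(abstract_state, goal):
    """Enumerate candidate abstract plans (placeholder candidates).

    In practice this would be a symbolic planner (e.g., A* over operators).
    """
    yield ["pick(block)", "place(block, table)"]
    yield ["push(block)", "pick(block)", "place(block, table)"]

def refine(plan):
    """Try to turn an abstract plan into a low-level trajectory.

    Returns the trajectory on success, or None if refinement fails
    (e.g., no collision-free motion exists for some step).
    """
    # Placeholder feasibility check: reject plans containing a push step.
    if any(step.startswith("push") for step in plan):
        return None
    return [f"motion_for({step})" for step in plan]

def bilevel_plan(abstract_state, goal, max_candidates=10):
    """Outer loop over abstract plans; inner refinement of each candidate."""
    for plan in islice(abstract_plans(abstract_state, goal), max_candidates):
        trajectory = refine(plan)
        if trajectory is not None:
            return plan, trajectory
    return None  # no abstract plan could be refined

result = bilevel_plan("on(block, floor)", "on(block, table)")
```

The key design point is the feedback between levels: an abstract plan that cannot be refined is discarded and the high-level search moves on, which is what makes the quality of the learned abstractions matter for both effectiveness and efficiency.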
Learning Neuro-Symbolic Skills for Bilevel Planning
The approach — bilevel planning with neuro-symbolic skills — can solve a wide range of tasks with varying initial states, goals, and objects, outperforming six baselines and ablations.
Do As I Can, Not As I Say: Grounding Language in Robotic Affordances
These experiments validate that SayCan can execute temporally extended, complex, and abstract instructions, and that grounding the LLM in the real world via affordances nearly doubles performance over the non-grounded baselines.


Learning Neuro-Symbolic Relational Transition Models for Bilevel Planning
Neuro-Symbolic Relational Transition Models (NSRTs), a novel class of models that are data-efficient to learn, compatible with powerful robotic planning methods, and generalizable over objects, are introduced.
RePReL: Integrating Relational Planning and Reinforcement Learning for Effective Abstraction
The experiments clearly show that the RePReL framework not only achieves better performance and more efficient learning on the task at hand but also demonstrates better generalization to unseen tasks.
Active Learning of Abstract Plan Feasibility
This work presents an active learning approach to efficiently acquire an APF predictor through task-independent, curious exploration on a robot, and leverages an infeasible subsequence property to prune candidate plans in the active learning strategy, allowing the system to learn from less data.
Learning compositional models of robot skills for task and motion planning
This work uses Gaussian process methods for learning the constraints on skill effectiveness from small numbers of expensive-to-collect training examples and develops efficient adaptive sampling methods for generating a comprehensive and diverse sequence of continuous candidate control parameter values during planning.
From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning
The results establish a principled link between high-level actions and abstract representations, a concrete theoretical foundation for constructing abstract representations with provable properties, and a practical mechanism for autonomously learning abstract high-level representations.
Discovering State and Action Abstractions for Generalized Task and Motion Planning
An algorithm for learning features, abstractions, and generalized plans for continuous robotic task and motion planning (TAMP) is proposed, and the unique difficulties that arise when forced to consider geometric and physical constraints as a part of the generalized plan are examined.
CAMPs: Learning Context-Specific Abstractions for Efficient Planning in Factored MDPs
The context-specific abstract Markov decision process (CAMP) is proposed, an abstraction of a factored MDP that affords efficient planning, and a method is described for learning which constraints to impose so that the CAMP optimizes a trade-off between rewards and computational cost.
Learning Symbolic Operators for Task and Motion Planning
This work proposes a bottom-up relational learning method for operator learning and shows how the learned operators can be used for planning in a TAMP system, finding this approach to substantially outperform several baselines, including three graph neural network-based model-free approaches from the recent literature.
Learning Grounded Relational Symbols from Continuous Data for Abstract Reasoning
This paper presents an approach for learning symbolic relational abstractions of geometric features such that these symbols enable a robot to learn abstract transition models and to use them for goal-directed planning of motor primitive sequences.
DeepSym: Deep Symbol Generation and Rule Learning from Unsupervised Continuous Robot Interaction for Planning
A novel and general method is presented that finds action-grounded, discrete object and effect categories and builds probabilistic rules over them for complex action planning; the approach is verified in a physics-based 3D simulation environment.