Understanding Physical Effects for Effective Tool-Use

Zeyu Zhang, Ziyuan Jiao, Weiqi Wang, Yixin Zhu, Song-Chun Zhu, Hangxin Liu. IEEE Robotics and Automation Letters.
We present a robot learning and planning framework that produces an effective tool-use strategy with the least joint effort and can handle objects that differ from those seen in training. Leveraging a Finite Element Method (FEM)-based simulator that reproduces fine-grained, continuous visual and physical effects of observed tool-use events, the essential physical properties contributing to those effects are identified through the proposed Iterative Deepening Symbolic Regression (IDSR) algorithm. We…
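The paper does not spell out IDSR's internals in this excerpt, but the name suggests a symbolic-regression search that deepens the candidate-expression space only when shallower expressions fail to explain the observed effect. A minimal, hypothetical sketch of that idea (the primitive set, fitness measure, and search order here are assumptions, not the authors' algorithm):

```python
# Illustrative sketch of iterative-deepening symbolic regression.
# NOT the paper's IDSR: primitives, scoring, and depth policy are assumed.
import itertools
import math

# Assumed primitive set; the paper's actual primitives are unknown.
UNARY = {"sq": lambda x: x * x, "sqrt": lambda x: math.sqrt(abs(x))}
BINARY = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def enumerate_exprs(variables, depth):
    """Yield (name, fn) pairs for expression trees up to the given depth."""
    if depth == 0:
        for v in variables:
            yield v, (lambda row, v=v: row[v])
        return
    shallower = list(enumerate_exprs(variables, depth - 1))
    for name, fn in shallower:
        yield name, fn  # re-yield shallower expressions
        for uname, u in UNARY.items():
            yield f"{uname}({name})", (lambda row, u=u, fn=fn: u(fn(row)))
    for (n1, f1), (n2, f2) in itertools.combinations_with_replacement(shallower, 2):
        for bname, b in BINARY.items():
            yield (f"{bname}({n1},{n2})",
                   lambda row, b=b, f1=f1, f2=f2: b(f1(row), f2(row)))

def idsr_sketch(data, target, variables, max_depth=2, tol=1e-6):
    """Iterative deepening: prefer the shallowest expression that fits the
    observed effect within tolerance; deepen only when none does."""
    best = None
    for depth in range(max_depth + 1):
        for name, fn in enumerate_exprs(variables, depth):
            err = sum((fn(row) - y) ** 2 for row, y in zip(data, target))
            if best is None or err < best[0]:
                best = (err, name)
        if best[0] <= tol:
            return best[1]
    return best[1]

# Toy example: the "physical effect" is a momentum-like quantity m * v.
rows = [{"m": m, "v": v} for m in (1.0, 2.0, 3.0) for v in (0.5, 1.0, 2.0)]
ys = [r["m"] * r["v"] for r in rows]
print(idsr_sketch(rows, ys, ["m", "v"]))  # prints "mul(m,v)"
```

Because shallower depths are exhausted first, the search returns the simplest expression consistent with the data, which mirrors the parsimony motivation behind iterative deepening.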

KETO: Learning Keypoint Representations for Tool Manipulation

The KETO framework is presented: a framework for learning keypoint representations for tool-based manipulation that consistently outperforms state-of-the-art methods in terms of task success rate.

A Causal Approach to Tool Affordance Learning

This work introduces a method for a robot to learn an explicit model of cause-and-effect by constructing a structural causal model through a mix of observation and self-supervised experimentation, allowing a robot to reason from causes to effects and from effects to causes.

Autonomous Tool Construction Using Part Shape and Attachment Prediction

This work introduces an approach that enables the robot to construct a wider range of tools with greater computational efficiency by generating a ranking of part combinations that the robot then uses to construct and test the target tool.

TANGO: Commonsense Generalization in Predicting Tool Interactions for Mobile Manipulators

This work introduces TANGO, a novel neural model for predicting task-specific tool interactions, trained using demonstrations from human teachers instructing a virtual robot in a physics simulator; augmenting the environment representation with pre-trained embeddings derived from a knowledge base allows the model to generalize effectively to novel environments.

Efficient Task Planning for Mobile Manipulation: a Virtual Kinematic Chain Perspective

A Virtual Kinematic Chain perspective, a simple yet effective method, improves task planning efficacy for mobile manipulation by consolidating the kinematics of the mobile base, the arm, and the object being manipulated as a single whole; this consolidation naturally defines abstract actions and eliminates unnecessary predicates for describing intermediate poses.

Tool-body assimilation model considering grasping motion through deep learning

Force-and-Motion Constrained Planning for Tool Use

This paper evaluates the impact of the various constraints in some representative instances of tool use, suggesting these instances can serve as the basis of a benchmark problem for investigating tasks that involve many kinematic, actuation, friction, and environment constraints.

How to Select and Use Tools? : Active Perception of Target Objects Using Multimodal Deep Learning

A deep neural networks model is constructed that learns to recognize object characteristics, acquires tool–object–action relations, and generates motions for tool selection and handling, and it is shown that learning a variety of multimodal information results in rich perception for tool use.

Consolidating Kinematic Models to Promote Coordinated Mobile Manipulations

A Virtual Kinematic Chain (VKC) is constructed that readily consolidates the kinematics of the mobile base, the arm, and the object to be manipulated in mobile manipulation; results show that VKC-based joint modeling and planning improve task success rates and produce more efficient trajectories.

A cognitive robot equipped with autonomous tool innovation expertise

A method is proposed for learning how to use an object as a tool and, if needed, for designing and constructing a new tool; experiments show that the system performs tool creation successfully.