Functional object-oriented network for manipulation learning

@article{Paulius2016FunctionalON,
  title={Functional object-oriented network for manipulation learning},
  author={David Paulius and Yongqiang Huang and Roger Milton and William D. Buchanan and Jeanine Sam and Yu Sun},
  journal={2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2016},
  pages={2655-2662}
}
This paper presents a novel structured knowledge representation called the functional object-oriented network (FOON) to model the connectivity of functionally related objects and their motions in manipulation tasks. The graphical model FOON is learned by observing object state changes and human manipulations of the objects. Using a well-trained FOON, robots can decipher a task goal, seek the correct objects at the desired states on which to operate, and generate a sequence of proper…
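
The retrieval process the abstract describes (decipher a task goal, find objects in the required states, and derive a manipulation sequence) can be sketched as a naive backward search over "functional units". The Python sketch below is illustrative only, with no cycle handling; the class names, unit shape, and search strategy are assumptions, not the authors' implementation.

    # Minimal FOON-style sketch: functional units map input object states,
    # via a motion, to output object states. All names are illustrative.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ObjectNode:
        name: str   # e.g. "onion"
        state: str  # e.g. "whole", "sliced"

    @dataclass
    class FunctionalUnit:
        inputs: frozenset   # required ObjectNodes
        motion: str         # e.g. "slice"
        outputs: frozenset  # resulting ObjectNodes

    def task_tree(goal, units, known):
        """Naive backward search: return a list of functional units that
        produces `goal` from the object states in `known`, or None."""
        if goal in known:
            return []
        for unit in units:
            if goal in unit.outputs:
                plan = []
                for required in unit.inputs:
                    sub = task_tree(required, units, known)
                    if sub is None:
                        break
                    plan += sub
                else:
                    return plan + [unit]
        return None

    # Example: slicing an onion requires a whole onion and a clean knife.
    units = [FunctionalUnit(
        inputs=frozenset({ObjectNode("onion", "whole"), ObjectNode("knife", "clean")}),
        motion="slice",
        outputs=frozenset({ObjectNode("onion", "sliced"), ObjectNode("knife", "dirty")}),
    )]
    known = {ObjectNode("onion", "whole"), ObjectNode("knife", "clean")}
    print([u.motion for u in task_tree(ObjectNode("onion", "sliced"), units, known)])
    # ['slice']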

Citations

A Weighted Functional Object-Oriented Network for Task Planning
TLDR
This work proposes human-robot collaboration (HRC) as a solution to robotic programming with FOON, showing that tasks can be completed successfully with the aid of a human assistant and instruction from the robot while minimizing the effort needed from the human.
A Road-map to Robot Task Execution with the Functional Object-Oriented Network
TLDR
This work outlines a road-map for the future development of FOON and its application in robotic systems for task planning and knowledge acquisition from demonstration, and proposes preliminary ideas for how a FOON can be created in a real-world scenario by a robot and a human teacher jointly augmenting existing knowledge.
Functional Object-Oriented Network: Considering Robot's Capability in Human-Robot Collaboration
TLDR
This work explores human-robot collaborative planning using the functional object-oriented network, a graphical knowledge representation for manipulations that can be performed by domestic robots, and shows that the best task tree can be found with an adequate chance of success in completing three activities while minimizing the effort needed from the human assistant.
Evaluating Recipes Generated from Functional Object-Oriented Network
TLDR
This preliminary study finds no significant difference between the recipes in Recipe1M+ and the recipes generated from FOON task trees in terms of correctness, completeness, and clarity.
Functional Object-Oriented Network: Construction & Expansion
TLDR
This work builds upon the functional object-oriented network (FOON), a structured knowledge representation constructed from observations of human activities and manipulations, and discusses two means of generalization: expanding the network by using object similarity to create new functional units from existing ones, and compressing functional units by object categories rather than by specific objects.
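
The expansion-by-similarity idea summarized above lends itself to a short sketch: take each known functional unit and substitute similar objects to hypothesize new units. The similarity table and unit encoding below are assumptions for illustration, not the paper's actual data or method.

    # Sketch of FOON expansion via object similarity: swap an object in a
    # known functional unit for a similar one to create a candidate unit.
    SIMILAR = {"onion": ["shallot", "leek"]}  # illustrative similarity table

    def substitute(objs, old, new):
        """Return a copy of the object set with `old` replaced by `new`."""
        return {new if o == old else o for o in objs}

    def expand(units):
        """Hypothesize new functional units by object substitution."""
        new_units = []
        for unit in units:
            for obj in unit["inputs"]:
                for alt in SIMILAR.get(obj, []):
                    new_units.append({
                        "inputs": substitute(unit["inputs"], obj, alt),
                        "motion": unit["motion"],
                        "outputs": substitute(unit["outputs"], obj, alt),
                    })
        return new_units

    units = [{"inputs": {"onion", "knife"}, "motion": "slice",
              "outputs": {"onion", "knife"}}]
    for u in expand(units):
        print(u["motion"], sorted(u["inputs"]), "->", sorted(u["outputs"]))
    # slice ['knife', 'shallot'] -> ['knife', 'shallot']
    # slice ['knife', 'leek'] -> ['knife', 'leek']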
AI Meets Physical World - Exploring Robot Cooking
  • Yu Sun
  • Computer Science
    ArXiv
  • 2018
TLDR
This work describes a recent research effort to bring computer intelligence into the physical world so that robots can perform physically interactive manipulation tasks, including new grasping strategies that allow robots to hold objects firmly enough to withstand disturbances during physical interaction.
Cooking Preparation Knowledge using the Functional Object-Oriented Network
We developed the functional object-oriented network (FOON) as a graphical knowledge representation for manipulations that can be performed by domestic robots. This bipartite representation focuses on…
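
Because FOON is described here as bipartite (edges run only between object nodes and motion nodes, never between two nodes of the same type), a minimal structural check can make the property concrete. The node encoding below is an assumption for illustration.

    # Sanity check for the bipartite property: every edge must connect an
    # "object" node to a "motion" node. Nodes are encoded as (kind, label).
    edges = [
        (("object", "onion/whole"), ("motion", "slice")),
        (("motion", "slice"), ("object", "onion/sliced")),
    ]

    def is_bipartite_foon(edges):
        return all(a[0] != b[0] for a, b in edges)

    print(is_bipartite_foon(edges))  # True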
Generalizable task representation learning from human demonstration videos: a geometric approach
TLDR
This work proposes CoVGS-IL, which uses a graph-structured task function to learn task representations under structural constraints, and enables task generalization by selecting geometric features from different objects whose interconnection relationships define the same task under geometric constraints.
A Survey of Knowledge Representation and Retrieval for Learning in Service Robotics
TLDR
This paper looks at three broad categories involved in task representation and retrieval for robotics: 1) activity recognition from demonstrations, 2) scene understanding and interpretation, and 3) task representation in robotics datasets and networks.

References

Showing 1-10 of 51 references
Object-object interaction affordance learning
Learning the semantics of object–action relations by observation
TLDR
This study introduces a novel representation of the relations between objects at decisive time points during a manipulation, encoding the essential changes in a visual scene in a condensed way so that a robot can recognize and learn a manipulation without prior object knowledge.
Categorizing object-action relations from semantic scene graphs
TLDR
A novel approach for detecting spatiotemporal object-action relations, leading to both action recognition and object categorization, grounded in the affordance principle, which has recently attracted much attention in robotics.
Human-object-object-interaction affordance
This paper presents a novel human-object-object (HOO) interaction affordance learning approach that models the interaction motions between paired objects in a human-object-object way and uses the…
Detection of Manipulation Action Consequences (MAC)
TLDR
This paper proposes that a fundamental concept in understanding such actions is the consequences of those actions, and provides a new dataset, called Manipulation Action Consequences (MAC 1.0), which can serve as a test bed for other studies on this topic.
Visual object-action recognition: Inferring object affordances from human demonstration
Learning the Semantics of Manipulation Action
TLDR
A formal computational framework is presented for modeling manipulation actions based on a Combinatory Categorial Grammar, which leads to a semantics of manipulation actions and has applications both to observing and understanding human manipulation actions and to executing them with a robotic mechanism.
Manipulation action tree bank: A knowledge resource for humanoids
TLDR
It is believed that tree banks are an effective and practical way to organize semantic structures of manipulation actions for humanoid applications, and that they could serve as a basis for automatic manipulation action understanding, execution, reasoning, and prediction during both observation and execution.