Perceiving and reasoning about liquids using fully convolutional networks

@article{Schenck2018PerceivingAR,
  title={Perceiving and reasoning about liquids using fully convolutional networks},
  author={Connor Schenck and Dieter Fox},
  journal={The International Journal of Robotics Research},
  year={2018},
  volume={37},
  pages={452--471}
}
  • C. Schenck, D. Fox
  • Published 5 March 2017
  • Computer Science
  • The International Journal of Robotics Research
Liquids are an important part of many common manipulation tasks in human environments. If we wish to have robots that can accomplish these types of tasks, they must be able to interact with liquids in an intelligent manner. In this paper, we investigate ways for robots to perceive and reason about liquids. That is, a robot asks the questions "What in the visual data stream is liquid?" and "How can I use that to infer all the potential places where liquid might be?" We collected two data sets to… 
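As a rough illustration of the per-pixel labeling idea behind a fully convolutional liquid detector, here is a minimal NumPy sketch. The kernels, sizes, and two-layer structure are invented for the example; the paper's actual networks are deep CNNs trained on the collected data sets.

```python
import numpy as np

def conv2d(x, k):
    """Single-channel 2D convolution with zero padding ('same' output size)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def fcn_liquid_scores(frame, kernels):
    """Toy fully convolutional pipeline: stacked conv + ReLU layers, then a
    sigmoid to produce a per-pixel 'is liquid' probability map that has the
    same spatial size as the input frame (no flattening, no dense layers)."""
    h = frame
    for k in kernels[:-1]:
        h = np.maximum(conv2d(h, k), 0.0)  # conv + ReLU
    logits = conv2d(h, kernels[-1])        # final 1x1-style conv
    return 1.0 / (1.0 + np.exp(-logits))   # sigmoid -> probabilities

rng = np.random.default_rng(0)
frame = rng.random((16, 16))  # stand-in for a grayscale camera frame
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(2)] + [np.ones((1, 1))]
probs = fcn_liquid_scores(frame, kernels)
print(probs.shape)  # (16, 16): one liquid probability per pixel
```

Because every layer is a convolution, the same network applies to frames of any resolution, which is what makes the fully convolutional formulation natural for pixel-wise liquid detection.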
3D Neural Scene Representations for Visuomotor Control
TLDR
This work shows that a dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks involving both rigid bodies and fluids, where the target is specified in a viewpoint different from what the robot operates on.
Going Beyond Images: 3D-Aware Representation Learning for Visuomotor Control
TLDR
This work shows that a dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks involving both rigid bodies and fluids, where the target is specified in a viewpoint different from what the robot operates on.
A Review of Robot Learning for Manipulation: Challenges, Representations, and Algorithms
TLDR
A formalization of the robot manipulation learning problem is described that synthesizes existing research into a single coherent framework and highlights the many remaining research opportunities and challenges.
Physics-informed Reinforcement Learning for Perception and Reasoning about Fluids
Learning and reasoning about physical phenomena is still a challenge in robotics development, and computational sciences play a capital role in the search for accurate methods able to provide…
Physics perception in sloshing scenes with guaranteed thermodynamic consistency
TLDR
This work proposes a strategy to learn the full state of sloshing liquids from measurements of the free surface based on recurrent neural networks that project the limited information available to a reduced-order manifold so as to not only reconstruct the unknown information, but also to be capable of performing fluid reasoning about future scenarios in real time.
Robust Robotic Pouring using Audition and Haptics
TLDR
A multimodal pouring network (MP-Net) is proposed that robustly predicts liquid height by conditioning on both audition and haptics input, and that is robust against noise and changes to the task and environment.
Making Sense of Audio Vibration for Liquid Height Estimation in Robotic Pouring
TLDR
This paper proposes to make use of audio vibration sensing and designs a deep neural network, PouringNet, to predict the liquid height from the audio fragment during the robotic pouring task, facilitating more robust and accurate audio-based perception for robotic pouring.
Haptic Perception of Liquids Enclosed in Containers
TLDR
A complementary method for liquid perception via haptic sensing that achieves error margins of less than 1 g and 2 mL for an unknown liquid in a 600 mL cylindrical container and can predict the viscosity of fluids with an accuracy of 98%.
SPNets: Differentiable Fluid Dynamics for Deep Neural Networks
TLDR
This paper introduces Smooth Particle Networks (SPNets), a framework for integrating fluid dynamics with deep networks, and shows how this can be successfully used to learn fluid parameters from data, perform liquid control tasks, and learn policies to manipulate liquids.
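As a toy illustration of the "learn fluid parameters from data" idea, the sketch below fits a single parameter by differentiating through a simulator. Note the hedges: SPNets itself implements Position Based Fluids with analytic gradients inside deep networks, whereas this example uses a made-up linear-drag particle model and finite-difference gradients purely to show the fitting loop.

```python
import numpy as np

def simulate(positions, velocities, drag, dt=0.01, steps=50):
    """Toy particle simulator: gravity plus a learnable linear drag term,
    integrated with semi-implicit Euler. Stands in for a fluid simulator
    whose parameters we want to recover from observed motion."""
    p, v = positions.copy(), velocities.copy()
    g = np.array([0.0, -9.8])
    for _ in range(steps):
        v = v + dt * (g - drag * v)
        p = p + dt * v
    return p

# Recover the drag coefficient from observed final positions.
p0 = np.zeros((4, 2))
v0 = np.tile([1.0, 0.0], (4, 1))
target = simulate(p0, v0, drag=0.7)  # pretend these are real observations

drag = 0.1  # initial guess
for _ in range(200):
    eps = 1e-4
    loss = lambda d: np.mean((simulate(p0, v0, d) - target) ** 2)
    grad = (loss(drag + eps) - loss(drag - eps)) / (2 * eps)  # finite diff
    drag -= 5.0 * grad  # gradient descent on the simulation loss
print(round(drag, 2))  # recovers a value close to the true drag of 0.7
```

The point of making the simulator differentiable (analytically, in SPNets' case) is exactly this: simulation error can be backpropagated to physical parameters or to an upstream control policy.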

References

Showing 1–10 of 54 references
Towards Learning to Perceive and Reason About Liquids
TLDR
This paper applies fully-convolutional deep neural networks to the tasks of detecting and tracking liquids and shows that the best liquid detection results are achieved when aggregating data over multiple frames and that the LSTM network outperforms the other two in both tasks.
Humans predict liquid dynamics using probabilistic simulation
TLDR
This work finds evidence that people’s reasoning about how liquids move is consistent with a computational cognitive model based on approximate probabilistic simulation, and extends this thesis to the more complex and unexplored domain of reasoning about liquids.
Towards Adapting Deep Visuomotor Representations from Simulated to Real Environments
TLDR
This work proposes a novel domain adaptation approach for robot perception that adapts visual representations learned on a large easy-to-obtain source dataset to a target real-world domain, without requiring expensive manual data annotation of real world data before policy search.
Adapting Deep Visuomotor Representations with Weak Pairwise Constraints
TLDR
This work proposes a novel domain adaptation approach for robot perception that adapts visual representations learned on a large easy-to-obtain source dataset to a target real-world domain, without requiring expensive manual data annotation of real world data before policy search.
"What Happens If..." Learning to Predict the Effect of Forces in Images
TLDR
A deep neural network model is designed that learns long-term sequential dependencies of object movements while taking into account the geometry and appearance of the scene by combining Convolutional and Recurrent Neural Networks.
Visual closed-loop control for pouring liquids
  • C. Schenck, D. Fox
  • Computer Science
    2017 IEEE International Conference on Robotics and Automation (ICRA)
  • 2017
TLDR
This paper develops methods for robots to use visual feedback to perform closed-loop control for pouring liquids, using both a model-based and a model-free method that utilize deep learning to estimate the volume of liquid in a container.
End-to-End Training of Deep Visuomotor Policies
TLDR
This paper develops a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors, trained using a partially observed guided policy search method, with supervision provided by a simple trajectory-centric reinforcement learning method.
Reasoning About Liquids via Closed-Loop Simulation
TLDR
The results show that closed-loop simulation is an effective way to prevent large divergence between the simulated and real liquid states, and can enable reasoning about liquids that would otherwise be infeasible due to large divergences, such as reasoning about occluded liquid.
Physics for infants: characterizing the origins of knowledge about objects, substances, and number.
TLDR
The evidence supports the view that certain core principles about objects, substances, and number concepts in infancy are present as early as the authors can test for them and the nature of the underlying representation is best characterized as primitive initial concepts that are elaborated and refined through learning and experience.
Designing robot learners that ask good questions
  • M. Cakmak, A. Thomaz
  • Computer Science
    2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
  • 2012
TLDR
This paper identifies three types of questions (label, demonstration and feature queries) and discusses how a robot can use these while learning new skills and provides guidelines for designing question asking behaviors on a robot learner.