Visual closed-loop control for pouring liquids
- C. Schenck, D. Fox
- Computer Science · IEEE International Conference on Robotics and…
- 9 October 2016
This paper develops methods for robots to perform closed-loop control for pouring liquids using visual feedback, with both a model-based and a model-free method that use deep learning to estimate the volume of liquid in a container.
SPNets: Differentiable Fluid Dynamics for Deep Neural Networks
- C. Schenck, D. Fox
- Computer Science · Conference on Robot Learning
- 15 June 2018
This paper introduces Smooth Particle Networks (SPNets), a framework for integrating fluid dynamics with deep networks, and shows how this can be successfully used to learn fluid parameters from data, perform liquid control tasks, and learn policies to manipulate liquids.
Grounding semantic categories in behavioral interactions: Experiments with 100 objects
- J. Sinapov, C. Schenck, Kerrick Staley, Vladimir Sukhoy, A. Stoytchev
- Psychology, Computer Science · Robotics Auton. Syst.
- 1 May 2014
See the Glass Half Full: Reasoning About Liquid Containers, Their Volume and Content
- Roozbeh Mottaghi, C. Schenck, D. Fox, Ali Farhadi
- Computer Science · IEEE International Conference on Computer Vision
- 10 January 2017
This paper proposes methods to estimate the volume of containers, approximate the amount of liquid in them, and perform comparative volume estimations, all from a single RGB image, and shows the proposed model's results for predicting the behavior of liquids inside containers when the containers are tilted.
Learning Robotic Manipulation of Granular Media
- C. Schenck, Jonathan Tompson, S. Levine, D. Fox
- Computer Science · Conference on Robot Learning
- 8 September 2017
This paper empirically demonstrates that explicitly predicting physical mechanics results in a policy that outperforms both a hand-crafted dynamics baseline and a "value-network", which must otherwise implicitly predict the same mechanics in order to produce accurate value estimates.
Interactive object recognition using proprioceptive and auditory feedback
- J. Sinapov, T. Bergquist, C. Schenck, Ugonna Ohiri, Shane Griffith, A. Stoytchev
- Computer Science · Int. J. Robotics Res.
- 1 September 2011
The results show that both proprioception and audio, coupled with exploratory behaviors, can be used successfully for object recognition, and that the robot was able to integrate feedback from the two modalities to achieve even better recognition accuracy.
Learning relational object categories using behavioral exploration and multimodal perception
- J. Sinapov, C. Schenck, A. Stoytchev
- Psychology · IEEE International Conference on Robotics and…
- 29 September 2014
By grounding the category representations in its own sensorimotor repertoire, the robot was able to estimate how similar two categories are in terms of the behaviors and sensory modalities that are used to recognize them.
Perceiving and reasoning about liquids using fully convolutional networks
- C. Schenck, D. Fox
- Computer Science · Int. J. Robotics Res.
- 5 March 2017
This paper uses fully convolutional neural networks to learn to detect and track liquids across pouring sequences, shows that these networks are able to perceive and reason about liquids, and finds that integrating temporal information is important to performing such tasks well.
Detection and Tracking of Liquids with Fully Convolutional Networks
- C. Schenck, D. Fox
- Computer Science · ArXiv
- 20 June 2016
The results show that the best liquid detection results are achieved when aggregating data over multiple frames, in contrast to standard image segmentation, and suggest that LSTM-based neural networks have the potential to be a key component for enabling robots to handle liquids using robust, closed-loop controllers.
The Object Pairing and Matching Task : Toward Montessori Tests for Robots
- C. Schenck, A. Stoytchev
- Psychology
- 2012
The Montessori method is a popular approach to education that emphasizes student-directed learning in a controlled environment. Object matching is one common task that children perform in Montessori…