Corpus ID: 35296162

Robot-Initiated Specification Repair through Grounded Language Interaction

@article{Boteanu2017RobotInitiatedSR,
  title={Robot-Initiated Specification Repair through Grounded Language Interaction},
  author={Adrian Boteanu and Jacob Arkin and Siddharth Patki and Thomas M. Howard and Hadas Kress-Gazit},
  journal={ArXiv},
  year={2017},
  volume={abs/1710.01417}
}
Robots are required to execute increasingly complex instructions in dynamic environments, which can lead to a disconnect between the user's intent and the robot's representation of the instructions. In this paper we present a natural language instruction grounding framework which uses formal synthesis to enable the robot to identify necessary environment assumptions for the task to be successful. These assumptions are then expressed via natural language questions referencing objects in the… 
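The abstract describes a pipeline in which the robot grounds an instruction into a formal specification, uses synthesis to detect environment assumptions the specification is missing, and phrases those assumptions back to the user as grounded natural-language questions. The following is a minimal illustrative sketch of that loop only; the `Spec` class, proposition names, and coverage check are invented for illustration and stand in for the unrealizable cores a real synthesis tool would report — this is not the authors' actual grounding or GR(1) synthesis machinery.

```python
# Hypothetical sketch of a robot-initiated specification-repair loop.
# All names and the "missing assumption" check are illustrative inventions.

from dataclasses import dataclass


@dataclass
class Spec:
    env_assumptions: set   # environment propositions the spec already assumes
    requirements: list     # (task requirement, env propositions it depends on)


def find_missing_assumptions(spec):
    """Return (requirement, proposition) pairs where a requirement depends on
    an environment proposition the specification never assumes -- a stand-in
    for the unrealizable cores a synthesis tool would identify."""
    missing = []
    for req, deps in spec.requirements:
        for prop in deps:
            if prop not in spec.env_assumptions:
                missing.append((req, prop))
    return missing


def repair_questions(spec, descriptions):
    """Phrase each missing assumption as a grounded natural-language question."""
    return ["Can I assume that %s while I %s?" % (descriptions[prop], req)
            for req, prop in find_missing_assumptions(spec)]


spec = Spec(
    env_assumptions={"door_open"},
    requirements=[("deliver the cup to the kitchen",
                   ["door_open", "path_clear"])],
)
descriptions = {"door_open": "the kitchen door stays open",
                "path_clear": "the hallway stays clear"}
print(repair_questions(spec, descriptions))
# → ['Can I assume that the hallway stays clear while I deliver the cup to the kitchen?']
```

In the paper's framework the missing assumptions come from formal synthesis over the grounded specification rather than a set-membership check, but the interaction pattern — detect what the task implicitly requires of the environment, then ask the user about it in grounded language — is the same.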


Natural Language Interaction with Synthesis-Based Control for Simulated Free-Flying Robots
TLDR: The ability to inject declarative knowledge into a system that uses grounded language to generate specifications for reactive controllers is demonstrated via experiments conducted with the Astrobee platform in a simulated space station.
Grounded Language Learning: Where Robotics and NLP Meet
TLDR: Gives an overview of the research area, selected recent advances, and some future directions and remaining challenges.
Formal Dialogue Model for Language Grounding Error Recovery
TLDR: This work addresses language grounding errors and robot mistakes by enabling the robot to ask questions that differentiate between the k most likely groundings found by beam search; the beam search, maximal semantic differencing, and user clarification components are evaluated separately, then extrapolated to estimate the performance and accuracy of the dialogue model as an end-to-end system in practice.
Enabling Fast Instruction-Based Modification of Learned Robot Skills
TLDR: A skill modification framework is introduced that allows users to quickly modify a robot’s stored skills through instructions, reducing inefficiencies, fixing errors, and enabling generalizations, with modified skills immediately available for task performance.
Iterative Repair of Social Robot Programs from Implicit User Feedback via Bayesian Inference
TLDR: This work proposes the use of iterative program repair, where programmers create an initial program sketch in the new Social Robot Program Transition Sketch Language (SoRTSketch), a domain-specific language that supports expressing uncertainties related to thresholds in transition functions.
Robots That Use Language
TLDR: This article surveys the use of natural language in robotics from a robotics point of view, where robots must map words to aspects of the physical world mediated by the robot's brain.
Spoken Language Interaction with Robots: Research Issues and Recommendations, Report from the NSF Future Directions Workshop
TLDR: This report identifies key scientific and engineering advances needed to enable robots to communicate in new environments, for new tasks, and with diverse user populations, without extensive re-engineering or the collection of massive training data.
Specifying and Interpreting Reinforcement Learning Policies through Simulatable Machine Learning
TLDR: A novel collaborative framework, rooted in principles of human-centered design, enables humans to initialize and interpret an autonomous agent’s behavior, and produces explanations of the final learned policy in multiple modalities to give the user a depiction of what the autonomous agent has learned.
Sharing Learned Models Between Heterogeneous Robots: An Image Driven Interpretation
TLDR: This dissertation proposes the ‘chained learning approach’ to transfer data between robots with different perceptual capabilities, laying a foundation for transfer learning in a heterogeneous robot environment while introducing domain adaptation as a potential research option for grounded language transfer.
Adaptive Grasp Control through Multi-Modal Interactions for Assistive Prosthetic Devices
TLDR: The approach explored here is to develop algorithms that permit a device to adapt its behavior to the preferences of the operator through interactions with the wearer, together with a platform used to evaluate this architecture.

References

Showing 1-10 of 26 references
Learning to Parse Natural Language Commands to a Robot Control System
TLDR: This work discusses the problem of parsing natural language commands to actions and control structures that can be readily implemented in a robot execution system, and learns a parser based on example pairs of English commands and corresponding control language expressions.
Towards Robot Adaptability in New Situations
TLDR: A system that integrates robot task execution with user input and feedback at multiple abstraction levels in order to achieve greater adaptability in new environments, with proposed extensions that leverage crowdsourced input to reduce the need for direct input.
Translating Structured English to Robot Controllers
TLDR: This paper takes the first steps toward building a natural language interface for LTL planning methods with mobile robots as the application domain by building a structured English language which maps directly to a fragment of LTL.
A natural language planner interface for mobile manipulators
TLDR: This paper presents a new model called the Distributed Correspondence Graph (DCG) to infer the most likely set of planning constraints from natural language instructions, with comparative experiments demonstrating improved efficiency in natural language understanding without loss of accuracy.
Grounding the Interaction: Anchoring Situated Discourse in Everyday Human-Robot Interaction
This paper presents how extraction, representation and use of symbolic knowledge from real-world perception and human-robot verbal and non-verbal interaction can actually enable a grounded and shared…
A model for verifiable grounding and execution of complex natural language instructions
TLDR: The Verifiable Distributed Correspondence Graph (V-DCG) model is introduced, which enables the validation of natural language instructions by using Linear Temporal Logic specifications together with physical world groundings.
Understanding Natural Language Commands for Robotic Navigation and Mobile Manipulation
TLDR: This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments, which dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command's hierarchical and compositional semantic structure.
Sorry Dave, I'm Afraid I Can't Do That: Explaining Unachievable Robot Tasks Using Natural Language
TLDR: This work describes an integrated system that combines the power of formal methods with the accessibility of natural language, providing correct-by-construction controllers for high-level specifications, and easy-to-understand feedback to the user on those that cannot be achieved.
Asking for Help Using Inverse Semantics
TLDR: This work demonstrates an approach for enabling a robot to recover from failures by communicating its need for specific help to a human partner using natural language, and presents a novel inverse semantics algorithm for generating effective help requests.
Training Personal Robots Using Natural Language Instruction
TLDR: The authors are designing a practical system that uses natural language to instruct a vision-based robot, so that users can adapt it to their particular needs.
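Several of the references above ("Translating Structured English to Robot Controllers", "Sorry Dave, I'm Afraid I Can't Do That") rest on mapping restricted natural language to fragments of Linear Temporal Logic. The following toy sketch shows the general shape of such a mapping; the three sentence templates and formula forms are invented for illustration and are far simpler than the actual LTLMoP-style structured English grammar.

```python
# Toy structured-English-to-LTL mapping, illustrating the idea behind the
# natural-language-to-LTL references above. The grammar is hypothetical.

import re


def structured_english_to_ltl(sentence):
    """Map a few fixed sentence templates to LTL formula strings."""
    s = sentence.strip().lower()
    m = re.fullmatch(r"always avoid (\w+)", s)
    if m:
        return "G(!%s)" % m.group(1)     # safety: never enter the region
    m = re.fullmatch(r"visit (\w+) infinitely often", s)
    if m:
        return "G(F(%s))" % m.group(1)   # liveness: visit recurrently
    m = re.fullmatch(r"go to (\w+)", s)
    if m:
        return "F(%s)" % m.group(1)      # reachability: eventually there
    raise ValueError("sentence not in the toy grammar: %r" % sentence)


print(structured_english_to_ltl("go to kitchen"))        # → F(kitchen)
print(structured_english_to_ltl("always avoid stairs"))  # → G(!stairs)
```

Once commands are in LTL, a synthesis tool can build a reactive controller from them or, as in the paper above, report which environment assumptions would make an unrealizable specification achievable.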