Error-Aware Interactive Semantic Parsing of OpenStreetMap

@article{Staniek2021ErrorAwareIS,
  title={Error-Aware Interactive Semantic Parsing of OpenStreetMap},
  author={Michael Staniek and Stefan Riezler},
  journal={arXiv preprint arXiv:2106.11739},
  year={2021}
}
In semantic parsing of geographical queries against real-world databases such as OpenStreetMap (OSM), unique correct answers do not necessarily exist. Instead, the truth may lie in the eye of the user, who needs to enter an interactive setup where ambiguities can be resolved and parsing mistakes can be corrected. Our work presents an approach to interactive semantic parsing where explicit error detection is performed, and a clarification question is generated that pinpoints the…
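The described interaction loop can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: all function names, the stub parser output, and the error-detection heuristic are assumptions made for the example.

```python
# Hypothetical sketch of an error-aware interactive parsing loop:
# a parser produces a candidate parse, an error detector flags
# suspect tokens, and a clarification question pinpointing each
# suspect token is posed to the user.

def parse(question):
    # Stand-in for a neural semantic parser (NL -> OSM-style query tokens).
    return ["query", "area", "name=Heidelberg", "nwr", "amenity=cafe"]

def detect_errors(tokens):
    # Stand-in error detector: flags tag tokens as candidates for
    # clarification. A real detector might threshold per-token
    # model confidence instead.
    return [i for i, tok in enumerate(tokens) if "=" in tok]

def clarification_question(tokens, idx):
    # Pinpoint the suspect token in a question to the user.
    return f"Did you mean '{tokens[idx]}' here?"

def interactive_parse(question, user_responds):
    tokens = parse(question)
    for idx in detect_errors(tokens):
        q = clarification_question(tokens, idx)
        tokens[idx] = user_responds(q, tokens[idx])
    return tokens

# Simulated user who corrects one mistaken tag:
def user(question, token):
    return "amenity=bar" if token == "amenity=cafe" else token

result = interactive_parse("bars in Heidelberg", user)
```

After the loop, `result` contains the user-corrected parse, e.g. `amenity=cafe` replaced by `amenity=bar`.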


References

Showing 1-10 of 14 references.
SParC: Cross-Domain Semantic Parsing in Context
An in-depth analysis of SParC shows that it introduces new challenges compared to existing datasets and, due to its cross-domain nature and unseen databases at test time, requires generalization to unseen domains.
Response-Based and Counterfactual Learning for Sequence-to-Sequence Tasks in NLP
This thesis investigates two approaches that alleviate the need for labeled data by instead leveraging feedback on model outputs, and shows that counterfactual learning can be applied to tasks with large output spaces and that deterministic logs can successfully be used for sequence-to-sequence tasks in NLP.
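Counterfactual learning from deterministically logged feedback can be sketched in a few lines. This is an illustrative sketch under stated assumptions: with a deterministic logging policy each logged output has propensity one, so the estimator reduces to reweighting logged rewards by the current policy's probability of the logged output; all names here are made up for the example.

```python
# Minimal sketch of a counterfactual (off-policy) objective over
# deterministic logs: average logged reward, reweighted by the
# probability the current model assigns to each logged output.

def counterfactual_objective(log, policy_prob):
    """Estimate the current policy's reward from logged data.

    log: list of (input, output, reward) triples from deployment.
    policy_prob: function (input, output) -> probability under the
                 current model.
    """
    return sum(r * policy_prob(x, y) for x, y, r in log) / len(log)

# Toy example: the current policy puts probability 0.8 on the logged
# output for input "a" and 0.2 on the logged output for input "b".
probs = {("a", "y1"): 0.8, ("b", "y2"): 0.2}
log = [("a", "y1", 1.0), ("b", "y2", 0.0)]
value = counterfactual_objective(log, lambda x, y: probs[(x, y)])
# value == 0.4
```

Maximizing this objective pushes probability mass toward logged outputs that received high reward, without requiring gold-standard parses.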
Speak to your Parser: Interactive Text-to-SQL with Natural Language Feedback
This paper focuses on natural-language-to-SQL systems and constructs SPLASH, a dataset of utterances, incorrect SQL interpretations, and the corresponding natural language feedback, showing that incorporating such a rich form of feedback can significantly improve overall semantic parsing accuracy while retaining the flexibility of natural language interaction.
Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task
This work defines a new complex, cross-domain semantic parsing and text-to-SQL task in which different complicated SQL queries and databases appear in the train and test sets; experiments with various state-of-the-art models show that Spider presents a strong challenge for future research.
Interactive Semantic Parsing for If-Then Recipes via Hierarchical Reinforcement Learning
This paper develops a hierarchical reinforcement learning (HRL) based agent that significantly improves parsing performance with minimal questions to the user, and substantially outperforms non-interactive semantic parsers and rule-based agents.
Learning to Learn Semantic Parsers from Natural Language Supervision
A learning algorithm for training semantic parsers from supervision (feedback) expressed in natural language: the parser is learned from users' corrections such as "no, what I really meant was before his job, not after", while simultaneously learning to parse this natural language feedback in order to leverage it as a form of supervision.
CoSQL: A Conversational Text-to-SQL Challenge Towards Cross-Domain Natural Language Interfaces to Databases
CoSQL is presented, a corpus for building cross-domain, general-purpose database (DB) querying dialogue systems, covering SQL-grounded dialogue state tracking, response generation from query results, and user dialogue act prediction; a set of strong baselines is evaluated.
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
This work is the first to show that semantic parsers can be significantly improved by counterfactual learning from logged human feedback data; it also devises an easy-to-use interface for collecting human feedback on semantic parses.
Sequence to Sequence Learning with Neural Networks
This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions about sequence structure, and finds that reversing the order of the words in all source sentences markedly improved the LSTM's performance, because doing so introduced many short-term dependencies between the source and target sentences that made the optimization problem easier.
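The source-reversal trick mentioned above is simple enough to show directly; the sketch below operates on token lists and is illustrative only.

```python
# Source-reversal preprocessing for seq2seq training: feeding the
# source sequence to the encoder in reverse order places the first
# source words close to the first target words the decoder must emit,
# introducing the short-term dependencies that ease optimization.

def reverse_source(src_tokens, tgt_tokens):
    """Return the (reversed source, unchanged target) training pair."""
    return list(reversed(src_tokens)), list(tgt_tokens)

src, tgt = reverse_source(["a", "b", "c"], ["x", "y", "z"])
# src == ["c", "b", "a"]: token "a", aligned with the first target
# token "x", now sits adjacent to where decoding begins.
```

Only the source side is reversed; the target sequence is left in its natural order.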
Learning from Chunk-based Feedback in Neural Machine Translation
It is demonstrated how the common machine translation problem of domain mismatch between training and deployment can be reduced based solely on chunk-level user feedback.