Probabilistic Models for Learning a Semantic Parser Lexicon

@inproceedings{Krishnamurthy2016ProbabilisticMF,
  title={Probabilistic Models for Learning a Semantic Parser Lexicon},
  author={Jayant Krishnamurthy},
  booktitle={NAACL},
  year={2016}
}
We introduce several probabilistic models for learning the lexicon of a semantic parser. Lexicon learning is the first step of training a semantic parser for a new application domain and the quality of the learned lexicon significantly affects both the accuracy and efficiency of the final semantic parser. Existing work on lexicon learning has focused on heuristic methods that lack convergence guarantees and require significant human input in the form of lexicon templates or annotated logical… 
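
The abstract is truncated here, but the task it describes can be made concrete with a toy example. The sketch below is not the paper's actual models; it is a minimal EM alignment learner over invented data, where each sentence is paired with the predicate set of its logical form. EM estimates P(predicate | word), and a lexicon is read off as each word's best predicate. All data, names, and thresholds are illustrative.

from collections import defaultdict

# Toy data: each sentence paired with the set of predicates in its
# (unaligned) logical form. Invented for illustration only.
data = [
    ("major city in california", {"city", "major", "loc:california"}),
    ("major city in texas", {"city", "major", "loc:texas"}),
    ("rivers in texas", {"river", "loc:texas"}),
]

# P(predicate | word), initialized uniformly over co-occurring predicates.
prob = defaultdict(lambda: defaultdict(lambda: 1.0))

for _ in range(25):
    counts = defaultdict(lambda: defaultdict(float))
    # E-step: softly align each word token to the sentence's predicates,
    # in proportion to the current alignment probabilities.
    for sentence, preds in data:
        for word in sentence.split():
            z = sum(prob[word][p] for p in preds)
            for p in preds:
                counts[word][p] += prob[word][p] / z
    # M-step: renormalize expected counts into new probabilities.
    for word, cs in counts.items():
        total = sum(cs.values())
        prob[word] = defaultdict(float, {p: c / total for p, c in cs.items()})

# Read off a lexicon: each word's highest-probability predicate.
for word in sorted(prob):
    pred, p = max(prob[word].items(), key=lambda kv: kv[1])
    print(f"{word:12s} -> {pred:16s} P = {p:.2f}")

Even on this tiny corpus, a word that consistently co-occurs with one predicate ("texas" with loc:texas) ends up with a sharply peaked distribution, while one-off words stay ambiguous; that data sparsity is one reason lexicon-learning systems bring in extra structure such as templates over entries.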

Citations

Semi-Supervised Lexicon Learning for Wide-Coverage Semantic Parsing
A graph-based semi-supervised learning framework that makes use of large text corpora and lexical resources; it achieves substantial improvement over a base system that does not utilize the learned lexicon, and obtains competitive results compared to state-of-the-art systems.
Weakly-Supervised Neural Semantic Parsing with a Generative Ranker
This paper introduces a neural parser-ranker system which addresses two challenges, the large search space and the spuriousness of logical forms, and uses a neurally encoded lexicon to inject prior domain knowledge into the model.
Lifecycle of neural semantic parsing
An improved neural semantic parser is presented that produces syntactically valid logical forms by following a transition system and grammar constraints, and is extended to a weakly-supervised setting within a parser-ranker framework.
Building a Neural Semantic Parser from a Domain Ontology
This work crowdsources training data on six domains, covering both single-turn utterances that exhibit rich compositionality and sequential utterances where a complex task is procedurally performed in steps, and develops neural semantic parsers that perform such compositional tasks.
Joint Concept Learning and Semantic Parsing from Natural Language Explanations
A joint model for language interpretation (semantic parsing) and concept learning (classification) that does not require labeling statements with logical forms is presented; it prefers discriminative interpretations of statements, using observable features of the data in context as a weak signal for parsing.
Syntax-mediated semantic parsing
This thesis proposes novel theories for deriving meaning representations from dependencies, and concludes that general-purpose syntax is more useful for semantic parsing than induced task-specific syntax and syntax-agnostic representations.
A Statistical Machine Translation Model with Forest-to-Tree Algorithm for Semantic Parsing
A novel supervised model for parsing natural language sentences into their formal semantic representations that treats sentence-to-λ-logical-expression conversion within the framework of statistical machine translation, using a forest-to-tree algorithm.
Parsing Natural Language Conversations using Contextual Cues
This work formulates semantic parsing of conversations as a structured prediction task, incorporating structural features that model the ‘flow of discourse’ across sequences of utterances.
Teaching Machines to Classify from Natural Language Interactions
It is demonstrated that language can define rich and expressive features for learning tasks and that machine learning can benefit substantially from this ability; new algorithms for semantic parsing are developed that incorporate pragmatic cues, including conversational history and sensory observation, to improve automatic language interpretation.
Semantic Parsing to Probabilistic Programs for Situated Question Answering
P3 is presented, a novel situated question answering model that treats semantic parses as probabilistic programs which execute nondeterministically and whose possible executions represent environmental uncertainty; this lets the model use background knowledge and global features of the question/environment interpretation while retaining efficient approximate inference.

References

Showing 1–10 of 34 references
Lexical Generalization in CCG Grammar Induction for Semantic Parsing
An algorithm for learning factored CCG lexicons is presented, along with a probabilistic parse-selection model; the factored lexicons include both lexemes, which model word meaning, and templates, which model systematic variation in word usage.
Learning Compact Lexicons for CCG Semantic Parsing
Methods to control lexicon size when learning a Combinatory Categorial Grammar semantic parser are presented, achieving up to 25% error reduction and lexicons that are up to 70% smaller and qualitatively less noisy.
Learning Dependency-Based Compositional Semantics
A new semantic formalism, dependency-based compositional semantics (DCS), is developed, and a log-linear distribution over DCS logical forms is defined; the system obtains accuracies comparable even to state-of-the-art systems that do require annotated logical forms.
Learning for Semantic Parsing with Statistical Machine Translation
It is shown that WASP performs favorably in terms of both accuracy and coverage compared to existing learning methods requiring a similar amount of supervision, and shows better robustness to variations in task complexity and word order.
Driving Semantic Parsing from the World’s Response
This paper develops two novel learning algorithms capable of predicting complex structures that rely only on a binary feedback signal based on the context of an external world, and reformulates the semantic parsing problem to reduce the model's dependency on syntactic patterns, allowing the parser to scale better using less supervision.
Building a Semantic Parser Overnight
A new methodology is introduced that uses a simple grammar to generate logical forms paired with canonical utterances meant to cover the desired set of compositional operators, and uses crowdsourcing to paraphrase these canonical utterances into natural utterances.
Semantic Parsing via Paraphrasing
This paper presents two simple paraphrase models, an association model and a vector space model, and trains them jointly from question-answer pairs, improving state-of-the-art accuracies on two recently released question-answering datasets.
Parsing Time: Learning to Interpret Time Expressions
A probabilistic approach for learning to interpret temporal phrases given only a corpus of utterances and the times they reference is presented, achieving an accuracy of 72% on an adapted TempEval-2 task, comparable to state-of-the-art systems.
Confidence Driven Unsupervised Semantic Parsing
It is argued that a semantic parser can be trained effectively without annotated data, and an unsupervised learning algorithm is introduced that takes a self-training approach driven by confidence estimation.
Using String-Kernels for Learning Semantic Parsers
This work presents a new approach for mapping natural language sentences to their formal meaning representations using string-kernel-based classifiers, which compares favorably to other existing systems and is particularly robust to noise.