Corpus ID: 10761261

What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA

@inproceedings{wang-smith-mitamura-jeopardy,
  title={What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA},
  author={Mengqiu Wang and Noah A. Smith and Teruko Mitamura},
  booktitle={Conference on Empirical Methods in Natural Language Processing}
}
This paper presents a syntax-driven approach to question answering, specifically the answer-sentence selection problem for short-answer questions. Rather than using syntactic features to augment existing statistical classifiers (as in previous work), we build on the idea that questions and their (correct) answers relate to each other via loose but predictable syntactic transformations. We propose a probabilistic quasi-synchronous grammar, inspired by one proposed for machine translation (D… 
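To make the core idea concrete, here is a minimal, illustrative sketch (not the paper's actual model) of scoring a candidate answer sentence by how well the question's dependency edges map onto the answer's dependency tree. In the quasi-synchronous spirit, exact parent-child matches are rewarded fully and looser ancestor-descendant configurations partially; the trees, alignment, and weights below are invented for illustration, and trees are assumed to be cycle-free.

```python
def ancestors(tree, node):
    """Yield all ancestors of `node` in a {child: parent} dependency tree.

    Assumes the tree is well-formed (the root is never a key), so the
    walk terminates.
    """
    while node in tree:
        node = tree[node]
        yield node

def qg_score(q_tree, a_tree, alignment):
    """Score how well question edges survive in the answer tree.

    q_tree / a_tree: {child_word: parent_word} dependency edges.
    alignment: mapping from question words to answer words.
    Weights (1.0 / 0.5) are arbitrary illustrative values.
    """
    score = 0.0
    for child, parent in q_tree.items():
        if child not in alignment or parent not in alignment:
            continue  # unaligned words contribute nothing
        a_child, a_parent = alignment[child], alignment[parent]
        if a_tree.get(a_child) == a_parent:            # parent-child preserved
            score += 1.0
        elif a_parent in ancestors(a_tree, a_child):   # looser configuration
            score += 0.5
    return score

# Toy example: "who wrote Hamlet" vs. "Shakespeare wrote Hamlet in 1600"
q_tree = {"who": "wrote", "Hamlet": "wrote"}
a_tree = {"Shakespeare": "wrote", "Hamlet": "wrote", "in": "wrote", "1600": "in"}
alignment = {"wrote": "wrote", "Hamlet": "Hamlet", "who": "Shakespeare"}
print(qg_score(q_tree, a_tree, alignment))  # 2.0
```

Both question edges are preserved as parent-child edges in the answer, so the toy score is 2.0; a sentence whose tree distorted those relations would score lower.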


Feature-Rich Translation by Quasi-Synchronous Lattice Parsing

A novel decoder based on lattice parsing with quasi-synchronous grammar that does not require source and target trees to be isomorphic is presented, using generic approximate dynamic programming techniques to handle "non-local" features.

Deep Learning for Answer Sentence Selection

This work proposes a novel approach to solving the answer sentence selection task by means of distributed representations, learning to match questions with answers by considering their semantic encoding.

Parser Adaptation and Projection with Quasi-Synchronous Grammar Features

We connect two scenarios in structured learning: adapting a parser trained on one corpus to another annotation style, and projecting syntactic annotations from one language to another. We propose

Phrase Dependency Machine Translation with Quasi-Synchronous Tree-to-Tree Features

A tree-to-tree machine translation system inspired by quasi-synchronous grammar is presented that combines phrases and dependency syntax, integrating the advantages of phrase-based and syntax-based translation.

Quasi-Synchronous Phrase Dependency Grammars for Machine Translation

This work presents a quasi-synchronous dependency grammar for machine translation in which the leaves of the tree are phrases rather than words as in previous work, and describes a method of extracting phrase dependencies from parallel text using a target-side dependency parser.

Cross-Lingual GenQA: A Language-Agnostic Generative Question Answering Approach for Open-Domain Question Answering

This paper presents the first generalization of the GENQA approach for the multilingual environment, and presents the GEN-TYDIQA dataset, which extends the TyDiQA evaluation data with natural-sounding, well-formed answers in Arabic, Bengali, English, Japanese, and Russian.

Stacking Dependency Parsers

Experiments on twelve languages show that stacking transition-based and graph-based parsers improves performance over existing state-of-the-art dependency parsers.

Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering

This work captures the alignment by using a novel probabilistic model that models tree-edit operations on dependency parse trees and treats alignments as structured latent variables, and offers a principled framework for incorporating complex linguistic features.
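As a crude, illustrative proxy for tree-edit modeling (not the actual probabilistic model with structured latent variables described above), one can represent each dependency parse as a set of (child, parent) edges and count the insert/delete operations needed to turn one tree into the other. The example trees below are invented; real tree-edit models also handle relabeling and learn costs rather than counting.

```python
def edit_ops(tree_a, tree_b):
    """Return the (deletes, inserts) needed to turn tree_a into tree_b.

    tree_a / tree_b: {child_word: parent_word} dependency edges.
    """
    edges_a = set(tree_a.items())
    edges_b = set(tree_b.items())
    deletes = sorted(edges_a - edges_b)  # edges present only in tree_a
    inserts = sorted(edges_b - edges_a)  # edges present only in tree_b
    return deletes, inserts

# Toy pair: "the cat sat on mat" vs. "a cat sat on mat"
t_hyp = {"cat": "sat", "the": "cat", "mat": "on", "on": "sat"}
t_txt = {"cat": "sat", "a": "cat", "mat": "on", "on": "sat"}
d, i = edit_ops(t_hyp, t_txt)
print(len(d) + len(i))  # 2: delete ("the", "cat"), insert ("a", "cat")
```

A small total edit count suggests the two parses are structurally close, which is the intuition such models exploit for entailment and answer selection.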

A Study on Efficiency, Accuracy and Document Structure for Answer Sentence Selection

This paper argues that, by exploiting the intrinsic structure of the original candidate ranking together with an effective word-relatedness encoder, the proposed model achieves the highest accuracy among cost-efficient models, with two orders of magnitude fewer parameters than the current state of the art.


References

Scalable Inference and Training of Context-Rich Syntactic Translation Models

This paper takes the framework for acquiring multi-level syntactic translation rules of (Galley et al., 2004) from aligned tree-string pairs, and presents two main extensions of their approach: instead of merely computing a single derivation that minimally explains a sentence pair, it computes a large number of derivations that include contextually richer rules, and it accounts for multiple interpretations of unaligned words.

Quasi-Synchronous Grammars: Alignment by Soft Projection of Syntactic Dependencies

This work presents a new model of the translation process: quasi-synchronous grammar (QG), and evaluates the cross-entropy of QGs on unseen text and shows that a better fit to bilingual data is achieved by allowing greater syntactic divergence.

Mapping Dependencies Trees: An Application to Question Answering

An approach for answer selection in a free form question answering task is described, representing both questions and candidate passages using dependency trees, and incorporating semantic information such as named entities in this representation.

A Noisy-Channel Approach to Question Answering

This work introduces a probabilistic noisy-channel model for question answering and shows how it can be exploited in the context of an end-to-end QA system, and shows that the model is flexible enough to accommodate within one mathematical framework many QA-specific resources and techniques.

Learning to recognize features of valid textual entailments

This paper advocates a new architecture for textual inference in which finding a good alignment is separated from evaluating entailment, and proposes a pipelined approach where alignment is followed by a classification step, in which features represent high-level characteristics of the entailment problem.

Dependency Treelet Translation: Syntactically Informed Phrasal SMT

An efficient decoder is described and it is shown that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of phrasal SMT with the linguistic generality available in a parser.

Selectively Using Relations to Improve Precision in Question Answering

By indexing syntactic relations that can be reliably extracted from corpus text and matching questions with documents at the relation level, it is demonstrated that syntactic analysis enables a question answering system to successfully handle these phenomena, thereby improving precision.

Exploring Syntactic Relation Patterns for Question Answering

A pattern extraction method is proposed to extract the various relations between proper answers and different types of question words from syntactic trees; it allows more tolerant matching between two patterns and helps alleviate the data sparseness problem.

Recognizing Paraphrases and Textual Entailment Using Inversion Transduction Grammars

Experimental results on the MSR Paraphrase Corpus show that, even in the absence of any thesaurus to accommodate lexical variation between the paraphrases, an uninterpolated average precision of at least 76% is obtainable from the Bracketing ITG's structure matching bias alone.

Monolingual Machine Translation for Paraphrase Generation

Human evaluation shows that this SMT system outperforms baseline paraphrase generation techniques and, in a departure from previous work, offers better coverage and scalability than the current best-of-breed paraphrasing approaches.