Improved Representation Learning for Question Answer Matching

@inproceedings{Tan2016ImprovedRL,
  title={Improved Representation Learning for Question Answer Matching},
  author={Ming Tan and C{\'i}cero Nogueira dos Santos and Bing Xiang and Bowen Zhou},
  booktitle={ACL},
  year={2016}
}
Passage-level question answer matching is a challenging task since it requires effective representations that capture the complex semantic relations between questions and answers.

Key Method: To match passage answers to questions accommodating their complex semantic relations, unlike most previous work that utilizes a single deep learning structure, we develop hybrid models that process the text using both convolutional and recurrent neural networks, combining the merits on extracting linguistic…
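The hybrid idea above (a convolution and pooling step applied on top of recurrent hidden states, then matching by similarity) can be sketched minimally in NumPy. This is an illustrative toy, not the paper's implementation: a plain tanh RNN stands in for the LSTM, all dimensions and weights are hypothetical and untrained, so the similarity scores carry no semantic meaning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not the paper's settings).
VOCAB, EMB, HID, FILTERS, WIDTH = 50, 8, 6, 4, 3

E  = rng.normal(0, 0.1, (VOCAB, EMB))            # embedding table
Wx = rng.normal(0, 0.1, (HID, EMB))              # RNN input weights
Wh = rng.normal(0, 0.1, (HID, HID))              # RNN recurrent weights
Wc = rng.normal(0, 0.1, (FILTERS, WIDTH * HID))  # conv filters over hidden states

def encode(tokens):
    """Recurrent pass (tanh RNN standing in for the LSTM), then a
    convolution + max-pooling over the hidden-state sequence."""
    h = np.zeros(HID)
    states = []
    for t in tokens:
        h = np.tanh(Wx @ E[t] + Wh @ h)
        states.append(h)
    H = np.stack(states)                          # (seq_len, HID)
    # Slide a width-WIDTH window over hidden states and apply the filters.
    windows = [H[i:i + WIDTH].ravel() for i in range(len(H) - WIDTH + 1)]
    C = np.tanh(np.stack(windows) @ Wc.T)         # (n_windows, FILTERS)
    return C.max(axis=0)                          # max-pool -> fixed-size vector

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Match a question against two candidate answers (toy token ids).
q  = encode([1, 5, 9, 2])
a1 = encode([1, 5, 9, 2, 7])
a2 = encode([30, 31, 32, 33])
print(cosine(q, a1), cosine(q, a2))
```

In the paper's setup such a score would be trained with a ranking loss so that correct answers score higher than incorrect ones; here the weights are random, so only the shapes and data flow are meaningful.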

Citations

Supervised attention for answer selection in community question answering

TLDR
This paper proposes integrating supervised attention into matchLSTM and leverages lexical-semantic information from external resources to guide the learning of attention weights for question-answer pairs, which learns more meaningful attention and performs better than the basic model.

Text-based question answering from information retrieval and deep neural network perspectives: A survey

TLDR
This paper provides a comprehensive overview of different models proposed for the QA task, from both a traditional IR perspective and a more recent deep neural network perspective.

Exploiting Background Knowledge in Compact Answer Generation for Why-Questions

TLDR
A novel neural summarizer that combines a recurrent neural network-based encoder-decoder model with stacked convolutional neural networks, designed to effectively exploit background knowledge, in this case a set of causal relations extracted from a large web data archive.

Improved Compare-Aggregate Model for Chinese Document-Based Question Answering

TLDR
A Compare-Aggregate architecture that handles word-level comparison and aggregation, with a k-top attention mechanism proposed to filter out irrelevant words and reduce the noisy information admitted by the traditional attention mechanism.

Intelligent Question Answering in Restricted Domains Using Deep Learning and Question Pair Matching

TLDR
The proposed model fuses a convolutional neural network and a bidirectional long short-term memory network to perform efficient semantic analysis on question pairs and extract more effective text features, and introduces question pair matching to implement Chinese intelligent question answering in a restricted domain.

Answer Selection in Community Question Answering by Normalizing Support Answers

TLDR
A novel way to leverage the contributions of support answers is proposed: match scores are first normalized by the correlations between the question and the corresponding similar questions, so that the negative effect of noisy answers is reduced.

Interactive knowledge-enhanced attention network for answer selection

TLDR
An interactive knowledge-enhanced attention network for answer selection (IKAAS) is proposed, which interactively learns the sentence representations of question-answer pairs by simultaneously considering external knowledge from knowledge graphs and the textual information of QA pairs.

A Compare-Aggregate Model with Embedding Selector for Answer Selection

  • Shengxie Zheng, Juan Yang
  • Computer Science
    2018 IEEE 9th International Conference on Software Engineering and Service Science (ICSESS)
  • 2018
TLDR
This paper proposes a novel Compare-Aggregate framework with an embedding selector for the answer selection task, employing two types of attention mechanism in one model and adding a selector layer to choose the best input for the aggregation layer.

A Stacked BiLSTM Neural Network Based on Coattention Mechanism for Question Answering

TLDR
A stacked Bidirectional Long Short-Term Memory (BiLSTM) neural network based on the coattention mechanism is proposed to extract the interaction between questions and answers, combining cosine similarity and Euclidean distance to score question and answer sentences.

Coattention based BiLSTM for answer selection

  • Lei Zhang, Longxuan Ma
  • Computer Science
    2017 IEEE International Conference on Information and Automation (ICIA)
  • 2017
TLDR
This work proposes a coattention based bidirectional LSTM to capture the interactions between the question and the answers, which generates different question representations according to the answers.
...

References

Showing 1-10 of 37 references

Deep Learning for Answer Sentence Selection

TLDR
This work proposes a novel approach to the answer sentence selection task by means of distributed representations, learning to match questions with answers by considering their semantic encoding.

A Long Short-Term Memory Model for Answer Sentence Selection in Question Answering

TLDR
The proposed method uses a stacked bidirectional Long Short-Term Memory network to sequentially read words from question and answer sentences and then output their relevance scores, outperforming previous work that requires syntactic features and external knowledge resources.

Question Answering Using Enhanced Lexical Semantic Models

TLDR
This work focuses on improving performance using enhanced lexical semantic models and shows that these systems can be consistently and significantly improved with rich lexical semantic information, regardless of the choice of learning algorithm.

Answer Extraction as Sequence Tagging with Tree Edit Distance

TLDR
A linear-chain Conditional Random Field is constructed over pairs of questions and their candidate answer sentences, learning the association between questions and answer types and casting answer extraction as an answer sequence tagging problem for the first time.

Convolutional Neural Tensor Network Architecture for Community-Based Question Answering

TLDR
This paper proposes a convolutional neural tensor network architecture to encode the sentences in semantic space and model their interactions with a tensor layer, which outperforms the other methods on two matching tasks.

ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs

TLDR
This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences and proposes three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart.

Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering

TLDR
This work captures the alignment by using a novel probabilistic model that models tree-edit operations on dependency parse trees and treats alignments as structured latent variables, and offers a principled framework for incorporating complex linguistic features.

Applying deep learning to answer selection: A study and an open task

TLDR
A general deep learning framework is applied to the non-factoid question answering task; it demonstrates superior performance compared to the baseline methods, and various technologies give further improvements.

Automatic Feature Engineering for Answer Selection and Extraction

TLDR
The results show that the models greatly improve on the state of the art, e.g., up to 22% on F1 (relative improvement) for answer extraction, while using no additional resources and no manual feature engineering.

What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA

TLDR
A probabilistic quasi-synchronous grammar, inspired by one proposed for machine translation, and parameterized by mixtures of a robust nonlexical syntax/alignment model with a(n optional) lexical-semantics-driven log-linear model is proposed.