Assessing the Impact of Syntactic and Semantic Structures for Answer Passages Reranking

@article{Tymoshenko2015AssessingTI,
  title={Assessing the Impact of Syntactic and Semantic Structures for Answer Passages Reranking},
  author={K. Tymoshenko and Alessandro Moschitti},
  journal={Proceedings of the 24th ACM International Conference on Information and Knowledge Management},
  year={2015}
}
  • K. Tymoshenko, Alessandro Moschitti
  • Published 17 October 2015
  • Computer Science
  • Proceedings of the 24th ACM International Conference on Information and Knowledge Management
In this paper, we extensively study the use of syntactic and semantic structures obtained with shallow and deeper syntactic parsers in the answer passage reranking task. We propose several dependency-based structures enriched with Linked Open Data (LOD) knowledge for representing pairs of questions and answer passages. We use such tree structures in learning-to-rank (L2R) algorithms based on tree kernels. The latter can represent questions and passages in a tree fragment space, where each…
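To make the approach described in the abstract more concrete, the sketch below is a minimal, self-contained illustration, not the authors' implementation (which relies on shallow/dependency parsers, LOD enrichment, and an SVM-based L2R framework): it implements a Collins-and-Duffy-style subset-tree kernel over toy constituency trees and a naive pair kernel over (question, passage) tree pairs. All names here (Node, tree_kernel, pair_kernel) and the toy trees are assumptions made only for illustration.

# Minimal sketch (illustrative only, not the paper's implementation):
# a Collins & Duffy subset-tree kernel over toy parse trees, plus a naive
# pair kernel for (question, answer-passage) tree pairs.

from dataclasses import dataclass


@dataclass(frozen=True)
class Node:
    label: str
    children: tuple = ()

    @property
    def production(self):
        # A node's production: its label plus the labels of its children.
        return (self.label,) + tuple(c.label for c in self.children)


def _internal_nodes(tree):
    # Collect all nodes that have children (leaves carry the words).
    out, stack = [], [tree]
    while stack:
        node = stack.pop()
        if node.children:
            out.append(node)
            stack.extend(node.children)
    return out


def _delta(n1, n2, decay, memo):
    # Decayed count of common tree fragments rooted at n1 and n2.
    key = (id(n1), id(n2))
    if key not in memo:
        if n1.production != n2.production:
            memo[key] = 0.0
        elif all(not c.children for c in n1.children):
            memo[key] = decay                      # matching pre-terminals
        else:
            score = decay
            for c1, c2 in zip(n1.children, n2.children):
                score *= 1.0 + _delta(c1, c2, decay, memo)
            memo[key] = score
    return memo[key]


def tree_kernel(t1, t2, decay=0.4):
    # Subset-tree kernel: sum fragment matches over all internal node pairs.
    memo = {}
    return sum(_delta(a, b, decay, memo)
               for a in _internal_nodes(t1)
               for b in _internal_nodes(t2))


def pair_kernel(pair1, pair2, decay=0.4):
    # Naive kernel over (question tree, passage tree) pairs: sum of the
    # question-question and passage-passage tree kernels.
    (q1, a1), (q2, a2) = pair1, pair2
    return tree_kernel(q1, q2, decay) + tree_kernel(a1, a2, decay)


def T(label, *children):
    return Node(label, tuple(children))


# Toy example: a question and two candidate answer passages.
q  = T("ROOT", T("WP", T("what")), T("VBZ", T("is")), T("NN", T("lod")))
a1 = T("ROOT", T("NN", T("lod")), T("VBZ", T("is")), T("NN", T("data")))
a2 = T("ROOT", T("DT", T("the")), T("NN", T("cat")), T("VBZ", T("sat")))

# The pair sharing more tree fragments gets the larger kernel value.
print(pair_kernel((q, a1), (q, a1)), pair_kernel((q, a1), (q, a2)))

In the paper itself, such kernels are computed over shallow or dependency structures enriched with LOD types and plugged into a preference-based L2R algorithm; this toy version only conveys the tree-fragment matching idea.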
Citations

Shallow and Deep Syntactic/Semantic Structures for Passage Reranking in Question-Answering Systems
TLDR
This article extensively studies the use of syntactic and semantic structures obtained with shallow and full syntactic parsers for answer passage reranking and derives the following important findings: relational syntactic structures are essential to achieve superior results, and models trained with dependency trees can outperform those trained with shallow trees.
RelTextRank: An Open Source Framework for Building Relational Syntactic-Semantic Text Pair Representations
We present a highly flexible UIMA-based pipeline for developing structural kernel-based systems for relational learning from text, i.e., for generating training and test data for ranking, classifying…
Learning to Re-Rank Questions in Community Question Answering Using Advanced Features
TLDR
This work compares learning-to-rank (L2R) algorithms against a strong baseline given by the Google rank (GR), and shows that improving GR requires effective BoW features and tree kernels (TKs), along with an accurate model of GR features in the L2R algorithm used.
Syntactic based approach for grammar question retrieval
Addressing Community Question Answering in English and Arabic
TLDR
The results show that the shallow structures used in the TKs are robust to noisy data and that improving GR is possible, but effective BoW features and TKs, along with an accurate model of GR features in the L2R algorithm used, are required.
A Syntactic Parse-Key Tree-Based Approach for English Grammar Question Retrieval
TLDR
A syntactic parse-key-tree-based approach for English grammar question retrieval that can effectively find relevant grammar questions with a similar grammatical focus and outperforms classical text and sentence retrieval methods in accuracy.
Learning Semantic Relatedness in Community Question Answering Using Neural Models
TLDR
A neural model with stacked bidirectional LSTMs and an MLP is presented that generates vector representations of question-question or question-answer pairs and computes their semantic similarity scores, which are then employed to rank and predict relevance.
Sequential Attention with Keyword Mask Model for Community-based Question Answering
TLDR
A Sequential Attention with Keyword Mask model (SAKM) is proposed for CQA to imitate human reading behavior; it extracts meaningful keywords from the sentences and enhances diverse mutual information.
Answer selection in community question answering exploiting knowledge graph and context information
TLDR
This paper proposes a novel answer selection method for CQA that uses the knowledge embedded in knowledge graphs (KGs) and learns a latent-variable model of question and answer representations, jointly optimizing generative and discriminative objectives.

References

Showing 1-10 of 48 references
Encoding Semantic Resources in Syntactic Structures for Passage Reranking
TLDR
Experiments with the SVM rank algorithm on the TREC Question Answering (QA) corpus show that the added relational information substantially improves over the state of the art, e.g., about 15.4% relative improvement in P@1.
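As a side note on the metric reported above, here is a minimal, hypothetical sketch of how P@1 and relative improvement are typically computed; the function name and all numbers are made-up illustrations, not values or code from the cited work.

# Illustrative only: Precision-at-1 and relative improvement.

def precision_at_1(ranked, relevant):
    # `ranked` maps a question id to its ranked list of passage ids;
    # `relevant` maps a question id to the set of relevant passage ids.
    hits = sum(1 for q, passages in ranked.items()
               if passages and passages[0] in relevant[q])
    return hits / len(ranked)

# Relative improvement of a reranker over a baseline (toy numbers):
baseline_p1, reranker_p1 = 0.50, 0.577
print(f"{(reranker_p1 - baseline_p1) / baseline_p1:.1%}")   # ~15.4%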
Building structures from classifiers for passage reranking
This paper shows that learning-to-rank models can be applied to automatically learn complex patterns, such as relational semantic structures occurring in questions and their answer passages. This is…
Passage Reranking for Question Answering Using Syntactic Structures and Answer Types
TLDR
This work achieves a better ranking by aligning the syntactic structures based on the question's answer type and the named entities detected in the candidate passage, outperforming the baselines over all ranks in terms of the MRR measure.
Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering
TLDR
This work captures the alignment with a novel probabilistic model of tree-edit operations on dependency parse trees that treats alignments as structured latent variables and offers a principled framework for incorporating complex linguistic features.
Answer Extraction as Sequence Tagging with Tree Edit Distance
TLDR
A linear-chain Conditional Random Field is constructed over pairs of questions and their possible answer sentences, learning the association between questions and answer types and casting answer extraction as an answer-sequence tagging problem for the first time.
Rank learning for factoid question answering with linguistic and semantic constraints
TLDR
This work presents a general rank-learning framework for passage ranking within Question Answering (QA) systems using linguistic and semantic features, and shows that constraints based on semantic role labeling features are particularly effective for passage retrieval.
Question Answering Using Enhanced Lexical Semantic Models
TLDR
This work focuses on improving performance using models of lexical semantic resources and shows that QA systems can be consistently and significantly improved with rich lexical semantic information, regardless of the choice of learning algorithm.
Automatic Feature Engineering for Answer Selection and Extraction
TLDR
The results show that the models greatly improve on the state of the art, e.g., up to 22% relative improvement in F1 for answer extraction, while using no additional resources and no manual feature engineering.
Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions
TLDR
A logistic regression model is presented that uses 33 syntactic features of edit sequences to classify sentence pairs, leading to competitive performance in recognizing textual entailment, paraphrase identification, and answer selection for question answering.
A Long Short-Term Memory Model for Answer Sentence Selection in Question Answering
TLDR
The proposed method uses a stacked bidirectional Long Short-Term Memory network to sequentially read words from question and answer sentences and then output their relevance scores, outperforming previous work that requires syntactic features and external knowledge resources.