Characterizing Question Facets for Complex Answer Retrieval

@inproceedings{MacAvaney2018CharacterizingQF,
  title={Characterizing Question Facets for Complex Answer Retrieval},
  author={Sean MacAvaney and Andrew Yates and Arman Cohan and Luca Soldaini and Kai Hui and Nazli Goharian and Ophir Frieder},
  booktitle={The 41st International ACM SIGIR Conference on Research \& Development in Information Retrieval},
  year={2018}
}
  • Published 2 May 2018
  • Computer Science
  • The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval
Complex answer retrieval (CAR) is the process of retrieving answers to questions that have multifaceted or nuanced answers. In this work, we present two novel approaches for CAR based on the observation that question facets can vary in utility: from structural (facets that can apply to many similar topics, such as 'History') to topical (facets that are specific to the question's topic, such as the 'Westward expansion' of the United States). We first explore a way to incorporate facet utility…
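
The truncated abstract names facet utility as the key signal, and the journal extension cited below describes a facet-frequency estimator learned from training data. As a rough illustration of that idea (the function names, outline format, and threshold are assumptions for illustration, not the authors' released implementation), the sketch below counts how often a heading recurs across training article outlines and labels frequent headings such as 'History' as structural and rare ones such as 'Westward expansion' as topical:

# Minimal sketch of a facet-frequency utility signal (illustrative assumptions,
# not the paper's implementation).
from collections import Counter

def facet_frequencies(train_outlines):
    # train_outlines: iterable of (article_title, [headings]) pairs, e.g.
    # ("United States", ["History", "Westward expansion", "Geography"]).
    counts = Counter()
    for _article, headings in train_outlines:
        counts.update(h.lower() for h in headings)
    return counts

def facet_type(heading, counts, structural_threshold=50):
    # Headings that recur across many training outlines (e.g. "History") are
    # treated as structural; rare, topic-specific headings as topical.
    return "structural" if counts[heading.lower()] >= structural_threshold else "topical"

outlines = [
    ("United States", ["History", "Westward expansion", "Geography"]),
    ("France", ["History", "Geography", "French Revolution"]),
]
freqs = facet_frequencies(outlines)
print(facet_type("History", freqs, structural_threshold=2))             # structural
print(facet_type("Westward expansion", freqs, structural_threshold=2))  # topical

In a ranker, such a label (or the raw frequency itself) could then be used to down-weight matches against low-utility structural facets.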

Citations

Overcoming low-utility facets for complex answer retrieval
TLDR: This work proposes two estimators of facet utility: the hierarchical structure of CAR queries and facet frequency information from training data. It also includes entity similarity scores using embeddings trained from a CAR knowledge graph, which capture the context of facets.
UTD HLTRI at TREC 2017: Complex Answer Retrieval Track
TLDR: The evaluation results obtained for CAPAR revealed that the Siamese Attention Network (SANet) for Pairwise Ranking outperformed AdaRank as the L2R approach for CAPAR.
Local and Global Query Expansion for Hierarchical Complex Topics
TLDR: It is found that leveraging the hierarchical topic structure is needed for both local and global expansion methods to be effective, and entity-based expansion methods show significant gains over word-based models alone.
Training Curricula for Open Domain Answer Re-Ranking
TLDR: This work proposes several heuristics to estimate the difficulty of a given training sample and shows that this approach leads to superior performance of two leading neural ranking architectures, namely BERT and ConvKNRM, using both pointwise and pairwise losses.
Neural architecture for question answering using a knowledge graph and web corpus
TLDR: This work presents AQQUCN, a QA system that gracefully combines KG and corpus evidence, and aggregates signals from KGs and large corpora to directly rank KG entities, rather than commit to one semantic interpretation of the query.
TREC Complex Answer Retrieval Overview
TLDR: In TREC Complex Answer Retrieval, combining traditional methods with learning-to-rank can outperform neural methods, even when many training queries are available.
PACRR Gated Expansion for TREC CAR 2018
TLDR: This work submitted two passage retrieval runs to the 2018 TREC Complex Answer Retrieval (CAR) task, using the state-of-the-art technique from TREC CAR 2017 and a novel gated technique for incorporating query expansion terms in a neural ranker.
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
TLDR: This work presents SECTOR, a model that supports machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section; it reports a best score of 71.6% F1 for segmentation and classification of 30 topics from the English city domain.
Preference Relationship-Based CrossCMN Scheme for Answer Ranking in Community QA
TLDR: A novel scheme, named PW-CrossCMN, ranks candidate answers with a pairwise approach over numerous historical documents and incorporates the preference relationship into a deep learning framework; it outperforms several state-of-the-art baselines on the answer ranking task.
Is Language Modeling Enough? Evaluating Effective Embedding Combinations
TLDR: It is observed that adding topic-model-based embeddings helps for most tasks and that differing pre-training tasks encode complementary features; new state-of-the-art results on the MPQA and SUBJ tasks in SentEval are presented.

References

Showing 1-10 of 12 references
UTD HLTRI at TREC 2017: Complex Answer Retrieval Track
TLDR: The evaluation results obtained for CAPAR revealed that the Siamese Attention Network (SANet) for Pairwise Ranking outperformed AdaRank as the L2R approach for CAPAR.
Benchmark for Complex Answer Retrieval
TLDR: The new TREC Complex Answer Retrieval (TREC CAR) track introduces a large-scale dataset where paragraphs are to be retrieved in response to outlines of Wikipedia articles representing complex information needs.
TREC Complex Answer Retrieval Overview
TLDR: In TREC Complex Answer Retrieval, combining traditional methods with learning-to-rank can outperform neural methods, even when many training queries are available.
A Position-Aware Deep Model for Relevance Matching in Information Retrieval
TLDR: This work presents a novel model architecture consisting of convolutional layers to capture term dependencies and proximity among query term occurrences, followed by a recurrent layer to capture relevance over different query terms.
DeepRank: A New Deep Architecture for Relevance Ranking in Information Retrieval
TLDR: Experiments on both the benchmark LETOR dataset and large-scale clickthrough data show that DeepRank can significantly outperform learning-to-rank methods and existing deep learning methods.
A Deep Relevance Matching Model for Ad-hoc Retrieval
TLDR: A novel deep relevance matching model (DRMM) for ad-hoc retrieval that employs a joint deep architecture at the query term level for relevance matching and can significantly outperform some well-known retrieval models as well as state-of-the-art deep matching models.
Learning to Match using Local and Distributed Representations of Text for Web Search
TLDR: This work proposes a novel document ranking model composed of two separate deep neural networks: one that matches the query and the document using a local representation, and another that matches them using learned distributed representations; the two forms of matching are complementary.
A Study of MatchPyramid Models on Ad-hoc Retrieval
TLDR: The MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with traditional retrieval models such as BM25 and language models.
Position-Aware Representations for Relevance Matching in Neural Information Retrieval
TLDR: This work investigates the use of similarity matrices that are able to encode position-specific information, including uni-gram term overlap as well as positional information such as proximity and term dependencies in query-document pairs (a generic similarity-matrix sketch appears after this reference list).
TREC CAR: A Data Set for Complex Answer Retrieval (Version 1.5)
  • 2017
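
Several of the neural models referenced above (e.g., "A Position-Aware Deep Model for Relevance Matching in Information Retrieval" and "Position-Aware Representations for Relevance Matching in Neural Information Retrieval") operate on a query-document term similarity matrix. The generic sketch below shows how such a matrix is commonly built from word embeddings; the embedding source, dimensionality, and out-of-vocabulary handling are assumptions for illustration, not the exact setup of any referenced paper.

# Generic query-document term similarity matrix (illustrative only).
import numpy as np

def similarity_matrix(query_terms, doc_terms, embeddings, dim=300):
    # embeddings: dict mapping a term to a dense vector (e.g. pretrained word2vec);
    # unknown terms fall back to a zero vector, giving similarity 0.
    def vec(term):
        return embeddings.get(term, np.zeros(dim))

    def cos(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0

    # Rows follow query term order and columns follow document term order, so
    # convolutions over the matrix can pick up positional and n-gram match patterns.
    return np.array([[cos(vec(q), vec(d)) for d in doc_terms] for q in query_terms])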