Corpus ID: 229345009

Semantic Answer Type Prediction using BERT IAI at the ISWC SMART Task 2020

@article{Setty2020SemanticAT,
  title={Semantic Answer Type Prediction using BERT IAI at the ISWC SMART Task 2020},
  author={Vinay Setty and Krisztian Balog},
  journal={ArXiv},
  year={2020},
  volume={abs/2109.06714}
}
This paper summarizes our participation in the SMART Task of the ISWC 2020 Challenge. A particular question we are interested in answering is how well neural methods, and specifically transformer models, such as BERT, perform on the answer type prediction task compared to traditional approaches. Our main finding is that coarse-grained answer types can be identified effectively with standard text classification methods, with over 95% accuracy, and BERT can bring only marginal improvements. For…
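The abstract's finding that coarse-grained answer types are easy to predict can be illustrated with a minimal rule-based baseline. This is a toy sketch, not the paper's method; the category names follow the SMART task's coarse classes (boolean, literal, resource), and the keyword rules are illustrative assumptions:

```python
def coarse_answer_type(question: str) -> str:
    """Toy heuristic for coarse answer type prediction (illustrative only)."""
    q = question.lower().strip()
    words = q.split()
    first = words[0] if words else ""
    # Yes/no questions typically open with an auxiliary verb.
    if first in {"is", "are", "was", "were", "do", "does", "did", "can", "could", "will"}:
        return "boolean"
    # Count, date, and measure questions usually expect a literal value.
    if q.startswith(("how many", "how much", "when", "how long", "how old")):
        return "literal"
    # Everything else is assumed to ask for an entity (resource).
    return "resource"

examples = {
    "Is Berlin the capital of Germany?": "boolean",
    "How many moons does Jupiter have?": "literal",
    "Who wrote War and Peace?": "resource",
}
for question, expected in examples.items():
    assert coarse_answer_type(question) == expected
```

A real baseline of the kind the paper compares against would replace these hand-written rules with a supervised text classifier (e.g. TF-IDF features plus a linear model) trained on labeled questions, with BERT fine-tuning as the neural alternative.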

Citations

Hierarchical Expected Answer Type Classification for Question Answering
This paper presents a Web user interface and a RESTful API for hierarchical EAT classification over DBpedia that enables end-users to get EAT predictions for 104 languages, see the confidence of the prediction, and leave feedback.
Open Domain Question Answering over Knowledge Graphs Using Keyword Search, Answer Type Prediction, SPARQL and Pre-trained Neural Models
This paper describes the role of QA in that context, and details and evaluates a pipeline for QA that involves a general-purpose entity search service over RDF, answer type prediction, entity enrichment through SPARQL, and pre-trained neural models.

References

Multi-Task Learning for Conversational Question Answering over a Large-Scale Knowledge Base
This work proposes an innovative multi-task learning framework in which a pointer-equipped semantic parsing model is designed to resolve coreference in conversations and naturally enables joint learning with a novel type-aware entity detection model.
Hierarchical target type identification for entity-oriented queries
This work introduces the task of automatically annotating queries with target types from an ontology, argues that it is best viewed as a ranking problem, and proposes multiple evaluation metrics.
Design Patterns for Fusion-Based Object Retrieval
This work presents two design patterns, i.e., general reusable retrieval strategies, which are able to encompass most existing approaches from the past, and demonstrates the generality of these patterns by applying them to three different object retrieval tasks: expert finding, blog distillation, and vertical ranking.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
Target Type Identification for Entity-Bearing Queries
This work addresses the problem of automatically detecting the target types of a query with respect to a type taxonomy and proposes a supervised learning approach with a rich variety of features that outperforms existing methods by a remarkable margin.
XLNet: Generalized Autoregressive Pretraining for Language Understanding
XLNet is proposed, a generalized autoregressive pretraining method that enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and overcomes the limitations of BERT thanks to its autoregressive formulation.
RoBERTa: A Robustly Optimized BERT Pretraining Approach
It is found that BERT was significantly undertrained and, with an optimized pretraining procedure, can match or exceed the performance of every model published after it; the best model achieves state-of-the-art results on GLUE, RACE, and SQuAD.
Building Watson: An Overview of the DeepQA Project
The results strongly suggest that DeepQA is an effective and extensible architecture that may be used as a foundation for combining, deploying, evaluating, and advancing a wide range of algorithmic techniques to rapidly advance the field of QA.
The TREC question answering track
The Text REtrieval Conference (TREC) question answering track is an effort to bring the benefits of large-scale evaluation to bear on a question answering (QA) task. The track has run twice so far,...
SeMantic AnsweR Type prediction task (SMART) at ISWC 2020 Semantic Web Challenge
The SeMantic AnsweR Type prediction task (SMART) was part of the ISWC 2020 Semantic Web Challenge; answer type prediction can play a key role in knowledge base question answering systems, providing insights that help generate correct queries or rank answer candidates.