Corpus ID: 31196930

CUIS Team for NTCIR-13 AKG Task

@inproceedings{Lin2017CUISTF,
  title={CUIS Team for NTCIR-13 AKG Task},
  author={Xinshi Lin and Wai Lam and Shubham Sharma},
  year={2017}
}
This paper describes our approach to the Actionable Knowledge Graph (AKG) task at NTCIR-13. Our ranking system scores each candidate property by combining its semantic relevance to the action with its document relevance in the related entity's text descriptions via a Dirichlet-smoothing-based language model. We employ a supervised learning technique to improve performance by minimizing a simple position-sensitive loss function on additional training data that we manually annotated from the dry-run topics. Our best…
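
As a rough illustration of the document-relevance component described above, the following Python sketch scores a candidate property against an entity's text description with Dirichlet-smoothed query likelihood. The tokenization, the prior mu, the add-one fallback for unseen terms, and the toy data are illustrative assumptions only; they are not the authors' actual pipeline, which additionally combines semantic relevance to the action and a learned position-sensitive loss.

```python
import math
from collections import Counter

def dirichlet_score(property_terms, doc_terms, collection_counts, collection_len, mu=2000.0):
    """Dirichlet-smoothed query likelihood: treat the candidate property as the
    'query' and the entity's text description as the 'document'."""
    doc_counts = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for w in property_terms:
        # collection model p(w|C), with a small add-one fallback so unseen terms
        # never produce log(0) (an assumption made for this toy example)
        p_wc = (collection_counts.get(w, 0) + 1) / (collection_len + len(collection_counts) + 1)
        # Dirichlet-smoothed document model p(w|d)
        p_wd = (doc_counts.get(w, 0) + mu * p_wc) / (doc_len + mu)
        score += math.log(p_wd)
    return score

# Toy usage: rank two candidate properties for one entity description.
description = "grand hotel offers rooms rates booking and restaurant".split()
collection = description + "museum opening hours ticket prices exhibitions".split()
coll_counts, coll_len = Counter(collection), len(collection)
for prop in (["rates"], ["opening", "hours"]):
    print(prop, dirichlet_score(prop, description, coll_counts, coll_len))
```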

Citations

Overview of NTCIR-13 Actionable Knowledge Graph (AKG) Task
This paper overviews the NTCIR-13 Actionable Knowledge Graph (AKG) task. The task focuses on finding possible actions related to input entities and the relevant properties of such actions.
CrossBERT: A Triplet Neural Architecture for Ranking Entity Properties
This work proposes a new method for property ranking, CrossBERT, which builds on Bidirectional Encoder Representations from Transformers (BERT) and creates a new triplet network structure over cross query-property pairs that is used to rank properties.
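
The following is only a speculative sketch of how a BERT cross-encoder with a triplet-style margin objective over query-property pairs might look; the checkpoint name, [CLS] pooling, linear scoring head, and margin value are assumptions chosen for illustration and are not taken from the CrossBERT paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
scorer = torch.nn.Linear(encoder.config.hidden_size, 1)  # maps [CLS] vector to a relevance score

def pair_score(query, prop):
    # Encode the query and the candidate property jointly (a "cross" pair)
    inputs = tokenizer(query, prop, return_tensors="pt", truncation=True)
    cls_vec = encoder(**inputs).last_hidden_state[:, 0]  # [CLS] representation of the pair
    return scorer(cls_vec).squeeze(-1)

# One triplet: the query with a relevant and a non-relevant property.
query = "book a room at Grand Hotel"
pos, neg = "room rates", "founding year"

# Margin loss pushes the relevant pair's score above the non-relevant one's.
loss = torch.nn.functional.margin_ranking_loss(
    pair_score(query, pos), pair_score(query, neg),
    target=torch.ones(1), margin=1.0)
loss.backward()  # during training, gradients would update the scorer (and BERT)
```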

References

Overview of NTCIR-13 Actionable Knowledge Graph (AKG) Task
This paper overviews the NTCIR-13 Actionable Knowledge Graph (AKG) task. The task focuses on finding possible actions related to input entities and the relevant properties of such actions.
Software Framework for Topic Modelling with Large Corpora
This work describes a natural language processing software framework based on the idea of document streaming, i.e. processing a corpus document after document in a memory-independent fashion. It implements several popular algorithms for topical inference, including Latent Semantic Analysis and Latent Dirichlet Allocation, in a way that makes them completely independent of the training corpus size.
A Study of Smoothing Methods for Language Models Applied to Ad Hoc Information Retrieval
This paper examines the sensitivity of retrieval performance to the smoothing parameters and compares several popular smoothing methods on different test collections.