CrossBERT: A Triplet Neural Architecture for Ranking Entity Properties

@inproceedings{Manotumruksa2020CrossBERTAT,
  title={CrossBERT: A Triplet Neural Architecture for Ranking Entity Properties},
  author={Jarana Manotumruksa and Jeffrey Dalton and Edgar Meij and Emine Yilmaz},
  booktitle={Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval},
  year={2020}
}
Task-based Virtual Personal Assistants (VPAs) such as the Google Assistant, Alexa, and Siri are increasingly being adopted for a wide variety of tasks. These tasks are grounded in real-world entities and actions (e.g., booking a hotel, organising a conference, or requesting funds). In this work we tackle the task of automatically constructing actionable knowledge graphs in response to a user query in order to support a wider variety of increasingly complex assistant tasks. We frame this as an entity…
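The truncated abstract frames the problem as ranking an entity's properties against a query with a triplet neural architecture. As a rough illustration only, here is a minimal PyTorch sketch of triplet-style training over (query, relevant property, irrelevant property) pairs with a BERT cross-encoder; the pairing scheme, scoring head, and margin loss below are assumptions, not CrossBERT's published design.

```python
# Hypothetical sketch: triplet-style training for ranking entity
# properties with a BERT cross-encoder. The input pairing, scoring
# head, and margin are illustrative assumptions, not CrossBERT's
# exact architecture.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class PropertyScorer(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.score = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Score the query-property pair from the [CLS] representation.
        return self.score(out.last_hidden_state[:, 0]).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = PropertyScorer()
loss_fn = nn.MarginRankingLoss(margin=1.0)

def triplet_step(query, pos_property, neg_property):
    # Encode each (query, property) pair as one cross-encoded sequence.
    pos = tokenizer(query, pos_property, return_tensors="pt", truncation=True)
    neg = tokenizer(query, neg_property, return_tensors="pt", truncation=True)
    s_pos = model(pos["input_ids"], pos["attention_mask"])
    s_neg = model(neg["input_ids"], neg["attention_mask"])
    # Push the relevant property above the irrelevant one by a margin.
    return loss_fn(s_pos, s_neg, torch.ones_like(s_pos))

loss = triplet_step("book a hotel in Glasgow", "checkInDate", "directorOfPhotography")
loss.backward()
```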

Citations

BERT-ER: Query-specific BERT Entity Representations for Entity Ranking

TLDR
This work presents BERT Entity Representations (BERT-ER), which are query-specific vector representations of entities obtained from text describing how an entity is relevant to a query, and shows that an entity ranking system using BERT-ER can increase precision at the top of the ranking by promoting relevant entities.

An Entity-Oriented Approach for Answering Topical Information Needs

TLDR
This dissertation studies the interplay between text and entities by addressing three related prediction problems: identifying knowledge base entities that are relevant to the query, understanding an entity's meaning in the context of the query, and identifying text passages that elaborate the connection between the query and an entity.

Predicting Guiding Entities for Entity Aspect Linking

TLDR
This approach uses a supervised neural entity ranking system to predict relevant entities for the context; these entities then guide the system to the correct aspect of an entity in the given context.

References

Showing 10 of 20 references

CUIS Team for NTCIR-13 AKG Task

TLDR
This paper describes the CUIS team's approach to the Actionable Knowledge Graph (AKG) task at NTCIR-13, which employs a supervised learning technique to improve performance by minimizing a simple position-sensitive loss function on additional manually annotated training data from the dry-run topics.

CEDR: Contextualized Embeddings for Document Ranking

TLDR
This work investigates how two pretrained contextualized language models (ELMo and BERT) can be utilized for ad-hoc document ranking and proposes a joint approach that incorporates BERT's classification vector into existing neural models, showing that it outperforms state-of-the-art ad-hoc ranking baselines.
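As a loose illustration of the joint idea (not CEDR's actual implementation), the sketch below concatenates BERT's classification ([CLS]) vector with the match features of an existing neural ranker before a single scoring layer; the dimensions and combination layer are assumptions.

```python
import torch
import torch.nn as nn

class JointRanker(nn.Module):
    """Illustrative CEDR-style combination: BERT's classification
    vector is concatenated with an existing ranker's match features
    before the final scoring layer (details here are assumptions)."""
    def __init__(self, bert_dim=768, ranker_feature_dim=11):
        super().__init__()
        self.combine = nn.Linear(bert_dim + ranker_feature_dim, 1)

    def forward(self, cls_vector, ranker_features):
        # cls_vector: [batch, bert_dim] from the pretrained LM
        # ranker_features: [batch, k], e.g. kernel-pooled match signals
        return self.combine(torch.cat([cls_vector, ranker_features], dim=-1))

score = JointRanker()(torch.rand(2, 768), torch.rand(2, 11))
```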

End-to-End Neural Ad-hoc Ranking with Kernel Pooling

TLDR
K-NRM combines a translation matrix that models word-level similarities via word embeddings, a kernel-pooling technique that uses kernels to extract multi-level soft-match features, and a learning-to-rank layer that combines those features into the final ranking score.
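Kernel pooling itself is compact enough to sketch. Below is a minimal PyTorch version of RBF kernel pooling over a query-document cosine-similarity ("translation") matrix in the spirit of K-NRM; the kernel centres and width are illustrative choices, not the paper's tuned values.

```python
import torch

def kernel_pooling(sim_matrix, mus, sigma=0.1):
    """Soft-match features from a query-document similarity matrix,
    in the spirit of K-NRM (kernel placement here is an assumption).
    sim_matrix: [n_query_terms, n_doc_terms] of cosine similarities."""
    # RBF kernel response of every cell to each kernel centre mu_k.
    sim = sim_matrix.unsqueeze(-1)                         # [nq, nd, 1]
    k = torch.exp(-((sim - mus) ** 2) / (2 * sigma ** 2))  # [nq, nd, K]
    # Pool over document terms, then log-sum over query terms.
    per_query = k.sum(dim=1).clamp(min=1e-10)              # [nq, K]
    return per_query.log().sum(dim=0)                      # [K]

# Kernel centres spread over the cosine range; mu = 1.0 acts as an
# exact-match kernel.
mus = torch.tensor([-0.9, -0.5, -0.1, 0.3, 0.7, 1.0])
features = kernel_pooling(torch.rand(5, 40) * 2 - 1, mus)
```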

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

TLDR
Sentence-BERT (SBERT) is presented: a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine similarity.
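A minimal usage sketch with the sentence-transformers library that accompanies the paper; the checkpoint name and example sentences below are illustrative, not the paper's original configuration.

```python
from sentence_transformers import SentenceTransformer, util

# A common public SBERT-style checkpoint (illustrative choice).
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode([
    "book a hotel in Glasgow",
    "reserve accommodation in Glasgow",
    "rent a car at the airport",
])
# Semantically close sentences get high cosine similarity.
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)  # the hotel pair should score well above the car-rental pair
```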

Query Understanding via Entity Attribute Identification

TLDR
This study introduces the task of entity attribute identification and proposes two methods to address it: a model based on Markov Random Fields, and a learning-to-rank model.

Mining, Ranking and Recommending Entity Aspects

TLDR
This paper proposes an approach that mines, clusters, and ranks entity aspects from query logs, together with two ranking methods based on semantic relatedness and aspect transitions within user sessions, and finds that a combined approach gives the best performance.

Query-Task Mapping

TLDR
This work addresses the natural next step, mapping a currently submitted query to an appropriate task in an already task-split log, and shows that a fast and accurate inverted-index-based method forms a strong baseline.

Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset

TLDR
This work introduces the Schema-Guided Dialogue (SGD) dataset, containing over 16k multi-domain conversations spanning 16 domains, and presents a schema-guided paradigm for task-oriented dialogue in which predictions are made over a dynamic set of intents and slots provided as input.
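To make the schema-guided paradigm concrete, here is an invented service schema in the SGD style: the dialogue model conditions on natural-language descriptions of intents and slots like these, so new services can be supported without retraining. Field names loosely follow the dataset's format; all values are made up.

```python
# Invented example of a schema-guided service definition; the model
# conditions on the natural-language descriptions of intents and slots.
hotel_schema = {
    "service_name": "Hotels_1",
    "description": "Find and reserve hotel rooms",
    "slots": [
        {"name": "city", "description": "City of the hotel"},
        {"name": "check_in_date", "description": "Start date of the stay"},
    ],
    "intents": [
        {
            "name": "ReserveHotel",
            "description": "Book rooms at a hotel",
            "required_slots": ["city", "check_in_date"],
        }
    ],
}
```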

Co-PACRR: A Context-Aware Neural IR Model for Ad-hoc Retrieval

TLDR
This work highlights three potential shortcomings caused by not considering context information and proposes three neural ingredients to address them: a disambiguation component, cascade k-max pooling, and a shuffling combination layer, which together yield Co-PACRR, a novel context-aware neural IR model.
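Cascade k-max pooling is easy to sketch: instead of pooling the k strongest match signals over the whole document, pool within successive document prefixes so that positional information survives. The cut points and k below are illustrative, not Co-PACRR's exact settings.

```python
import torch

def cascade_kmax_pooling(signals, k=2, cuts=(0.25, 0.5, 0.75, 1.0)):
    """Cascade k-max pooling in the spirit of Co-PACRR (cut points
    here are illustrative): keep the k strongest match signals within
    each successive document prefix so position information survives.
    signals: [n_doc_positions] match strengths for one query term."""
    n = signals.size(0)
    pooled = []
    for c in cuts:
        prefix = signals[: max(1, int(n * c))]
        pooled.append(torch.topk(prefix, min(k, prefix.size(0))).values)
    return torch.cat(pooled)

features = cascade_kmax_pooling(torch.rand(100))  # [len(cuts) * k]
```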

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

TLDR
A new language representation model, BERT, is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers; the pretrained model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
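A minimal fine-tuning sketch using the Hugging Face Transformers library: the pretrained encoder plus one newly initialized output layer, as the paper describes; the checkpoint, label count, and toy input are illustrative.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # adds one fresh output layer
)
batch = tokenizer(["is this property relevant?"], return_tensors="pt")
outputs = model(**batch, labels=torch.tensor([1]))
outputs.loss.backward()  # fine-tunes both the new head and the encoder
```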