Hierarchical Losses and New Resources for Fine-grained Entity Typing and Linking

Shikhar Murty, Pat Verga, Luke Vilnis, Irena Radovanovic and Andrew McCallum. In Annual Meeting of the Association for Computational Linguistics.
Extraction from raw text to a knowledge base of entities and fine-grained types is often cast as prediction into a flat set of entity and type labels, neglecting the rich hierarchies over types and entities contained in curated ontologies. […] In experiments on all three datasets, we show substantial gains from hierarchy-aware training.
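The hierarchy-aware training the abstract alludes to can be illustrated with a minimal sketch. Everything here (the toy ontology, the `alpha` down-weighting of ancestors, and the binary cross-entropy form) is an assumption for illustration, not the paper's exact loss:

```python
# Sketch of a hierarchy-aware multi-label typing loss: a gold type also
# propagates down-weighted positive signal to its ancestors in the
# ontology, instead of treating labels as a flat set.
import math

# Toy ontology, child -> parent (assumed for illustration).
PARENT = {
    "/person/artist": "/person",
    "/person/politician": "/person",
    "/person": None,
    "/organization": None,
}

def ancestors(t):
    """Walk parent pointers up to the root."""
    out, p = [], PARENT.get(t)
    while p is not None:
        out.append(p)
        p = PARENT.get(p)
    return out

def hierarchy_aware_targets(gold, alpha=0.5):
    """Soft targets: 1.0 for gold types, alpha**depth for their ancestors."""
    targets = {}
    for t in gold:
        targets[t] = 1.0
        for depth, a in enumerate(ancestors(t), start=1):
            targets[a] = max(targets.get(a, 0.0), alpha ** depth)
    return targets

def bce_loss(scores, gold, alpha=0.5):
    """Binary cross-entropy against the hierarchy-smoothed targets."""
    targets = hierarchy_aware_targets(gold, alpha)
    loss = 0.0
    for t, s in scores.items():
        p = 1.0 / (1.0 + math.exp(-s))  # sigmoid over the raw score
        y = targets.get(t, 0.0)
        loss -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return loss / len(scores)
```

With `alpha=0.5`, a mention labeled `/person/artist` also gets a 0.5 target on `/person`, so the model is rewarded for predicting the parent type rather than penalized as in flat training.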

Figures and Tables from this paper

Fine-Grained Entity Typing for Domain Independent Entity Linking

This work tackles the problem of building robust entity linking models that generalize effectively and do not rely on labeled entity linking data with a specific entity distribution, and models fine-grained entity properties, which can help disambiguate between even closely related entities.

Multi-stage Knowledge Enhancement for Ultra-fine-grained Entity Typing

  • Yuxiao Yang, Ning Li
  • Computer Science
    2022 5th International Symposium on Autonomous Systems (ISAS)
  • 2022
A multi-stage method that injects knowledge into both pre-trained contextualized word representations and entity typing structure to deal with inconsistency of distantly labeled data from different knowledge sources is proposed.

Deep Entity Linking via Eliminating Semantic Ambiguity With BERT

This paper introduces the language representation model BERT (Bidirectional Encoder Representations from Transformers) and designs a hard negative sample mining strategy to fine-tune it accordingly. It is the first to equip the entity linking task with a powerful pre-trained general language model, deliberately tackling its potential shortcoming of learning literally.

Modeling Fine-Grained Entity Types with Box Embeddings

This work studies the ability of box embeddings, which embed concepts as d-dimensional hyperrectangles, to capture hierarchies of types even when these relationships are not defined explicitly in the ontology.
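The box-embedding idea above can be sketched in a few lines, assuming hard axis-aligned boxes with hand-set corners (real models learn soft, e.g. Gumbel, boxes): each type is a d-dimensional hyperrectangle, and subtype probability is estimated from containment volume.

```python
# Sketch of box embeddings for type hierarchies: a type is an
# axis-aligned box (mins, maxs); P(parent | child) is approximated by
# the fraction of the child's volume contained in the parent.

def intersection(box_a, box_b):
    """Intersect two boxes given as (mins, maxs); may be empty."""
    mins = [max(a, b) for a, b in zip(box_a[0], box_b[0])]
    maxs = [min(a, b) for a, b in zip(box_a[1], box_b[1])]
    return (mins, maxs)

def volume(box):
    v = 1.0
    for lo, hi in zip(box[0], box[1]):
        v *= max(hi - lo, 0.0)  # empty along any axis -> zero volume
    return v

def p_subtype(child, parent):
    """P(parent | child) ~= vol(child ∩ parent) / vol(child)."""
    return volume(intersection(child, parent)) / volume(child)

# Toy 2-d boxes (assumed): /person fully contains /person/artist.
person = ([0.0, 0.0], [1.0, 1.0])
artist = ([0.2, 0.2], [0.6, 0.6])
```

Because `artist` lies entirely inside `person`, `p_subtype(artist, person)` is 1.0 while the reverse is small: containment, not an explicit ontology edge, encodes the hierarchy.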

Description-Based Zero-shot Fine-Grained Entity Typing

This work proposes a zero-shot entity typing approach that utilizes the type description available from Wikipedia to build a distributed semantic representation of the types to recognize novel types without additional training.

LATTE: Latent Type Modeling for Biomedical Entity Linking

LATTE is proposed, a LATent Type Entity Linking model, that improves entity linking by modeling the latent fine-grained type information about mentions and entities and achieves significant performance improvements over several state-of-the-art techniques.

Learning from Knowledge Graphs: Neural Fine-Grained Entity Typing with Copy-Generation Networks

A novel deep neural model called CopyFet is proposed for FET via a copy-generation mechanism, whose copy model can identify the semantic type of a mention with reference to a type-copying vocabulary from a knowledge graph.

Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model

This paper proposes to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM), and constructs an input for the BERT MLM so that it predicts context dependent hypernyms of the mention which can be used as type labels.

Knowledge-Rich Self-Supervised Entity Linking

Entity linking faces significant challenges, such as prolific variations and prevalent ambiguities, especially in high-value domains with myriad entities. Standard classification approaches suffer …

Fine-Grained Entity Typing with Hierarchical Inference

  • Quan Ren
  • Computer Science
    2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC)
  • 2020
This work proposes a neural model that effectively captures entity information from both the context and the mention, combined with hierarchical inference, which infers types layer by layer through the type set.



Universal schema for entity type prediction

A universal schema approach to fine-grained entity type prediction that robustly learns mutual implication among a large union of textual surface patterns by learning latent vector embeddings via probabilistic matrix factorization, thus avoiding the need for hand-labeled data.

Context-Dependent Fine-Grained Entity Type Tagging

This work proposes the task of context-dependent fine type tagging, where the set of acceptable labels for a mention is restricted to only those deducible from the local context (e.g. sentence or document).

Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding

A global objective is formulated for learning the embeddings from text corpora and knowledge bases, which adopts a novel margin-based loss that is robust to noisy labels and faithfully models type correlation derived from knowledge bases.

Entity Linking via Joint Encoding of Types, Descriptions, and Context

This work presents a neural, modular entity linking system that learns a unified dense representation for each entity using multiple sources of information, such as its description, contexts around its mentions, and its fine-grained types.

FINET: Context-Aware Fine-Grained Named Entity Typing

FINET generates candidate types using a sequence of multiple extractors, ranging from explicitly mentioned types to implicit types, and subsequently selects the most appropriate using ideas from word-sense disambiguation, and supports the most fine-grained type system so far.

Fine-Grained Entity Recognition

A fine-grained set of 112 tags is defined, the tagging problem is formulated as multi-class, multi-label classification, an unsupervised method for collecting training data is described, and the FIGER implementation is presented.

Corpus-level Fine-grained Entity Typing

This paper addresses the problem of corpus-level entity typing, i.e., inferring from a large corpus that an entity is a member of a class such as "food" or "artist", and proposes FIGMENT, an embedding-based approach that combines a global model, which scores an entity based on its aggregated contextual information, with a context model that first scores the individual occurrences of an entity and then aggregates the scores.

HYENA: Hierarchical Type Classification for Entity Names

HYENA addresses very fine-grained types organized in a hierarchical taxonomy, with several hundred types at different levels, incorporates gazetteer features, and demonstrates its practical viability.

Generalizing to Unseen Entities and Entity Pairs with Row-less Universal Schema

This paper proposes an approach having no per-row parameters at all, and demonstrates that despite having an order of magnitude fewer parameters than traditional universal schema, this approach can match the accuracy of the traditional model, and can now make predictions about unseen rows with nearly the same accuracy as rows available at training time.

Noise Mitigation for Neural Entity Typing and Relation Extraction

This paper introduces multi-instance multi-label learning algorithms using neural network models, and applies them to fine-grained entity typing for the first time, and shows that probabilistic predictions are more robust than discrete predictions and that joint training of the two tasks performs best.