Zero-Shot Open Entity Typing as Type-Compatible Grounding

@inproceedings{Zhou2018ZeroShotOE,
  title={Zero-Shot Open Entity Typing as Type-Compatible Grounding},
  author={Ben Zhou and Daniel Khashabi and Chen-Tse Tsai and Dan Roth},
  booktitle={EMNLP},
  year={2018}
}
The problem of entity typing has been studied predominantly in a supervised learning fashion, mostly with task-specific annotations (for coarse types) and sometimes with distant supervision (for fine types). Given a type taxonomy defined as Boolean functions of FREEBASE "types", we ground a given mention to a set of type-compatible Wikipedia entries and then infer the target mention's types using an inference algorithm that makes use of the types of these entries. We evaluate our system on a…
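The grounding-and-inference idea in the abstract can be sketched as follows. This is a minimal illustrative toy, not the authors' system: the entry-to-type mapping, the taxonomy predicates, and the majority-vote inference rule are all assumptions made for the example.

```python
# Hypothetical sketch of type-compatible grounding + inference.
# ENTRY_TYPES, TAXONOMY, and the voting rule are illustrative assumptions,
# not the paper's actual implementation.
from collections import Counter

# Assumed toy knowledge source: Wikipedia titles mapped to FREEBASE types.
ENTRY_TYPES = {
    "Michael_Jordan_(basketball)": {"/people/person", "/sports/athlete"},
    "Michael_I._Jordan": {"/people/person", "/education/academic"},
    "Jordan_(country)": {"/location/country"},
}

# Target taxonomy: each target type is a Boolean predicate over FREEBASE types.
TAXONOMY = {
    "PERSON": lambda ts: "/people/person" in ts,
    "LOCATION": lambda ts: "/location/country" in ts,
    "ATHLETE": lambda ts: "/sports/athlete" in ts,
}

def ground(candidates, context_types):
    """Keep candidate entries whose FREEBASE types overlap the context's types."""
    return [e for e in candidates if ENTRY_TYPES[e] & context_types]

def infer_types(entries):
    """Vote over grounded entries: accept types a majority of entries satisfy."""
    votes = Counter()
    for e in entries:
        for name, pred in TAXONOMY.items():
            if pred(ENTRY_TYPES[e]):
                votes[name] += 1
    return {t for t, v in votes.items() if v > len(entries) / 2}

# Ambiguous mention "Michael Jordan" in a person context:
candidates = ["Michael_Jordan_(basketball)", "Michael_I._Jordan"]
grounded = ground(candidates, context_types={"/people/person"})
print(infer_types(grounded))  # -> {'PERSON'}
```

Both grounded entries are persons but only one is an athlete, so majority voting commits only to PERSON; this mirrors how aggregating over several type-compatible entries can yield a type without disambiguating to a single entry.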


Improving Fine-grained Entity Typing with Entity Linking
TLDR: This paper uses entity linking to help with the fine-grained entity type classification process and proposes a deep neural model that makes predictions based on both the context and the information obtained from entity linking results.
An Empirical Study on Multiple Information Sources for Zero-Shot Fine-Grained Entity Typing
TLDR: This paper empirically studies three kinds of auxiliary information: context consistency, type hierarchy, and background knowledge of types, and proposes a multi-source fusion model (MSF) targeting these sources.
Fine-grained Entity Typing via Label Reasoning
TLDR: The proposed Label Reasoning Network (LRN) sequentially reasons fine-grained entity labels by discovering and exploiting label-dependency knowledge entailed in the data, using an auto-regressive network for deductive reasoning and a bipartite attribute graph for inductive reasoning between labels.
Zero-shot Label-Aware Event Trigger and Argument Classification
TLDR: A zero-shot event extraction approach that first identifies events with existing tools and then maps them to a given taxonomy of event types in a zero-shot manner, leveraging label representations induced by pre-trained language models and mapping identified events to the target types via representation similarity.
Knowledge Graph Entity Typing with Contrastive Learning
Guozhen Zhu, Shunyu Yao. 2022 The 6th International Conference on Machine Learning and Soft Computing, 2022.
TLDR: A novel model named Contrastive Entity Typing (CET) for KG entity typing is proposed that better learns the mutual interactions among entities with the same entity type and fully utilizes the hierarchical information in entity type trees via two contrastive learning modules.
Confidence-Aware Embedding for Knowledge Graph Entity Typing
TLDR: ConfE is proposed, a novel confidence-aware embedding approach for modeling (entity, entity type) tuples, which takes tuple confidence into consideration to learn better embeddings.
Context-aware Entity Typing in Knowledge Graphs
TLDR: A novel method for knowledge graph entity typing is proposed that utilizes entities' contextual information, together with a novel loss function that alleviates the false-negative problem during training.
Connecting Embeddings for Knowledge Graph Entity Typing
TLDR: A novel approach for KG entity typing is proposed which is trained by jointly utilizing local typing knowledge from existing entity type assertions and global triple knowledge in KGs to infer missing entity type instances.
Improving Entity Linking by Encoding Type Information into Entity Embeddings
TLDR: This work proposes to encode fine-grained type information into entity embeddings in order to link entity mentions in text to the correct entities in the knowledge base.
Knowledge-Augmented Language Model and Its Application to Unsupervised Named-Entity Recognition
TLDR: The KALM work demonstrates that named entities (and possibly other types of world knowledge) can be modeled successfully using predictive learning and training on large corpora of text without any additional information.

References

Showing 1-10 of 49 references.
Label Embedding for Zero-shot Fine-grained Named Entity Typing
TLDR: A label embedding method that incorporates prototypical and hierarchical information to learn pre-trained label embeddings, together with a zero-shot learning framework that can predict both seen and previously unseen entity types.
Building a Fine-Grained Entity Typing System Overnight for a New X (X = Language, Domain, Genre)
TLDR: This paper proposes a novel unsupervised entity typing framework that combines symbolic and distributional semantics, and develops a novel joint hierarchical clustering and linking algorithm to type all mentions using these representations.
Ultra-Fine Entity Typing
TLDR: A model that can predict ultra-fine types, trained with a multitask objective that pools new head-word supervision with prior supervision from entity linking; it achieves state-of-the-art performance on an existing fine-grained entity typing benchmark and sets baselines for newly introduced datasets.
OTyper: A Neural Architecture for Open Named Entity Typing
TLDR: This work introduces the task of Open Named Entity Typing (ONET), which is NET when the set of target types is not known in advance, proposes a neural network architecture for ONET called OTyper, and evaluates its ability to tag entities with types not seen in training.
AFET: Automatic Fine-Grained Entity Typing by Hierarchical Partial-Label Embedding
TLDR: This paper proposes a novel embedding method to separately model “clean” and “noisy” mentions, and incorporates the given type hierarchy to induce loss functions.
Context-Dependent Fine-Grained Entity Type Tagging
TLDR: This work proposes the task of context-dependent fine-type tagging, where the set of acceptable labels for a mention is restricted to only those deducible from the local context (e.g. sentence or document).
Fine-Grained Entity Type Classification by Jointly Learning Representations and Label Embeddings
TLDR: This work proposes a neural network model that jointly learns entity mentions and their context representation, eliminating the use of hand-crafted features, and outperforms previous state-of-the-art methods on two publicly available datasets.
Neural Fine-Grained Entity Type Classification with Hierarchy-Aware Loss
TLDR: This work proposes an end-to-end neural network model that uses a variant of the cross-entropy loss function to handle out-of-context labels, and hierarchical loss normalization to cope with overly specific ones in FETC.
HYENA: Hierarchical Type Classification for Entity Names
TLDR: Very fine-grained types organized in a hierarchical taxonomy, with several hundred types at different levels, are addressed; gazetteer features are included, and the practical viability of HYENA is demonstrated.
Neural Architectures for Fine-grained Entity Type Classification
TLDR: This work investigates several neural network architectures for fine-grained entity type classification and establishes that the attention mechanism learns to attend over syntactic heads and the phrase containing the mention, both of which are known to be strong hand-crafted features for this task.