MZET: Memory Augmented Zero-Shot Fine-grained Named Entity Typing

@article{Zhang2020MZETMA,
  title={MZET: Memory Augmented Zero-Shot Fine-grained Named Entity Typing},
  author={T. Zhang and Congying Xia and Chun-Ta Lu and Philip S. Yu},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.01267}
}
Named entity typing (NET) is the classification task of assigning semantic types to an entity mention in context. However, as the size and granularity of entity-type inventories grow, little prior research addresses newly emerged entity types. In this paper, we propose MZET, a novel memory-augmented FNET (Fine-grained NET) model, to tackle unseen types in a zero-shot manner. MZET incorporates character-level, word-level, and context-level information to learn the entity…
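The zero-shot idea behind models like MZET can be illustrated with a minimal sketch: represent each mention by combining character-, word-, and context-level features, embed type labels into the same space, and score mentions against labels by similarity, so that unseen types can be ranked without training examples. All names and vectors below are illustrative toys, not the paper's actual architecture.

```python
# Minimal sketch of zero-shot entity typing via a shared embedding space.
# The label vectors and feature combination here are stand-ins, not MZET itself.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy "pre-trained" label embeddings: seen and unseen types share one space,
# so a type never observed in training can still be scored.
type_embeddings = {
    "/person": rng.normal(size=DIM),
    "/person/artist": rng.normal(size=DIM),
    "/location": rng.normal(size=DIM),  # pretend this type was unseen in training
}

def mention_repr(char_vec, word_vec, context_vec):
    """Combine character-, word-, and context-level features (here: averaging)."""
    return (char_vec + word_vec + context_vec) / 3.0

def rank_types(mention_vec, type_embeddings):
    """Score each type by cosine similarity; return types ranked best-first."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {t: cos(mention_vec, v) for t, v in type_embeddings.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

m = mention_repr(rng.normal(size=DIM), rng.normal(size=DIM), rng.normal(size=DIM))
ranking = rank_types(m, type_embeddings)
print(ranking[0])  # highest-scoring type for this mention
```

The memory component of the actual model additionally bridges seen and unseen types; this sketch only shows the shared-space scoring that makes zero-shot prediction possible.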

Citations

Low-shot Learning in Natural Language Processing

Diverse low-shot learning approaches, including capsule-based networks, data-augmentation methods, and memory networks, are discussed for different NLP tasks, for example, intent detection and named entity typing.

Example-Based Named Entity Recognition

A train-free few-shot learning approach, inspired by question answering, identifies entity spans in a new and unseen domain and performs significantly better than the current state of the art, especially when using a low number of support examples.

Sequential Recommendation via Stochastic Self-Attention

A novel Wasserstein Self-Attention module is devised to characterize item-item position-wise relationships in sequences, which effectively incorporates uncertainty into model training.

An Empirical Study on Multiple Information Sources for Zero-Shot Fine-Grained Entity Typing

This paper empirically studies three kinds of auxiliary information (context consistency, type hierarchy, and background knowledge of types) and proposes a multi-source fusion model (MSF) targeting these sources.

Type-enriched Hierarchical Contrastive Strategy for Fine-Grained Entity Typing

This work proposes a type-enriched hierarchical contrastive strategy for FET that can directly model the differences between hierarchical types and improve the ability to distinguish multi-grained similar types.

Few-Shot Fine-Grained Entity Typing with Automatic Label Interpretation and Instance Generation

A novel framework for few-shot Fine-grained Entity Typing consists of an entity type label interpretation module, which automatically learns to relate type labels to the vocabulary by jointly leveraging few-shot instances and the label hierarchy, and a type-based contextualized instance generator, which produces new instances based on given instances to enlarge the training set for better generalization.

A Meta-framework for Spatiotemporal Quantity Extraction from Text

This paper formulates the NLP problem of spatiotemporal quantity extraction, and proposes the first meta-framework for solving it, which contains a formalism that decomposes the problem into several information extraction tasks, a shareable crowdsourcing pipeline, and transformer-based baseline models.

Probing Pre-trained Auto-regressive Language Models for Named Entity Typing and Recognition

A new methodology to probe auto-regressive LMs for NET and NER generalization is proposed; it draws inspiration from human linguistic behavior by resorting to meta-learning, introduces a novel procedure to assess the model's memorization of NEs, and reports the memorization's impact on the results.

Semantic Context Path Labeling for Semantic Exploration of User Reviews

A novel method to perform semantic exploration of user reviews is proposed by defining a new Information Extraction task called Semantic Context Path (SCP) labeling, which simultaneously assigns types and semantic roles to entity mentions.

PDALN: Progressive Domain Adaptation over a Pre-trained Model for Low-Resource Cross-Domain Named Entity Recognition

PDALN can effectively adapt high-resource domains to low-resource target domains, even when they differ in terminology and writing style, and comparison with other baselines indicates the state-of-the-art performance of PDALN.

References

Showing 1-10 of 35 references

Label Embedding for Zero-shot Fine-grained Named Entity Typing

A label embedding method that incorporates prototypical and hierarchical information to learn pre-trained label embeddings and a zero-shot learning framework that can predict both seen and previously unseen entity types are presented.

Description-Based Zero-shot Fine-Grained Entity Typing

This work proposes a zero-shot entity typing approach that utilizes the type description available from Wikipedia to build a distributed semantic representation of the types to recognize novel types without additional training.
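The description-based approach above can be sketched in a few lines: represent each type by its natural-language description, embed the mention's context the same way, and pick the closest description, so novel types need only a description rather than training data. The type descriptions and the bag-of-words embedding below are toy stand-ins, not the paper's method.

```python
# Hedged sketch of description-based zero-shot typing: score a mention's
# context against each type's textual description and keep the best match.
from collections import Counter
import math

# Hypothetical Wikipedia-style descriptions for illustration only.
type_descriptions = {
    "/person/athlete": "a person who competes in sports",
    "/organization/company": "a business organization that sells goods or services",
    "/location/city": "a large human settlement",
}

def bow(text):
    """Bag-of-words vector as a word-count dictionary."""
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[w] * b[w] for w in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def type_of(mention_context):
    """Return the type whose description is most similar to the context."""
    ctx = bow(mention_context)
    return max(type_descriptions, key=lambda t: cosine(ctx, bow(type_descriptions[t])))

print(type_of("the person won three sports medals"))  # → /person/athlete
```

A real system would use distributed representations rather than word counts, but the zero-shot mechanism is the same: adding a new type only requires adding its description.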

Fine-Grained Entity Type Classification by Jointly Learning Representations and Label Embeddings

This work proposes a neural network model that jointly learns entity mentions and their context representation to eliminate use of hand crafted features and outperforms previous state-of-the-art methods on two publicly available datasets.

Zero-Shot Open Entity Typing as Type-Compatible Grounding

A zero-shot entity typing approach that requires no annotated data and can flexibly identify newly defined types is proposed; it is shown to be competitive with state-of-the-art supervised NER systems and to outperform them on out-of-training datasets.

AFET: Automatic Fine-Grained Entity Typing by Hierarchical Partial-Label Embedding

This paper proposes a novel embedding method to separately model “clean” and “noisy” mentions, and incorporates the given type hierarchy to induce loss functions.

OTyper: A Neural Architecture for Open Named Entity Typing

This work introduces the task of Open Named Entity Typing (ONET), which is NET when the set of target types is not known in advance, and proposes a neural network architecture for ONET, called OTyper, and evaluates its ability to tag entities with types not seen in training.

Neural Architectures for Fine-grained Entity Type Classification

This work investigates several neural network architectures for fine-grained entity type classification and establishes that the attention mechanism learns to attend over syntactic heads and the phrase containing the mention, both of which are known to be strong hand-crafted features for this task.

FINET: Context-Aware Fine-Grained Named Entity Typing

FINET generates candidate types using a sequence of multiple extractors, ranging from explicitly mentioned types to implicit types, and subsequently selects the most appropriate using ideas from word-sense disambiguation, and supports the most fine-grained type system so far.

Multi-grained Named Entity Recognition

MGNER, a novel framework for Multi-Grained Named Entity Recognition in which multiple entities or entity mentions in a sentence may be non-overlapping or totally nested, detects and recognizes entities at multiple granularities and outperforms current state-of-the-art baselines.

Building a Fine-Grained Entity Typing System Overnight for a New X (X = Language, Domain, Genre)

This paper proposes a novel unsupervised entity typing framework by combining symbolic and distributional semantics, and develops a novel joint hierarchical clustering and linking algorithm to type all mentions using these representations.