Learning Antonyms with Paraphrases and a Morphology-Aware Neural Network

@inproceedings{Rajana2017LearningAW,
  title={Learning Antonyms with Paraphrases and a Morphology-Aware Neural Network},
  author={Sneha Rajana and Chris Callison-Burch and Marianna Apidianaki and Vered Shwartz},
  booktitle={*SEM},
  year={2017}
}
Recognizing and distinguishing antonyms from other types of semantic relations is an essential part of language understanding systems. In this paper, we present a novel method for deriving antonym pairs from paraphrase pairs containing negation markers. We further propose a neural network model, AntNET, that integrates morphological features indicative of antonymy into a path-based relation detection algorithm. We demonstrate that our model outperforms state-of-the-art models in distinguishing antonyms from other semantic relations and is capable of efficiently handling multi-word expressions.
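
To make the path-based idea concrete, the following is a minimal sketch, assuming PyTorch, of how such a model could be wired together. The feature set (lemma, POS tag, dependency label, plus a binary morphology flag marking negating prefixes such as "un-" or "dis-"), the dimensions, and all names are illustrative assumptions, not the authors' released implementation.

# Illustrative sketch only: embed each dependency-path edge from its lemma,
# POS, dependency label, and morphology flag; encode the path with an LSTM;
# average the path encodings for a word pair; and classify the relation.
import torch
import torch.nn as nn

class PathRelationClassifier(nn.Module):
    def __init__(self, n_lemmas, n_pos, n_dep, n_morph, n_relations,
                 emb_dim=50, hidden_dim=100):
        super().__init__()
        self.lemma_emb = nn.Embedding(n_lemmas, emb_dim)
        self.pos_emb = nn.Embedding(n_pos, emb_dim)
        self.dep_emb = nn.Embedding(n_dep, emb_dim)
        self.morph_emb = nn.Embedding(n_morph, emb_dim)  # e.g. {none, negating-prefix}
        self.path_lstm = nn.LSTM(4 * emb_dim, hidden_dim, batch_first=True)
        # the word pair's lemma embeddings are concatenated with the averaged path encoding
        self.classifier = nn.Linear(hidden_dim + 2 * emb_dim, n_relations)

    def encode_path(self, lemmas, pos, dep, morph):
        # each argument: LongTensor of shape (path_length,)
        edges = torch.cat([self.lemma_emb(lemmas), self.pos_emb(pos),
                           self.dep_emb(dep), self.morph_emb(morph)], dim=-1)
        _, (h, _) = self.path_lstm(edges.unsqueeze(0))
        return h.squeeze(0).squeeze(0)  # (hidden_dim,)

    def forward(self, paths, x_lemma, y_lemma):
        # paths: list of (lemmas, pos, dep, morph) tuples for one word pair
        path_vec = torch.stack([self.encode_path(*p) for p in paths]).mean(dim=0)
        pair = torch.cat([self.lemma_emb(x_lemma), self.lemma_emb(y_lemma)], dim=-1)
        return self.classifier(torch.cat([path_vec, pair], dim=-1))

A training loop would feed the dependency paths connecting each candidate word pair in a corpus and minimize cross-entropy against the gold relation labels.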

Citations

Antonym-Synonym Classification Based on New Sub-space Embeddings

TLDR
Experimental results show that the proposed model outperforms existing research on antonym-synonym distinction in both speed and performance.

Training on Lexical Resources

We propose using lexical resources (thesaurus, VAD) to fine-tune pretrained deep nets such as BERT and ERNIE. At inference time, these nets can then be used to distinguish synonyms from antonyms.
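
As a rough illustration of this recipe (assuming the Hugging Face transformers library; the data, label scheme, and hyperparameters below are placeholders, not taken from the cited paper), fine-tuning a pretrained encoder on thesaurus-derived word pairs and then using it as a pair classifier could look like:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 0 = synonym, 1 = antonym (assumed labels)

# toy thesaurus-derived examples; a real run would use pairs mined from a lexical resource
pairs = [("hot", "cold", 1), ("big", "large", 0)]
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for w1, w2, label in pairs:
    enc = tokenizer(w1, w2, return_tensors="pt")      # encode the two words as a sentence pair
    out = model(**enc, labels=torch.tensor([label]))  # cross-entropy loss over the two classes
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()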

A Mixture-of-Experts Model for Antonym-Synonym Discrimination

TLDR
This paper proposes two underlying hypotheses and employs the mixture-of-experts framework, which works on the basis of a divide-and-conquer strategy, as a solution to discriminating between antonyms and synonyms.

A study of semantic projection from single word terms to multi-word terms in the environment domain

TLDR
The process of constructing a list of semantically linked multi-word terms (MWTs) related to the environmental field through the extraction of semantic variants is described, and it is found that contexts play an essential role in defining the relations between MWTs.

A Survey of the Usages of Deep Learning for Natural Language Processing

TLDR
An introduction to the field and a brief overview of deep learning architectures and methods are provided, along with a discussion of the current state of the art and recommendations for future research in the field.

Using Structured Representation and Data: A Hybrid Model for Negation and Sentiment in Customer Service Conversations

TLDR
This work explores the role of “negation” in customer service interactions, particularly as applied to sentiment analysis, and proposes an antonym-dictionary-based method for handling negation, applied to a combined CNN-LSTM model for sentiment analysis.

References

Showing 1-10 of 41 references

Distinguishing Antonyms and Synonyms in a Pattern-based Neural Network

TLDR
A novel neural network model, AntSynNET, is presented that exploits lexico-syntactic patterns from syntactic parse trees and successfully integrates the distance between the related words along the syntactic path as a new pattern feature.

Uncovering Distributional Differences between Synonyms and Antonyms in a Word Space Model

TLDR
It is demonstrated that using suitable features, differences in the contexts of synonymous and antonymous German adjective pairs can be identified with a simple word space model.

Word Embedding-based Antonym Detection using Thesauri and Distributional Information

TLDR
This paper proposes a novel approach to train word embeddings to capture antonyms by utilizing supervised synonym and antonym information from thesauri, as well as distributional information from large-scale unlabelled text data.

Computing Word-Pair Antonymy

TLDR
A new automatic and empirical measure of antonymy that combines corpus statistics with the structure of a published thesaurus is presented, obtaining a precision of over 80%.

Combining Word Patterns and Discourse Markers for Paradigmatic Relation Classification

TLDR
It is demonstrated that statistics over discourse relations, collected via explicit discourse markers as proxies, can be utilized as salient indicators for paradigmatic relations in multiple languages, outperforming patterns in terms of recall and F1-score.

Adding Semantics to Data-Driven Paraphrasing

TLDR
This work automatically assigns semantic entailment relations to entries in PPDB using features derived from past work on discovering inference rules from text and semantic taxonomy induction, and demonstrates that this model assigns these relations with high accuracy.

Integrating Distributional Lexical Contrast into Word Embeddings for Antonym-Synonym Distinction

TLDR
A novel vector representation is proposed that integrates lexical contrast into distributional vectors, strengthens the most salient features for determining degrees of word similarity, and is integrated into the objective function of a skip-gram model.
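
As a rough illustration (not the exact formulation of the cited paper), one way to add such a lexical-contrast term to the skip-gram negative-sampling objective is

J = \sum_{(w,c)} \Big[ \log\sigma(\vec{w}\cdot\vec{c}) + \sum_{c' \in N(w,c)} \log\sigma(-\vec{w}\cdot\vec{c'}) \Big] + \lambda \sum_{w} \Big( \frac{1}{|S(w)|} \sum_{s \in S(w)} \cos(\vec{w},\vec{s}) - \frac{1}{|A(w)|} \sum_{a \in A(w)} \cos(\vec{w},\vec{a}) \Big)

where N(w,c) are negative samples, S(w) and A(w) are the synonyms and antonyms of w taken from a thesaurus, and \lambda balances corpus co-occurrence against lexical contrast.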

Learning Morphology with Morfette

TLDR
Morfette is a modular, data-driven, probabilistic system which learns to perform joint morphological tagging and lemmatization from morphologically annotated corpora with high accuracy, requiring no language-specific feature engineering or additional resources.

Extracting Paraphrases from a Parallel Corpus

TLDR
This work presents an unsupervised learning algorithm for identifying paraphrases from a corpus of multiple English translations of the same source text, which yields phrasal and single-word lexical paraphrases as well as syntactic paraphrases.

Semantic Taxonomy Induction from Heterogenous Evidence

TLDR
This work proposes a novel algorithm for inducing semantic taxonomies that flexibly incorporates evidence from multiple classifiers over heterogeneous relationships to optimize the entire structure of the taxonomy, using knowledge of a word's coordinate terms to help in determining its hypernyms, and vice versa.