Publications
Hierarchical Embeddings for Hypernymy Detection and Directionality
This work presents HyperVec, a novel neural model that learns embeddings in a specific order to capture the hypernym–hyponym distributional hierarchy. The resulting unsupervised measure outperforms both state-of-the-art unsupervised measures and embedding models on hypernymy detection and directionality, as well as on predicting graded lexical entailment.
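The directionality part of the task can be illustrated with a minimal sketch (not the published model): assuming a hierarchy-encoding space in which more general terms end up with larger vector magnitudes, the hypernym direction of a pair falls out of a norm comparison. The words and embedding values below are hypothetical.

```python
import numpy as np

# Hypothetical toy embeddings: we assume the learned space assigns the
# more general term ("animal") a larger magnitude than the specific one.
emb = {
    "animal": np.array([0.9, 0.8, 0.7]),
    "dog":    np.array([0.5, 0.4, 0.3]),
}

def hypernym_of(u, v, emb):
    """Predict which of the two words is the hypernym (more general term),
    under the assumption that a larger norm indicates greater generality."""
    return u if np.linalg.norm(emb[u]) > np.linalg.norm(emb[v]) else v
```

On this toy pair, `hypernym_of("dog", "animal", emb)` picks `"animal"` as the more general term.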
Clustering Verbs Semantically According to their Alternation Behaviour
Verbs were clustered semantically on the basis of their alternation behaviour, as characterised by the syntactic subcategorisation frames extracted from maximum-probability parses of a robust statistical parser.
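The underlying representation can be sketched as follows: each verb becomes a distribution over its subcategorisation frames, and verbs with similar alternation behaviour have similar distributions, which is what a clustering algorithm then groups. The verbs, frame inventory, and counts below are hypothetical toy values, not data from the paper.

```python
import numpy as np

# Hypothetical subcategorisation-frame counts per verb
# (columns: e.g. plain NP, NP + PP, sentential complement).
frames = {
    "give":  np.array([2, 9, 1], dtype=float),
    "send":  np.array([3, 8, 1], dtype=float),
    "think": np.array([1, 0, 9], dtype=float),
}

def frame_profile(counts):
    """Normalise frame counts to a probability distribution."""
    return counts / counts.sum()

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pairwise similarity of frame profiles; a clusterer (e.g. k-means)
# would group "give" and "send" together and separate "think".
sims = {(u, v): cosine(frame_profile(frames[u]), frame_profile(frames[v]))
        for u in frames for v in frames if u < v}
```

Here the ditransitive-like verbs end up with a much higher pairwise similarity than either does with the sentential-complement verb.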
Diachronic Usage Relatedness (DURel): A Framework for the Annotation of Lexical Semantic Change
We propose a framework that extends synchronic polysemy annotation to diachronic changes in lexical meaning, counteracting the lack of resources for evaluating computational models of lexical semantic change.
Chasing Hypernyms in Vector Spaces with Entropy
SLQS is a new entropy-based measure for the unsupervised identification of hypernymy and its directionality in Distributional Semantic Models (DSMs).
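The intuition behind SLQS is that a hypernym occurs in more general (higher-entropy) contexts than its hyponyms, so SLQS(x, y) = 1 − E_x/E_y is positive when y is the more general term. The sketch below uses a hypothetical toy co-occurrence matrix and selects top contexts by raw frequency for simplicity (the published measure selects them by statistical association).

```python
import numpy as np

# Hypothetical word-by-context co-occurrence counts:
# rows = words, columns = contexts.
vocab = {"dog": 0, "cat": 1, "animal": 2}
cooc = np.array([
    [10,  0, 5, 4],   # dog
    [ 0, 10, 5, 4],   # cat
    [ 1,  1, 5, 8],   # animal
], dtype=float)

def context_entropy(j):
    """Shannon entropy of the word distribution for context j."""
    p = cooc[:, j] / cooc[:, j].sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def generality(word, n=2):
    """Median entropy of the word's n most frequent contexts.
    (SLQS selects contexts by association, e.g. LMI; raw frequency
    is used here to keep the sketch short.)"""
    row = cooc[vocab[word]]
    top = np.argsort(row)[::-1][:n]
    return float(np.median([context_entropy(j) for j in top]))

def slqs(x, y):
    """SLQS(x, y) = 1 - E_x / E_y: a positive value suggests y is the
    more general term, i.e. the hypernym of x."""
    return 1 - generality(x) / generality(y)
```

On this toy matrix, `slqs("dog", "animal")` is positive (animal is predicted as the hypernym) and `slqs("animal", "dog")` is negative, illustrating how the measure encodes both detection and directionality.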
Experiments on the Automatic Induction of German Semantic Verb Classes
This article presents clustering experiments on German verbs: a statistical grammar model for German serves as the source for a distributional verb description at the lexical syntax-semantics interface.
A Wind of Change: Detecting and Evaluating Lexical Semantic Change across Times and Domains
This work addresses the superficiality and lack of comparison in the evaluation of models of diachronic lexical change by bringing together and extending benchmark models on a common, state-of-the-art evaluation task.
Multilingual Reliability and “Semantic” Structure of Continuous Word Spaces
The results show that (i) morphological complexity causes a drop in accuracy, and (ii) continuous representations lack the ability to solve analogies of paradigmatic relations.
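Analogy evaluations of this kind are typically run with the vector-offset (3CosAdd) method: to solve a : b :: c : ?, return the word closest to b − a + c. A minimal sketch with hypothetical toy vectors:

```python
import numpy as np

def solve_analogy(a, b, c, vecs):
    """3CosAdd: return the word whose vector is most similar
    (by cosine) to vecs[b] - vecs[a] + vecs[c], excluding a, b, c."""
    target = vecs[b] - vecs[a] + vecs[c]
    best, best_sim = None, -np.inf
    for w, v in vecs.items():
        if w in (a, b, c):
            continue
        sim = float(target @ v / (np.linalg.norm(target) * np.linalg.norm(v)))
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# Hypothetical toy vectors chosen so the offset works out.
vecs = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([0.0, 1.0]),
    "king":  np.array([1.0, 1.0]),
    "queen": np.array([0.1, 1.1]),
    "apple": np.array([1.0, -1.0]),
}
```

With these vectors, `solve_analogy("man", "woman", "king", vecs)` recovers `"queen"`; the paper's point is that this offset trick works far less reliably for paradigmatic relations than such toy cases suggest.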
Integrating Distributional Lexical Contrast into Word Embeddings for Antonym-Synonym Distinction
A novel vector representation is proposed that integrates lexical contrast into distributional vectors and strengthens the most salient features for determining degrees of word similarity; the lexical contrast information is also integrated into the objective function of a skip-gram model.
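The effect of the lexical-contrast signal can be sketched with a simplified post-hoc variant: iteratively pull synonym pairs together and push antonym pairs apart. The published model instead folds this signal into the skip-gram training objective; the word pairs and vectors below are hypothetical.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrast_adjust(vecs, synonyms, antonyms, lr=0.05, epochs=20):
    """Nudge vectors so synonyms become more similar and antonyms less so.
    A simplified post-hoc sketch, not the skip-gram-integrated objective."""
    v = {w: np.asarray(x, dtype=float).copy() for w, x in vecs.items()}
    for _ in range(epochs):
        for a, b in synonyms:
            delta = lr * (v[b] - v[a])
            v[a] += delta          # move a toward b
            v[b] -= delta          # and b toward a
        for a, b in antonyms:
            delta = lr * (v[b] - v[a])
            v[a] -= delta          # move a away from b
            v[b] += delta          # and b away from a
    return v

# Hypothetical toy vectors.
orig = {"good": np.array([1.0, 0.1]), "great": np.array([0.8, 0.3]),
        "big":  np.array([0.5, 0.5]), "small": np.array([0.6, 0.4])}
adj = contrast_adjust(orig, synonyms=[("good", "great")],
                      antonyms=[("big", "small")])
```

After adjustment, the synonym pair is more similar and the antonym pair less similar than before, which is exactly the distinction plain distributional vectors struggle to make.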
Distinguishing Antonyms and Synonyms in a Pattern-based Neural Network
A novel neural network model AntSynNET is presented that exploits lexico-syntactic patterns from syntactic parse trees and successfully integrates the distance between the related words along the syntactic path as a new pattern feature.
Acquiring Lexical Knowledge for Anaphora Resolution
This work discusses research aimed at improving the performance of anaphora resolution systems by acquiring the commonsense knowledge required to resolve the more complex cases of anaphora, such as bridging references.