Publications
Embeddings for Word Sense Disambiguation: An Evaluation Study
TLDR
We propose different methods through which word embeddings can be leveraged in a state-of-the-art supervised WSD system architecture, and perform a deep analysis of how different parameters affect performance.
SensEmbed: Learning Sense Embeddings for Word and Relational Similarity
TLDR
We propose a multifaceted approach that transforms word embeddings to the sense level and leverages knowledge from a large semantic network for effective semantic similarity measurement, reporting state-of-the-art performance on multiple datasets.
Embedding Words and Senses Together via Joint Knowledge-Enhanced Training
TLDR
We propose SW2V (Senses and Words to Vectors), a neural model which learns vector representations for words and senses in a joint training phase by exploiting both text corpora and knowledge from semantic networks.
LSTMEmbed: Learning Word and Sense Representations from a Large Semantically Annotated Corpus with Long Short-Term Memories
TLDR
We explore the capabilities of a bidirectional LSTM model to learn representations of word senses from semantically annotated corpora.
Show Us the Way: Learning to Manage Dialog from Demonstrations
TLDR
We present a novel dialog system framework with a special focus on reinforcement learning (RL) algorithms, whose end-to-end evaluation enables direct comparison between the many possible architectures for dialog systems.
Auxiliary Capsules for Natural Language Understanding
TLDR
We extend Capsule Networks for NLU to a multi-task learning environment with the aid of Named Entity Recognition (NER) and Part-of-Speech (POS) tagging tasks.
Conversation Graph: Data Augmentation, Training, and Evaluation for Non-Deterministic Dialogue Management
TLDR
We propose the Conversation Graph (ConvGraph), a graph-based representation of dialogues that can be exploited for data augmentation, multi-reference training and evaluation of non-deterministic agents.
Compositional and Lexical Semantics in RoBERTa, BERT and DistilBERT: A Case Study on CoQA
TLDR
This paper studies the types of linguistic phenomena accounted for by language models in the context of a Conversational Question Answering (CoQA) task.
Improving Commonsense Causal Reasoning by Adversarial Training and Data Augmentation
TLDR
We perform data augmentation using a discourse parser for detecting causally linked clauses in large text, and a generative language model for generating distractors.
Semantic Representations of Word Senses and Concepts
TLDR
Representing the semantics of linguistic items in a machine-interpretable form has been a major goal of Natural Language Processing since its earliest days.