Publications
Corpus-level Fine-grained Entity Typing Using Contextual Information
TLDR: We address the problem of corpus-level entity typing, i.e., inferring from a large corpus that an entity is a member of a class such as "food" or "artist". A sketch of this corpus-level aggregation follows below.
  • Citations: 60 (7 highly influential)
  • PDF
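To make the setup concrete, here is a minimal sketch of corpus-level aggregation, assuming per-context type probabilities are already available from some context-level classifier; the function name, averaging rule, and threshold are illustrative, not the paper's method.

```python
# Illustrative sketch (assumed pipeline, not the paper's system): corpus-level
# typing by aggregating per-context type probabilities across all mentions
# of an entity, so weak evidence from many contexts can add up.
import numpy as np

def corpus_level_types(context_probs, threshold=0.5):
    """context_probs: (n_mentions, n_types) per-context type probabilities
    for one entity. Returns the set of type ids assigned at corpus level."""
    scores = np.mean(context_probs, axis=0)        # average over the corpus
    return set(np.flatnonzero(scores > threshold))

# e.g. three mentions of one entity, scored over types [food, artist, person]
probs = np.array([[0.9, 0.1, 0.2],
                  [0.7, 0.2, 0.1],
                  [0.8, 0.3, 0.4]])
print(corpus_level_types(probs))                   # {0}: "food"
```

Averaging is the simplest possible aggregator; the point is only that many weak context-level signals are combined into one corpus-level decision per entity.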
Recurrent One-Hop Predictions for Reasoning over Knowledge Graphs
TLDR: We present ROPs, recurrent one-hop predictors, that predict entities at each step of multi-hop KB paths by using recurrent neural networks and vector representations of entities and relations, with two benefits: (i) modeling multi-hop paths of arbitrary lengths while updating the entity and relation representations by the training signal in a unified framework […]. A sketch of such a predictor follows below.
  • Citations: 13 (4 highly influential)
  • PDF
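A minimal sketch of a recurrent one-hop predictor, assuming a GRU whose initial state is the head-entity embedding and whose input at each hop is the relation embedding; the module and variable names are hypothetical, not taken from the paper.

```python
# Illustrative sketch (not the paper's code): walk a multi-hop KB path with
# an RNN and score candidate entities at every hop.
import torch
import torch.nn as nn

class RecurrentOneHopPredictor(nn.Module):
    def __init__(self, n_entities, n_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)   # entity vectors
        self.rel = nn.Embedding(n_relations, dim)  # relation vectors
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, head, relations):
        # head: (B,) entity ids; relations: (B, L) relation ids along the path
        h0 = self.ent(head).unsqueeze(0)            # initial state = head entity
        out, _ = self.rnn(self.rel(relations), h0)  # one hidden state per hop
        return out @ self.ent.weight.T              # (B, L, n_entities) scores

model = RecurrentOneHopPredictor(n_entities=1000, n_relations=50)
scores = model(torch.tensor([3]), torch.tensor([[7, 2, 9]]))  # a 3-hop path
# Training with cross-entropy against the gold entity at each hop updates
# entity and relation embeddings from the same signal (a unified framework).
```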
Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings
TLDR: We present a large dataset based on manual Wikipedia annotations and word senses, where word senses from different words are related by semantic classes. A probing sketch follows below.
  • Citations: 12 (3 highly influential)
  • PDF
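A minimal probing sketch, assuming frozen pretrained embeddings and semantic-class labels; the random arrays stand in for real data, and logistic regression is one common probe choice rather than the paper's exact diagnostic.

```python
# Illustrative sketch (assumed setup, not the paper's code): a diagnostic
# "probe" that tests whether frozen word embeddings encode semantic classes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))     # stand-in for pretrained embeddings
y = rng.integers(0, 5, size=1000)    # stand-in semantic-class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# High held-out accuracy would indicate the classes are linearly
# recoverable from the embeddings; chance level here is 0.2.
print("probe accuracy:", probe.score(X_te, y_te))
```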
Corpus-level Fine-grained Entity Typing
TLDR: This paper addresses the problem of corpus-level entity typing, i.e., inferring from a large corpus that an entity is a member of a class such as "food" or "artist".
  • Citations: 16 (2 highly influential)
  • PDF
Noise Mitigation for Neural Entity Typing and Relation Extraction
TLDR: We introduce multi-instance multi-label learning algorithms using neural network models, and apply them to fine-grained entity typing for the first time. A pooling sketch follows below.
  • Citations: 39 (1 highly influential)
  • PDF
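A minimal sketch of multi-instance multi-label (MIML) typing, assuming precomputed mention encodings; max-pooling over the bag plus a binary cross-entropy loss is one standard MIML instantiation, not necessarily the paper's model.

```python
# Illustrative sketch (assumed architecture): MIML typing, where an entity's
# bag of mention encodings is pooled so that a single clean mention can
# support a type even when most distantly supervised mentions are noisy.
import torch
import torch.nn as nn

class MIMLTyper(nn.Module):
    def __init__(self, enc_dim=256, n_types=100):
        super().__init__()
        self.scorer = nn.Linear(enc_dim, n_types)  # per-mention type scores

    def forward(self, bag):
        # bag: (n_mentions, enc_dim) context encodings for one entity
        logits = self.scorer(bag)
        return logits.max(dim=0).values            # max-pool over the bag

model = MIMLTyper()
bag = torch.randn(12, 256)      # 12 mentions of one entity
target = torch.zeros(100)
target[[3, 7]] = 1.0            # the entity's gold fine-grained types
loss = nn.BCEWithLogitsLoss()(model(bag), target)
```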
Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities
TLDR: We present methods for learning multi-level representations of entities on three complementary levels: character (character patterns in entity names extracted, e.g., by neural networks), word (embeddings of words in the entity name), and entity (entity embeddings). A fusion sketch follows below.
  • Citations: 28 (1 highly influential)
  • PDF
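A minimal fusion sketch, assuming a character CNN over the name, averaged word embeddings of the name, and a dedicated entity embedding; all sizes, names, and the plain concatenation are illustrative assumptions.

```python
# Illustrative sketch (assumed components, not the paper's model): fusing
# character-, word-, and entity-level views of an entity into one vector.
import torch
import torch.nn as nn

class MultiLevelEntityEncoder(nn.Module):
    def __init__(self, n_chars=128, n_words=50_000, n_entities=10_000, dim=100):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, dim)
        self.char_cnn = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.word_emb = nn.Embedding(n_words, dim)    # pretrained in practice
        self.ent_emb = nn.Embedding(n_entities, dim)  # learned from contexts

    def forward(self, char_ids, word_ids, ent_id):
        c = self.char_cnn(self.char_emb(char_ids).transpose(1, 2)).max(dim=2).values
        w = self.word_emb(word_ids).mean(dim=1)       # average name words
        e = self.ent_emb(ent_id)
        return torch.cat([c, w, e], dim=-1)           # complementary levels

enc = MultiLevelEntityEncoder()
rep = enc(torch.randint(0, 128, (2, 16)),     # character ids of the name
          torch.randint(0, 50_000, (2, 4)),   # word ids of the name
          torch.tensor([11, 42]))             # entity ids
```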
Intrinsic Subspace Evaluation of Word Embedding Representations
TLDR: We introduce a new methodology for intrinsic evaluation of word representations.
  • Citations: 26 (1 highly influential)
  • PDF
Robust Natural Language Inference Models with Example Forgetting
TLDR: We investigate whether example forgetting, a recently introduced measure of example hardness, can be used to select training examples in order to increase the robustness of natural language understanding models on a natural language inference task (MNLI). A sketch of the bookkeeping follows below.
  • Citations: 12
  • PDF
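A minimal sketch of counting forgetting events, assuming a logged per-epoch correctness matrix; the selection rule (rank examples by how often they are forgotten) is illustrative, not the paper's exact procedure.

```python
# Illustrative sketch: count how often each training example flips from
# correctly to incorrectly classified across epochs ("forgetting events"),
# then surface the hard, frequently forgotten examples.
import numpy as np

def forgetting_events(acc_history):
    """acc_history: (n_epochs, n_examples) boolean array, True if the example
    was classified correctly at that epoch. Returns per-example counts."""
    h = np.asarray(acc_history, dtype=bool)
    return np.sum(h[:-1] & ~h[1:], axis=0)   # correct -> incorrect transitions

history = np.random.default_rng(0).random((10, 5)) > 0.4  # toy accuracy log
events = forgetting_events(history)
hard_examples = np.argsort(events)[::-1]     # most-forgotten first
```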
Toward Understanding the Effect of Loss Function on the Performance of Knowledge Graph Embedding
TLDR: We show that existing theories about the limitations of TransE are inaccurate because they ignore the effect of the loss function. The standard definitions at issue are sketched below.
  • Citations: 5
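For orientation, the standard TransE score and margin ranking loss (textbook definitions, not equations reproduced from the paper):

```latex
% TransE scores a triple (h, r, t) by translation distance,
%   f(h, r, t) = -\|h + r - t\|,
% and is typically trained with a margin ranking loss over
% corrupted triples (h', r, t'):
\[
  \mathcal{L} \;=\; \sum_{(h,r,t)} \sum_{(h',r,t')}
    \max\!\bigl(0,\; \gamma
      + \lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \rVert
      - \lVert \mathbf{h}' + \mathbf{r} - \mathbf{t}' \rVert \bigr)
\]
```

Whether a claimed expressiveness limit of the translation score actually applies depends on which loss it is paired with, which is the point the TLDR makes.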
ISO-TimeML Event Extraction in Persian Text
TLDR: Recognizing TimeML events and identifying their attributes are important tasks in natural language processing (NLP).
  • Citations: 7
  • PDF