Publications
Translating Embeddings for Modeling Multi-relational Data
TLDR: We propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities.
  • 2,496 citations (888 highly influential)
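The translation idea behind TransE can be sketched in a few lines of NumPy. The entities, relation, and dimension below are illustrative placeholders, and the random vectors stand in for embeddings that the paper learns with a margin-based ranking loss:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # embedding dimension (illustrative)

# Hypothetical embeddings; in TransE these are learned, not random.
entity = {name: rng.normal(size=dim) for name in ("Paris", "France", "Tokyo")}
relation = {"capital_of": rng.normal(size=dim)}

def transe_score(h, r, t):
    """Dissimilarity d(h + r, t): lower means the triple (h, r, t)
    is considered more plausible."""
    return np.linalg.norm(entity[h] + relation[r] - entity[t])

# After training, a true triple should score lower than a corrupted one;
# with random vectors the value is only illustrative.
print(transe_score("Paris", "capital_of", "France"))
```

Training then pushes `h + r` toward `t` for observed triples and away for corrupted ones.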
Natural Language Processing (Almost) from Scratch
TLDR: We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling.
  • 5,786 citations (630 highly influential)
Learning with Local and Global Consistency
TLDR: A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points.
  • 3,540 citations (544 highly influential)
Gene Selection for Cancer Classification using Support Vector Machines
TLDR: We propose a new method of gene selection utilizing Support Vector Machine methods based on Recursive Feature Elimination (RFE) and demonstrate that the genes selected by our techniques yield better classification performance and are biologically relevant to cancer.
  • 4,763 citations (509 highly influential)
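The RFE loop itself is short: repeatedly fit a linear model and discard the feature with the smallest absolute weight. A dependency-free sketch on synthetic data, with a least-squares fit standing in for the paper's linear SVM (an assumption made only to keep the example self-contained):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
# Synthetic labels driven only by features 0 and 1 (an illustrative
# stand-in for gene-expression data).
y = np.sign(2 * X[:, 0] - 3 * X[:, 1] + 0.1 * rng.normal(size=n))

def rfe(X, y, n_keep):
    """Recursive Feature Elimination: fit, then drop the feature with
    the smallest |weight|, until n_keep features remain. The paper fits
    a linear SVM at each step; least squares stands in here."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        active.pop(int(np.argmin(np.abs(w))))
    return active

print(rfe(X, y, 2))  # the two informative features should survive
```

Eliminating one feature at a time (rather than ranking once) matters because weights are re-estimated after each removal, capturing interactions among the surviving features.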
A unified architecture for natural language processing: deep neural networks with multitask learning
TLDR: We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model.
  • 4,228 citations (236 highly influential)
A Neural Attention Model for Abstractive Sentence Summarization
TLDR: In this work, we propose a fully data-driven approach to abstractive sentence summarization.
  • 1,649 citations (212 highly influential)
End-To-End Memory Networks
TLDR: We introduce a neural network with a recurrent attention model over a possibly large external memory.
  • 1,621 citations (203 highly influential)
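The core read operation can be sketched compactly: each hop matches the controller state against input memories, reads a weighted sum of output memories, and updates the state. The sizes and random vectors below are illustrative, not from the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def memory_hop(query, memory_in, memory_out):
    """One attention hop: match the query against the input-side
    memories, then read a weighted sum of the output-side memories."""
    p = softmax(memory_in @ query)   # attention weights over memory slots
    o = p @ memory_out               # read vector
    return query + o                 # updated controller state

rng = np.random.default_rng(0)
d, slots = 8, 5                      # illustrative sizes
q = rng.normal(size=d)
m_in = rng.normal(size=(slots, d))   # embedded memory, input side
m_out = rng.normal(size=(slots, d))  # embedded memory, output side

# Stacking hops is what makes the attention "recurrent" over the memory.
u = q
for _ in range(3):
    u = memory_hop(u, m_in, m_out)
print(u.shape)  # (8,)
```

Because the attention is soft (a weighted sum rather than a hard lookup), the whole pipeline trains end-to-end by backpropagation.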
Reading Wikipedia to Answer Open-Domain Questions
TLDR: This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.
  • 776 citations (200 highly influential)
Curriculum learning
TLDR: We formalize such training strategies in the context of machine learning, and call them "curriculum learning".
  • 2,201 citations (189 highly influential)
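The strategy amounts to ordering training data easy-to-hard and widening the pool of admitted examples over time. A minimal sketch; the staged schedule and the length-as-difficulty proxy are assumptions for illustration (the paper explores several criteria):

```python
import random

def curriculum_batches(examples, difficulty, n_stages=3, batch_size=2):
    """Yield minibatches stage by stage: start from the easiest slice of
    the data and progressively admit harder examples. `difficulty` is a
    user-supplied scoring function (hypothetical interface)."""
    ranked = sorted(examples, key=difficulty)
    for stage in range(1, n_stages + 1):
        pool = ranked[: len(ranked) * stage // n_stages]  # widen the pool
        random.shuffle(pool)  # shuffle within the current stage
        for i in range(0, len(pool), batch_size):
            yield pool[i : i + batch_size]

# Toy corpus: sentences, with length as a proxy for difficulty.
corpus = ["a b", "a b c d", "a", "a b c", "a b c d e", "a b c d e f"]
for batch in curriculum_batches(corpus, difficulty=len):
    print(batch)
```

Early batches therefore contain only the shortest sentences, and the hardest examples appear only in the final stage.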
A kernel method for multi-labelled classification
TLDR: This article presents a Support Vector Machine-like learning system to handle multi-label problems.
  • 1,138 citations (143 highly influential)