Teaching Machines to Read and Comprehend
A new methodology is defined that resolves the data bottleneck, providing large-scale supervised reading comprehension data and enabling the development of a class of attention-based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.
A Convolutional Neural Network for Modelling Sentences
A convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) is described and adopted for the semantic modelling of sentences; it induces a feature graph over the sentence that explicitly captures short- and long-range relations.
Hybrid computing using a neural network with dynamic external memory
A machine learning model called a differentiable neural computer (DNC) is presented, consisting of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer.
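The core mechanism the summary describes, differentiable reads and writes against an external memory matrix, can be sketched with content-based addressing. This is a minimal sketch, not the full DNC (which adds temporal links, usage-based allocation, and multiple read heads); the function names and the simplified erase/add update are assumptions for illustration.

```python
import numpy as np

def cosine_weights(memory, key, beta=1.0):
    # Content-based addressing (simplified): softmax over the scaled
    # cosine similarity between a query key and each memory row.
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    )
    e = np.exp(beta * sims - np.max(beta * sims))
    return e / e.sum()

def read(memory, w):
    # Read vector: attention-weighted sum of memory rows.
    return w @ memory

def write(memory, w, erase, add):
    # Erase then add, gated per-row by the write weighting w;
    # every operation is differentiable, so gradients flow through memory.
    memory = memory * (1.0 - np.outer(w, erase))
    return memory + np.outer(w, add)
```

With a sharp (near one-hot) weighting, a write followed by a read with the same weighting recovers the stored vector, which is what lets the controller network use the matrix like addressable RAM.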
Recurrent Continuous Translation Models
We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences…
Reasoning about Entailment with Neural Attention
- Tim Rocktäschel, Edward Grefenstette, K. Hermann, Tomás Kociský, P. Blunsom
- Computer Science, ICLR
- 22 September 2015
This paper proposes a neural model that reads two sentences with long short-term memory units to determine entailment, extends it with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases, and presents a qualitative analysis of the attention weights the model produces.
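The word-by-word attention mentioned above can be sketched as follows: for each hypothesis token state, compute a weighting over all premise token states and form a weighted premise summary. This is a simplified sketch; dot-product scoring is an assumption here (the paper uses an additive, MLP-style scorer), and the function name is hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a vector of scores.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def word_by_word_attention(premise_states, hypothesis_states):
    # For each hypothesis token state h_t, attend over every premise
    # token state and return the attention-weighted premise context.
    # premise_states: (L_p, d), hypothesis_states: (L_h, d).
    contexts = []
    for h_t in hypothesis_states:
        alpha = softmax(premise_states @ h_t)    # weights over premise tokens
        contexts.append(alpha @ premise_states)  # weighted premise summary
    return np.stack(contexts)                    # (L_h, d)
```

Inspecting the `alpha` vectors per hypothesis word is exactly the kind of qualitative attention-weight analysis the summary refers to.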
Neural Variational Inference for Text Processing
This paper introduces a generic variational inference framework for generative and conditional models of text, and constructs an inference network conditioned on the discrete text input to provide the variational distribution.
The NarrativeQA Reading Comprehension Challenge
A new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts are presented, designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience.
e-SNLI: Natural Language Inference with Natural Language Explanations
- Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, P. Blunsom
- Computer Science, NeurIPS
- 4 December 2018
The Stanford Natural Language Inference dataset is extended with an additional layer of human-annotated natural language explanations of the entailment relations, which can be used for various goals, such as obtaining full sentence justifications of a model’s decisions, improving universal sentence representations and transferring to out-of-domain NLI datasets.
Discovering Discrete Latent Topics with Neural Variational Inference
This paper presents alternative neural approaches to topic modelling, providing parameterisable distributions over topics that permit training by backpropagation within the framework of neural variational inference. It further proposes a recurrent network able to discover a notionally unbounded number of topics, analogous to Bayesian non-parametric topic models.
Deep Learning for Answer Sentence Selection
This work proposes a novel approach to the answer sentence selection task based on distributed representations, learning to match questions with answers by comparing their semantic encodings.