Teaching Machines to Read and Comprehend
A new methodology is defined that resolves the bottleneck of scarce large-scale supervised reading comprehension data, enabling the development of attention-based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.
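The data behind this methodology pairs news articles with cloze-style questions derived from their bullet-point summaries. A minimal sketch of that construction, with an illustrative sentence, entity list, and helper name (none taken from the paper's actual pipeline):

```python
def make_cloze(sentence, entities):
    """Toy cloze construction: one entity mention in a summary sentence
    becomes @placeholder (the answer), and all entity mentions are
    anonymised so models cannot rely on knowledge of the names themselves."""
    for i, entity in enumerate(entities):
        if entity in sentence:
            question = sentence.replace(entity, "@placeholder", 1)
            for j, other in enumerate(entities):
                question = question.replace(other, f"@entity{j}")
            return question, f"@entity{i}"
    return None, None

question, answer = make_cloze(
    "producer X will not press charges against actor Y",
    ["producer X", "actor Y"],
)
print(question, "->", answer)
# @placeholder will not press charges against @entity1 -> @entity0
```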
Reasoning about Entailment with Neural Attention
- Tim Rocktäschel, Edward Grefenstette, K. Hermann, Tomás Kociský, P. Blunsom
- Computer Science · ICLR
- 22 September 2015
This paper proposes a neural model that reads two sentences with long short-term memory units to determine entailment, and extends it with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases. A qualitative analysis of the attention weights produced by the model is also presented.
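A rough sketch of one word-by-word attention step over the premise, with illustrative dimensions and random weights standing in for learned parameters (the recurrent attention-memory term of the full model is omitted):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

d, L = 4, 5                        # hidden size, premise length (illustrative)
rng = np.random.default_rng(0)
Y = rng.normal(size=(d, L))        # premise LSTM outputs, one column per word
h_t = rng.normal(size=d)           # hypothesis LSTM state at step t
W_y, W_h = rng.normal(size=(d, d)), rng.normal(size=(d, d))
w = rng.normal(size=d)

M = np.tanh(W_y @ Y + (W_h @ h_t)[:, None])  # d x L comparison matrix
alpha = softmax(w @ M)                        # attention over premise words
r_t = Y @ alpha                               # attention-weighted premise summary
print(alpha.round(2))
```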
The NarrativeQA Reading Comprehension Challenge
A new dataset and set of tasks are presented in which the reader must answer questions about stories by reading entire books or movie scripts. The tasks are designed so that successfully answering the questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience.
Latent Predictor Networks for Code Generation
A novel neural network architecture is presented that generates an output sequence conditioned on an arbitrary number of input functions, and that allows both the choice of conditioning context and the granularity of generation (for example, characters or tokens) to be marginalised, thus permitting scalable and effective training.
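The marginalisation reduces to a sum over which latent predictor emitted each output token. A toy illustration with made-up probabilities:

```python
# Toy marginalisation over latent predictors for one output token: the
# model sums over which predictor (e.g. a character-level generator vs.
# a pointer that copies from the input) produced it. All numbers below
# are illustrative, not from the paper.
p_predictor = {"char_lm": 0.3, "copy": 0.7}
p_token_given_predictor = {"char_lm": 0.01, "copy": 0.25}
p_token = sum(p_predictor[k] * p_token_given_predictor[k] for k in p_predictor)
print(p_token)  # 0.3*0.01 + 0.7*0.25 = 0.178
```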
Optimizing Performance of Recurrent Neural Networks on GPUs
It is demonstrated that by exposing parallelism between operations within the network, an order of magnitude speedup across a range of network sizes can be achieved over a naive implementation.
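One example of such exposed parallelism is fusing the per-gate matrix multiplies of an LSTM step into a single larger GEMM. A numpy sketch of the idea (shapes illustrative; a real implementation would fuse batched GPU kernels):

```python
import numpy as np

d = 8
rng = np.random.default_rng(0)
x = rng.normal(size=d)
W_i, W_f, W_o, W_c = (rng.normal(size=(d, d)) for _ in range(4))

# naive: four small matrix-vector products, launched independently
gates_naive = [W @ x for W in (W_i, W_f, W_o, W_c)]

# fused: one (4d x d) matrix-vector product, then split into gates
W_fused = np.vstack([W_i, W_f, W_o, W_c])
gates_fused = np.split(W_fused @ x, 4)

assert all(np.allclose(a, b) for a, b in zip(gates_naive, gates_fused))
```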
Learning and Evaluating General Linguistic Intelligence
This work analyzes state-of-the-art natural language understanding models through an extensive empirical investigation that evaluates them against criteria for general linguistic intelligence. It also proposes a new evaluation metric, based on an online encoding of the test data, that quantifies how quickly an existing agent (model) learns a new task.
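A sketch of that online-encoding metric, with a toy stand-in learner (`ToyModel` is invented here for illustration): each example is coded under the model before the model updates on it, so quick adaptation yields a short total codelength.

```python
import math

class ToyModel:
    """Stand-in learner: a smoothed label-frequency estimator."""
    def __init__(self, labels):
        self.counts = {y: 1 for y in labels}
    def prob(self, y, x):
        return self.counts[y] / sum(self.counts.values())
    def update(self, x, y):
        self.counts[y] += 1

def online_codelength(examples, model):
    # Score each example *before* updating on it; the total bits reward
    # models that adapt to the new task quickly.
    bits = 0.0
    for x, y in examples:
        bits += -math.log2(model.prob(y, x))
        model.update(x, y)
    return bits

data = [("a", 0), ("b", 0), ("c", 1), ("d", 0)]
print(online_codelength(data, ToyModel(labels=[0, 1])))
```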
Semantic Parsing with Semi-Supervised Sequential Autoencoders
This work presents a novel semi-supervised approach for sequence transduction and applies it to semantic parsing tasks, focusing on domains with limited access to labelled training data and extending those datasets with synthetically generated logical forms.
Mogrifier LSTM
This work proposes an extension to the venerable Long Short-Term Memory in the form of mutual gating of the current input and the previous output, which affords the modelling of a richer space of interactions between inputs and their context.
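A minimal numpy sketch of the mutual gating, assuming shared gating matrices across rounds (the model itself uses separate, possibly low-rank, matrices per round):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mogrify(x, h, Q, R, rounds=5):
    """Input x and previous state h repeatedly gate each other before
    the ordinary LSTM update is applied."""
    for i in range(1, rounds + 1):
        if i % 2:                       # odd rounds: h gates x
            x = 2 * sigmoid(Q @ h) * x
        else:                           # even rounds: x gates h
            h = 2 * sigmoid(R @ x) * h
    return x, h

d = 4
rng = np.random.default_rng(0)
x, h = rng.normal(size=d), rng.normal(size=d)
Q, R = rng.normal(size=(d, d)), rng.normal(size=(d, d))
print(mogrify(x, h, Q, R))
```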
The Neural Noisy Channel
- Lei Yu, P. Blunsom, Chris Dyer, Edward Grefenstette, Tomás Kociský
- Computer Science · ICLR
- 4 November 2016
Experimental results on abstractive sentence summarisation, morphological inflection, and machine translation show that noisy channel models outperform direct models, and that they significantly benefit from increased amounts of unpaired output data that direct models cannot easily use.
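In essence, candidates are reranked by Bayes' rule, log p(x|y) + log p(y), so that abundant unpaired output data can strengthen the language-model term. A toy sketch with stand-in scoring functions (not the paper's trained models):

```python
def channel_logprob(x, y):      # stand-in for a learned channel model p(x|y)
    return -abs(len(x.split()) - 2 * len(y.split()))

def lm_logprob(y):              # stand-in for a learned language model p(y)
    return -0.5 * len(y.split())

def noisy_channel_score(x, y, lam=1.0):
    # log p(x|y) + lambda * log p(y)
    return channel_logprob(x, y) + lam * lm_logprob(y)

source = "the quick brown fox jumps over the lazy dog"
candidates = ["fox jumps over dog", "the dog"]
print(max(candidates, key=lambda y: noisy_channel_score(source, y)))
```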
Pitfalls of Static Language Modelling
It is argued that now is the right time to rethink the static language modelling evaluation protocol and to develop adaptive language models that can remain up to date with an ever-changing, non-stationary world.