Exploration by Random Network Distillation

- Yuri Burda, H. Edwards, A. Storkey, O. Klimov
- Computer Science, Mathematics
- ICLR
- 30 October 2018

We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network…
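The truncated abstract describes the core of Random Network Distillation: the intrinsic reward is the error of a trained predictor network on the output of a fixed, randomly initialised target network, so the bonus shrinks on states the agent has seen often. A minimal NumPy sketch of that idea (the toy dimensions and single-layer "networks" are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, randomly initialised target network: observation (dim 8) -> embedding (dim 4).
W_target = rng.normal(size=(8, 4))

# Predictor network of the same shape, trained to match the target on visited states.
W_pred = np.zeros((8, 4))

def target(obs):
    return np.tanh(obs @ W_target)

def predict(obs):
    return np.tanh(obs @ W_pred)

def bonus(obs):
    # Intrinsic reward: mean squared prediction error against the fixed target.
    return float(np.mean((predict(obs) - target(obs)) ** 2))

def train_predictor(obs, lr=0.1, steps=300):
    """Plain gradient descent on the distillation loss."""
    global W_pred
    for _ in range(steps):
        h = np.tanh(obs @ W_pred)
        err = h - target(obs)                      # shape (n, 4)
        grad = obs.T @ (err * (1 - h ** 2)) / len(obs)
        W_pred -= lr * grad

states = rng.normal(size=(32, 8))                  # states the agent has visited
b_before = bonus(states)
train_predictor(states)
b_after = bonus(states)
# The bonus on these states shrinks as they become familiar,
# so novel states (large predictor error) earn a larger intrinsic reward.
```

In the paper the bonus is combined with the extrinsic reward; the sketch only shows why prediction error works as a novelty signal.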

Large-Scale Study of Curiosity-Driven Learning

- Yuri Burda, H. Edwards, Deepak Pathak, A. Storkey, Trevor Darrell, Alexei A. Efros
- Mathematics, Computer Science
- ICLR
- 13 August 2018

Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not…

Data Augmentation Generative Adversarial Networks

- A. Antoniou, A. Storkey, H. Edwards
- Computer Science, Mathematics
- ICLR
- 12 November 2017

Effective training of neural networks requires much data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data Augmentation…

Censoring Representations with an Adversary

- H. Edwards, A. Storkey
- Computer Science, Mathematics
- ICLR
- 18 November 2015

In practice, there are often explicit constraints on what representations or decisions are acceptable in an application of machine learning. For example it may be a legal requirement that a decision…

Three Factors Influencing Minima in SGD

- Stanislaw Jastrzebski, Zachary Kenton, +4 authors A. Storkey
- Computer Science, Mathematics
- ArXiv
- 13 November 2017

We study the statistical properties of the endpoint of stochastic gradient descent (SGD). We approximate SGD as a stochastic differential equation (SDE) and consider its Boltzmann Gibbs equilibrium…
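The SDE view in the abstract can be written out in the standard form; the following is a paraphrase of the usual setup, not a quote from the paper, with assumed notation: loss L, learning rate η, batch size S, gradient-noise covariance C.

```latex
% SGD with minibatch gradient noise, approximated as a continuous-time SDE:
d\theta = -\nabla L(\theta)\, dt + \sqrt{\frac{\eta}{S}}\; C(\theta)^{1/2}\, dW_t .
% Under an (approximately) isotropic covariance C, the Boltzmann-Gibbs
% stationary distribution of this process is
P(\theta) \;\propto\; \exp\!\left(-\frac{L(\theta)}{T}\right),
\qquad T \;\propto\; \frac{\eta}{S},
```

which makes the paper's three factors explicit: the learning rate, the batch size (through the ratio η/S acting as a temperature), and the structure of the gradient covariance.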

Towards a Neural Statistician

- H. Edwards, A. Storkey
- Mathematics, Computer Science
- ICLR
- 7 June 2016

An efficient learner is one who reuses what they already know to tackle a new problem. For a machine learner, this means understanding the similarities amongst datasets. In order to do this, one must…

How to train your MAML

- A. Antoniou, H. Edwards, A. Storkey
- Computer Science, Mathematics
- ICLR
- 22 October 2018

The field of few-shot learning has recently seen substantial advancements. Most of these advancements came from casting few-shot learning as a meta-learning problem. Model Agnostic Meta Learning or…

Probabilistic inference for solving discrete and continuous state Markov Decision Processes

- Marc Toussaint, A. Storkey
- Computer Science
- ICML '06
- 25 June 2006

Inference in Markov Decision Processes has recently received interest as a means to infer goals of an observed action, policy recognition, and also as a tool to compute policies. A particularly…

The 2005 PASCAL Visual Object Classes Challenge

- M. Everingham, Andrew Zisserman, +31 authors J. Zhang
- Computer Science
- MLCW
- 11 April 2005

The PASCAL Visual Object Classes Challenge ran from February to March 2005. The goal of the challenge was to recognize objects from a number of visual object classes in realistic scenes (i.e. not…

Probabilistic inference for solving (PO)MDPs

- Marc Toussaint, S. Harmeling, A. Storkey
- Computer Science
- 1 December 2006

The development of probabilistic inference techniques has made considerable progress in recent years, in particular with respect to exploiting the structure (e.g., factored, hierarchical or…