Publications
Wasserstein Auto-Encoders
We propose the Wasserstein Auto-Encoder (WAE), a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model…
  • Citations: 331 · Influence: 76
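The penalty the abstract mentions can be illustrated in isolation. In the MMD variant of WAE, the mismatch between the distribution of encoded points and the prior is estimated with a kernel statistic; the sketch below (plain NumPy, RBF kernel, all names hypothetical) shows only that estimator, not the paper's full training loop.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix between two sample sets.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd_penalty(z_encoded, z_prior, sigma=1.0):
    # Biased (V-statistic) estimate of squared MMD between the
    # encoded codes and samples from the prior; always >= 0.
    kxx = rbf_kernel(z_encoded, z_encoded, sigma).mean()
    kyy = rbf_kernel(z_prior, z_prior, sigma).mean()
    kxy = rbf_kernel(z_encoded, z_prior, sigma).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
z_q = rng.normal(size=(256, 2))   # stand-in for encoder outputs
z_p = rng.normal(size=(256, 2))   # samples from the prior
print(mmd_penalty(z_q, z_p))      # near 0 when the two distributions match
```

During training this scalar would be added, with a weight, to the reconstruction loss, pushing the encoder's output distribution toward the prior.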
Combining online and offline knowledge in UCT
The UCT algorithm learns a value function online using sample-based search. The TD(λ) algorithm can learn a value function offline for the on-policy distribution. We consider three approaches for…
  • Citations: 509 · Influence: 63
Are GANs Created Equal? A Large-Scale Study
Generative adversarial networks (GANs) are a powerful subclass of generative models. Despite very rich research activity leading to numerous interesting GAN algorithms, it is still very hard to…
  • Citations: 429 · Influence: 58
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised…
  • Citations: 279 · Influence: 43
Modification of UCT with Patterns in Monte-Carlo Go
The UCB1 algorithm for the multi-armed bandit problem has already been extended to the UCT algorithm (Upper Confidence bounds applied to Trees), which works for minimax tree search. We have developed a Monte-Carlo Go…
  • Citations: 348 · Influence: 29
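The UCB1-to-UCT step described above amounts to applying a bandit selection rule at every node of the search tree. A minimal sketch of that rule (hypothetical dict-based node layout, exploration constant √2):

```python
import math

def ucb1(child_value, child_visits, parent_visits, c=math.sqrt(2)):
    # UCB1 score: mean reward (exploitation) plus an exploration bonus
    # that grows for rarely visited children.
    if child_visits == 0:
        return float("inf")   # unvisited children are tried first
    return (child_value / child_visits
            + c * math.sqrt(math.log(parent_visits) / child_visits))

def select(children, parent_visits):
    # UCT descends the tree by repeatedly picking the child with the
    # highest UCB1 score.
    return max(children, key=lambda ch: ucb1(ch["value"], ch["visits"],
                                             parent_visits))

children = [
    {"move": "A", "value": 6.0, "visits": 10},
    {"move": "B", "value": 3.0, "visits": 4},
    {"move": "C", "value": 0.0, "visits": 0},
]
print(select(children, parent_visits=14)["move"])  # "C": unvisited, so explored first
```

Statistics gathered from each Monte-Carlo playout are then backed up along the selected path, which is what turns the bandit rule into a tree search.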
Assessing Generative Models via Precision and Recall
Recent advances in generative modeling have led to increased interest in the study of statistical divergences as a means of model comparison. Commonly used evaluation methods such as the Fréchet…
  • Citations: 91 · Influence: 23
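The paper's central point is that a single score conflates sample quality with mode coverage, and that a precision/recall pair separates the two. This can be illustrated with a deliberately crude support-based proxy (plain NumPy, toy Gaussians; this is not the paper's actual PRD algorithm, just an intuition sketch):

```python
import numpy as np

def support_coverage(a, b, radius):
    # Fraction of points in `a` that lie within `radius` of some point in `b`.
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return (d.min(axis=1) <= radius).mean()

rng = np.random.default_rng(1)
real = rng.normal(size=(500, 2))
fake = rng.normal(size=(500, 2)) * 0.3   # sharp samples, but only near one mode

precision = support_coverage(fake, real, radius=0.5)  # do samples look real?
recall = support_coverage(real, fake, radius=0.5)     # are all modes covered?
print(precision, recall)  # high precision, noticeably lower recall
```

A generator that collapses onto part of the data distribution scores high on this precision proxy and low on recall, which is exactly the failure mode a single divergence number hides.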
Monte-Carlo tree search and rapid action value estimation in computer Go
A new paradigm for search, based on Monte-Carlo simulation, has revolutionised the performance of computer Go programs. In this article we describe two extensions to the Monte-Carlo tree search…
  • Citations: 264 · Influence: 19
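One of the two extensions named in the title, rapid action value estimation (RAVE), blends a node's direct Monte-Carlo value with an all-moves-as-first (AMAF) value: the AMAF statistic is trusted early, and the direct statistic takes over as real simulations accumulate. A sketch with a simple hand-tuned decay schedule (the schedule and the constant `k` are assumptions, not the article's exact formula):

```python
def rave_value(q_mc, n_mc, q_amaf, k=1000.0):
    # Weighted blend of the direct Monte-Carlo value and the AMAF value.
    # beta starts at 1 (trust AMAF) and decays toward 0 with real visits.
    beta = k / (k + 3.0 * n_mc)
    return (1.0 - beta) * q_mc + beta * q_amaf

print(rave_value(q_mc=0.4, n_mc=0, q_amaf=0.8))      # 0.8: pure AMAF before any visits
print(rave_value(q_mc=0.4, n_mc=10**6, q_amaf=0.8))  # ~0.4: AMAF influence has decayed
```

The blended value replaces the plain mean in the tree policy, which sharply reduces the number of simulations needed before a move's estimate becomes informative.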
Parameter-Efficient Transfer Learning for NLP
Fine-tuning large pre-trained models is an effective transfer mechanism in NLP. However, in the presence of many downstream tasks, fine-tuning is parameter-inefficient: an entire new model is…
  • Citations: 83 · Influence: 18
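The parameter-efficient alternative this line of work proposes is to insert small bottleneck "adapter" layers into a frozen pre-trained network, so each task trains only a few new weights. A minimal NumPy sketch of one such layer (the shapes and the zero-initialised up-projection are illustrative assumptions):

```python
import numpy as np

def adapter(h, w_down, w_up):
    # Bottleneck adapter: project down, ReLU, project up, residual add.
    # Only w_down and w_up are trained per task; the backbone stays frozen.
    z = np.maximum(h @ w_down, 0.0)
    return h + z @ w_up

rng = np.random.default_rng(0)
d_model, d_bottleneck = 16, 2                       # tiny bottleneck => few new parameters
h = rng.normal(size=(4, d_model))                   # stand-in hidden states
w_down = rng.normal(size=(d_model, d_bottleneck)) * 0.01
w_up = np.zeros((d_bottleneck, d_model))            # zero init: adapter starts as identity
out = adapter(h, w_down, w_up)                      # equals h before any training
```

Starting as the identity means inserting the adapters leaves the pre-trained model's behaviour unchanged until task training begins.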
AdaGAN: Boosting Generative Models
Generative Adversarial Networks (GANs) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from…
  • Citations: 135 · Influence: 17
The grand challenge of computer Go: Monte Carlo tree search and extensions
The ancient oriental game of Go has long been considered a grand challenge for artificial intelligence. For decades, computer Go has defied the classical methods in game tree search that worked so…
  • Citations: 181 · Influence: 16