Publications
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
TLDR
This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions, and introduces the "Fréchet Inception Distance" (FID), which captures the similarity of generated images to real ones better than the Inception Score.
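A minimal sketch of the FID computation, assuming real and generated images have already been mapped to Inception feature vectors; the function name and the use of scipy.linalg.sqrtm are illustrative choices, not the authors' reference implementation.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(feats_real, feats_gen):
    """Fréchet distance between Gaussians fitted to Inception features.

    feats_real, feats_gen: (n_samples, feat_dim) arrays of Inception
    activations for real and generated images, respectively.
    """
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    # Matrix square root of the covariance product; numerical error can
    # introduce a tiny imaginary component, which is discarded.
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_g
    return diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean)
```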
Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields
TLDR
Coulomb GANs are introduced, which pose the GAN learning problem as a potential field of charged particles, where generated samples are attracted to training set samples but repel each other, and it is proved that Coulomb GANs possess only one Nash equilibrium, which is optimal in the sense that the model distribution equals the target distribution.
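A sketch of the potential-field idea, assuming equal unit charges and a smoothed inverse-distance ("Plummer"-style) kernel; the kernel form, exponent, and function names here are illustrative stand-ins for the paper's exact construction.

```python
import torch

def coulomb_potential(x, real, fake, eps=1.0, m=3.0):
    """Potential felt at points x: attraction toward real samples and
    repulsion from generated samples.

    x:    (n, dim) evaluation points
    real: (n_real, dim) training samples (positive charges)
    fake: (n_fake, dim) generated samples (negative charges)
    """
    def kernel(a, b):
        # Pairwise smoothed inverse-distance kernel, shape (len(a), len(b)).
        d2 = torch.cdist(a, b).pow(2)
        return (d2 + eps ** 2).pow(-m / 2)

    return kernel(x, real).mean(dim=1) - kernel(x, fake).mean(dim=1)
```

In the paper's setup, the discriminator learns to approximate this field and the generator moves its samples along it, which is what yields the single optimal Nash equilibrium.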
Hopfield Networks is All You Need
TLDR
A new PyTorch layer is provided, called "Hopfield", which allows deep learning architectures to be equipped with modern Hopfield networks, a powerful new concept comprising pooling, memory, and attention.
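A minimal sketch of the retrieval update underlying such a layer: the state (query) patterns are updated toward the stored patterns via the rule ξ ← softmax(β ξXᵀ) X, which the paper shows is equivalent to transformer attention. This illustrates the update rule itself, not the package's actual Hopfield API.

```python
import torch

def hopfield_retrieve(state, stored, beta=1.0, steps=1):
    """Update steps of a modern (continuous) Hopfield network.

    state:  (n_queries, dim) state/query patterns
    stored: (n_memories, dim) stored patterns
    Each step computes state <- softmax(beta * state @ stored.T) @ stored.
    """
    for _ in range(steps):
        attn = torch.softmax(beta * state @ stored.T, dim=-1)
        state = attn @ stored
    return state

# Usage: retrieve a stored pattern from a noisy query; with a large
# beta, a single step typically converges onto the nearest memory.
stored = torch.randn(16, 64)                     # 16 memories, dim 64
query = (stored[3] + 0.1 * torch.randn(64)).unsqueeze(0)
retrieved = hopfield_retrieve(query, stored, beta=8.0)
```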
Modern Hopfield Networks and Attention for Immune Repertoire Classification
TLDR
This work presents a novel method, DeepRC, that integrates transformer-like attention, or equivalently modern Hopfield networks, into deep learning architectures for massive multiple instance learning (MIL) tasks such as immune repertoire classification, and demonstrates that DeepRC outperforms all other methods with respect to predictive performance in large-scale experiments.
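A sketch of the attention pooling at the core of such MIL architectures, assuming per-instance embeddings (e.g. one per receptor sequence in a repertoire) have already been computed; the layer shape and names are illustrative, not DeepRC's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Pool a variable-size bag of instance embeddings into a single
    bag-level representation using learned attention weights (the
    Hopfield view: a learned query retrieves from the bag's instances)."""

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scalar relevance score per instance

    def forward(self, instances):
        # instances: (bag_size, dim) embeddings of one repertoire
        weights = torch.softmax(self.score(instances), dim=0)  # (bag_size, 1)
        return (weights * instances).sum(dim=0)                # (dim,)
```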
A GAN based solver of black-box inverse problems
We propose a GAN-based approach to solve inverse problems which have non-differentiable or even black-box forward relations. The idea is to find solutions via an adversarial game where the generator…
Two Time-Scale Update Rule for Generative Adversarial Nets
TLDR
A two time-scale update rule (TTUR), which uses different learning rates for the discriminator and the generator, is proposed for training GANs; it outperforms conventional GAN training in both learning time and performance.
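Applying the rule amounts to giving the two players separate optimizers with separate step sizes, as in this sketch; the toy networks and the specific rates (a faster discriminator, a slower generator) are illustrative choices, not the paper's prescribed values.

```python
import torch
import torch.nn as nn

# Tiny stand-in networks; any generator/discriminator pair works.
generator = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
discriminator = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

# The two time-scales: each network gets its own learning rate, so the
# discriminator and generator updates evolve on different scales.
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=4e-4, betas=(0.5, 0.999))
```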
CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP
TLDR
CLOOB consistently outperforms CLIP at zero-shot transfer learning across all considered architectures and datasets; the two models are trained on the Conceptual Captions and the YFCC datasets and compared with respect to their zero-shot transfer learning performance on other datasets.
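A sketch of the InfoLOOB contrastive objective that CLOOB pairs with Hopfield retrieval, assuming L2-normalized image and text embeddings where row i matches row i; shown for one direction only (image to text), and the temperature value is an illustrative assumption. Unlike InfoNCE, the positive pair is excluded from the denominator.

```python
import torch

def info_loob(img, txt, tau=0.07):
    """InfoLOOB loss for a batch of matched (image, text) embeddings.

    img, txt: (batch, dim) L2-normalized embeddings; row i of img
    matches row i of txt. The matching pair is left out of the
    denominator ("leave one out bound"), unlike in InfoNCE.
    """
    logits = img @ txt.T / tau        # (batch, batch) pairwise similarities
    pos = logits.diagonal()           # similarities of the matched pairs
    # Mask the diagonal so positives never enter the denominator.
    mask = torch.eye(len(img), dtype=torch.bool, device=img.device)
    neg = logits.masked_fill(mask, float("-inf"))
    return (torch.logsumexp(neg, dim=1) - pos).mean()
```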
About gradient based importance weighting in feed-forward artificial neural networks
Training artificial neural networks is hard. To achieve high predictive capabilities on previously unseen data, artificial neural networks need a large number of samples to train on. And it gets even…