Publications
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
TLDR
This paper shows theoretically that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data, and backs this up empirically by training more than 12,000 models covering the most prominent methods and evaluation metrics on seven datasets.
MLP-Mixer: An all-MLP Architecture for Vision
TLDR
It is shown that while convolutions and attention are both sufficient for good performance, neither is necessary: MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs), attains competitive scores on image classification benchmarks.
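A minimal PyTorch sketch of one Mixer block as the paper describes it: a token-mixing MLP applied across patches, then a channel-mixing MLP, each behind a LayerNorm with a skip connection. The layer widths here are illustrative, not the paper's configurations.

```python
import torch.nn as nn

class MixerBlock(nn.Module):
    """One MLP-Mixer block: token mixing across patches, then channel mixing."""
    def __init__(self, num_patches, hidden_dim, tokens_mlp_dim, channels_mlp_dim):
        super().__init__()
        self.norm1 = nn.LayerNorm(hidden_dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(num_patches, tokens_mlp_dim), nn.GELU(),
            nn.Linear(tokens_mlp_dim, num_patches))
        self.norm2 = nn.LayerNorm(hidden_dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(hidden_dim, channels_mlp_dim), nn.GELU(),
            nn.Linear(channels_mlp_dim, hidden_dim))

    def forward(self, x):                      # x: (batch, patches, channels)
        # Token mixing: transpose so the MLP acts across the patch dimension.
        y = self.norm1(x).transpose(1, 2)      # (batch, channels, patches)
        x = x + self.token_mlp(y).transpose(1, 2)
        # Channel mixing: an ordinary per-patch MLP.
        return x + self.channel_mlp(self.norm2(x))
```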
Are GANs Created Equal? A Large-Scale Study
TLDR
A neutral, multi-faceted, large-scale empirical study of state-of-the-art models and evaluation measures finds that most models can reach similar scores with enough hyperparameter optimization and random restarts, suggesting that improvements often arise from a higher computational budget and tuning rather than from fundamental algorithmic changes.
ViViT: A Video Vision Transformer
TLDR
This work shows how to effectively regularise the model during training and how to leverage pretrained image models, enabling training on comparatively small datasets, and achieves state-of-the-art results on multiple video classification benchmarks.
Assessing Generative Models via Precision and Recall
TLDR
A novel definition of precision and recall for distributions is proposed that disentangles the divergence between them into two separate dimensions; it is intuitive, retains desirable properties, and naturally leads to an efficient algorithm for evaluating generative models.
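For concreteness, a short numpy sketch of the PRD curve from the paper, assuming p and q are histograms of the real and generated samples over a shared clustering (the paper clusters both sample sets jointly with k-means); the size of the angle grid is an implementation choice.

```python
import numpy as np

def prd_curve(p, q, num_angles=1001):
    """Precision-recall pairs between discrete distributions p (real) and
    q (generated): alpha(lam) = sum_i min(lam * p_i, q_i), beta = alpha / lam."""
    slopes = np.tan(np.linspace(1e-6, np.pi / 2 - 1e-6, num_angles))
    precision = np.minimum(slopes[:, None] * p[None, :], q[None, :]).sum(axis=1)
    recall = precision / slopes
    return precision, recall
```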
Self-Supervised GANs via Auxiliary Rotation Loss
TLDR
This work allows the two networks to collaborate on the task of representation learning while remaining adversarial with respect to the classic GAN game, taking a step towards bridging the gap between conditional and unconditional GANs.
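A sketch of the auxiliary rotation task under stated assumptions: the discriminator is given a second, 4-way head (the hypothetical rotation_logits below) that predicts which of four 90-degree rotations was applied, and its cross-entropy is added to the usual adversarial losses.

```python
import torch
import torch.nn.functional as F

def rotated_batch(images):
    """All four 90-degree rotations of a (B, C, H, W) batch, concatenated,
    with matching rotation labels 0..3 for the auxiliary head."""
    rotations = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return rotations, labels

def rotation_loss(rotation_logits, labels, weight=1.0):
    # Self-supervised term added on top of the GAN objective.
    return weight * F.cross_entropy(rotation_logits, labels)
```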
Fast and Provably Good Seedings for k-Means
TLDR
This work proposes a simple yet fast seeding algorithm that produces provably good clusterings even without assumptions on the data, allows a favourable trade-off between solution quality and computational cost, and speeds up k-means++ seeding by up to several orders of magnitude.
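A numpy sketch of the proposed MCMC seeding (AFK-MC² in the paper): instead of the full D² pass per center that k-means++ performs, each new center comes from a short Metropolis-Hastings chain under an assumption-free proposal; chain_length is illustrative.

```python
import numpy as np

def afkmc2_seeding(X, k, chain_length=200, rng=None):
    """Approximate k-means++ seeding via MCMC (sketch of Bachem et al., 2016)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    centers = [X[rng.integers(n)]]
    # Assumption-free proposal: half D^2 w.r.t. the first center, half uniform.
    d2 = ((X - centers[0]) ** 2).sum(axis=1)
    q = 0.5 * d2 / d2.sum() + 0.5 / n
    for _ in range(k - 1):
        x = rng.choice(n, p=q)
        dx = min(((X[x] - c) ** 2).sum() for c in centers)
        for _ in range(chain_length - 1):
            y = rng.choice(n, p=q)
            dy = min(((X[y] - c) ** 2).sum() for c in centers)
            # Metropolis-Hastings acceptance for the D^2 target distribution.
            if dx == 0 or rng.random() < (dy * q[x]) / (dx * q[y]):
                x, dx = y, dy
        centers.append(X[x])
    return np.stack(centers)
```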
A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark
Representation learning promises to unlock deep learning for the long tail of vision tasks without expensive labelled datasets. Yet the absence of a unified evaluation for general visual representations hinders progress; the Visual Task Adaptation Benchmark (VTAB) is introduced to fill this gap.
On Mutual Information Maximization for Representation Learning
TLDR
This paper argues, and provides empirical evidence, that the success of recent representation-learning methods based on maximizing mutual information (MI) cannot be attributed to the properties of MI alone; it depends strongly on the inductive bias in both the choice of feature-extractor architecture and the parametrization of the employed MI estimators.
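As one concrete example of the estimators studied, a PyTorch sketch of the InfoNCE bound: given a (batch × batch) matrix of critic scores f(x_i, y_j) whose diagonal holds the positive pairs, MI is lower-bounded by log(batch size) minus the cross-entropy of identifying the match.

```python
import torch
import torch.nn.functional as F

def infonce_bound(scores):
    """InfoNCE lower bound on I(X; Y) from a (B, B) critic-score matrix."""
    batch = scores.size(0)
    labels = torch.arange(batch, device=scores.device)  # positives on the diagonal
    return torch.log(torch.tensor(float(batch))) - F.cross_entropy(scores, labels)
```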
The GAN Landscape: Losses, Architectures, Regularization, and Normalization
TLDR
This work reproduces the current state of the art in GANs from a practical perspective, fairly explores the GAN landscape across losses, architectures, regularization, and normalization schemes, and discusses common pitfalls and reproducibility issues.
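As an illustration of one regularizer in the comparison, a PyTorch sketch of the gradient penalty, computed at points interpolated between real and fake batches; the weight of 10 is the commonly used default, not a claim about the study's settings.

```python
import torch

def gradient_penalty(discriminator, real, fake, weight=10.0):
    """Penalize deviation of the discriminator's gradient norm from 1
    on interpolates between real and fake (B, C, H, W) samples."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    out = discriminator(interp)
    grads = torch.autograd.grad(out.sum(), interp, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return weight * ((grad_norm - 1.0) ** 2).mean()
```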