Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
- Francesco Locatello, Stefan Bauer, Mario Lucic, S. Gelly, B. Schölkopf, Olivier Bachem
- Computer Science, ICML
- 29 November 2018
This paper theoretically shows that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data, and trains more than 12,000 models covering the most prominent methods and evaluation metrics on seven different data sets.
MLP-Mixer: An all-MLP Architecture for Vision
It is shown that while convolutions and attention are both sufficient for good performance, neither is necessary, and that MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs), attains competitive scores on image classification benchmarks.
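The core idea is a block that alternates two MLPs: one mixing information *across tokens* (patches) and one *across channels*, each with a skip connection. Below is a minimal numpy sketch of one such block; the shapes, weight names, and the use of ReLU in place of GELU are illustrative assumptions, not the paper's exact implementation (which also includes learned layer-norm parameters and patch embedding).

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each token (row) across its channels.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def mlp(x, w1, w2):
    # Two-layer perceptron; ReLU stands in for the paper's GELU.
    return np.maximum(x @ w1, 0.0) @ w2

def mixer_block(x, tok_w1, tok_w2, ch_w1, ch_w2):
    """x: (tokens, channels). Hypothetical weight shapes:
    tok_w1 (S, Ds), tok_w2 (Ds, S), ch_w1 (C, Dc), ch_w2 (Dc, C)."""
    # Token mixing: transpose so the MLP acts along the token axis.
    y = x + mlp(layer_norm(x).T, tok_w1, tok_w2).T
    # Channel mixing: the MLP acts along the channel axis.
    return y + mlp(layer_norm(y), ch_w1, ch_w2)
```

Stacking such blocks over a sequence of linearly embedded image patches, followed by global average pooling and a classifier head, gives the full architecture.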
Are GANs Created Equal? A Large-Scale Study
- Mario Lucic, Karol Kurach, Marcin Michalski, S. Gelly, O. Bousquet
- Computer Science, NeurIPS
- 28 November 2017
A neutral, multi-faceted large-scale empirical study of state-of-the-art models and evaluation measures finds that most models can reach similar scores given enough hyperparameter optimization and random restarts, suggesting that reported improvements may stem from a higher computational budget and more tuning rather than from fundamental algorithmic changes.
ViViT: A Video Vision Transformer
- Anurag Arnab, M. Dehghani, G. Heigold, Chen Sun, Mario Lucic, C. Schmid
- Computer Science, IEEE/CVF International Conference on Computer…
- 29 March 2021
This work shows how to effectively regularise the model during training and how to leverage pretrained image models to train on comparatively small datasets, achieving state-of-the-art results on multiple video classification benchmarks.
Assessing Generative Models via Precision and Recall
- Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, O. Bousquet, S. Gelly
- Computer Science, NeurIPS
- 31 May 2018
A novel definition of precision and recall for distributions is proposed that disentangles the divergence into two separate dimensions; it is intuitive, retains desirable properties, and leads naturally to an efficient algorithm for evaluating generative models.
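For two discrete distributions (e.g. histograms over a shared binning of feature space), the precision-recall trade-off can be traced out by sweeping a ratio parameter λ: precision at λ is Σᵢ min(λ·pᵢ, qᵢ) and recall is that value divided by λ. The sketch below is a simplified illustration under that discrete-histogram assumption; the paper's full procedure for evaluating generative models involves embedding and clustering samples first, which is omitted here.

```python
import numpy as np

def prd_curve(p, q, num_angles=1001):
    """p: reference (real) histogram, q: model histogram, same support.
    Returns arrays of (precision, recall) pairs tracing the PRD curve."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    # Sweep lambda = tan(angle) over (0, inf) via angles in (0, pi/2).
    angles = np.linspace(1e-6, np.pi / 2 - 1e-6, num_angles)
    lam = np.tan(angles)
    # alpha(lambda) = sum_i min(lambda * p_i, q_i); beta = alpha / lambda.
    precision = np.minimum(lam[:, None] * p[None, :], q[None, :]).sum(axis=1)
    recall = precision / lam
    return precision, recall
```

Identical distributions reach precision ≈ recall ≈ 1 somewhere on the curve, while distributions with disjoint support yield precision 0 everywhere, separating "sample quality" from "mode coverage".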
Self-Supervised GANs via Auxiliary Rotation Loss
- Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lucic, N. Houlsby
- Computer Science, IEEE/CVF Conference on Computer Vision and…
- 27 November 2018
This work allows the networks to collaborate on the task of representation learning, while being adversarial with respect to the classic GAN game, and takes a step towards bridging the gap between conditional and unconditional GANs.
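The auxiliary self-supervised task is rotation prediction: each image is rotated by 0°, 90°, 180°, or 270°, and a classifier head must recover which rotation was applied. A generic sketch of building that auxiliary batch is shown below; the function name and batching scheme are illustrative, not taken from the paper's code.

```python
import numpy as np

def rotation_task(batch):
    """batch: iterable of (H, W, C) images.
    Returns rotated images and rotation labels in {0, 1, 2, 3},
    where label r means a rotation of r * 90 degrees."""
    images, labels = [], []
    for img in batch:
        for r in range(4):
            # np.rot90 rotates in the (H, W) plane by r quarter turns.
            images.append(np.rot90(img, k=r))
            labels.append(r)
    return np.stack(images), np.array(labels)
```

Training the discriminator to solve this task alongside the adversarial game gives it a representation-learning objective that does not depend on class labels.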
Fast and Provably Good Seedings for k-Means
This work proposes a simple yet fast seeding algorithm that produces *provably* good clusterings even *without assumptions* on the data, and allows for a favourable trade-off between solution quality and computational cost, speeding up k-means++ seeding by up to several orders of magnitude.
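For reference, the baseline being accelerated is classic k-means++ (D²) seeding: each new center is sampled with probability proportional to its squared distance from the nearest center chosen so far. The numpy sketch below shows that baseline, not the paper's faster MCMC-based sampler, which avoids the full distance recomputation per step.

```python
import numpy as np

def kmeans_pp_seeding(X, k, rng=None):
    """X: (n, d) data matrix. Returns k seed centers drawn by D^2 sampling."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]            # first center: uniform at random
    d2 = ((X - centers[0]) ** 2).sum(axis=1)  # squared distance to nearest center
    for _ in range(k - 1):
        # Sample the next center proportionally to d2 (D^2 weighting).
        idx = rng.choice(n, p=d2 / d2.sum())
        centers.append(X[idx])
        d2 = np.minimum(d2, ((X - X[idx]) ** 2).sum(axis=1))
    return np.stack(centers)
```

Each round of this baseline touches every point, which is the cost the proposed seeding sidesteps.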
A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark
Representation learning promises to unlock deep learning for the long tail of vision tasks without expensive labelled datasets. Yet, the absence of a unified evaluation for general visual…
On Mutual Information Maximization for Representation Learning
- M. Tschannen, Josip Djolonga, Paul K. Rubenstein, S. Gelly, Mario Lucic
- Computer Science, ICLR
- 31 July 2019
This paper argues, and provides empirical evidence, that the success of these methods cannot be attributed to the properties of MI alone, and that they strongly depend on the inductive bias in both the choice of feature extractor architectures and the parametrization of the employed MI estimators.
The GAN Landscape: Losses, Architectures, Regularization, and Normalization
- Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, S. Gelly
- Computer Science, ArXiv
- 5 June 2018
This work reproduces the current state of the art in GANs from a practical perspective, discusses common pitfalls and reproducibility issues, and goes beyond it by fairly exploring the GAN landscape.