f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
TLDR
It is shown that any f-divergence can be used for training generative neural samplers, and the benefits of various choices of divergence function for training complexity and for the quality of the obtained generative models are discussed.
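The bound the paper builds on, sketched here for reference, with f* the Fenchel conjugate of f:

```latex
% Variational lower bound on any f-divergence; f-GAN turns it into
% a minimax objective between sampler Q_theta and critic T_omega:
D_f(P \,\|\, Q_\theta) \;\ge\; \sup_{T}\,
  \mathbb{E}_{x \sim P}\big[T(x)\big]
  - \mathbb{E}_{x \sim Q_\theta}\big[f^{*}(T(x))\big]
\qquad
\min_\theta \max_\omega\;
  \mathbb{E}_{x \sim P}\big[T_\omega(x)\big]
  - \mathbb{E}_{x \sim Q_\theta}\big[f^{*}(T_\omega(x))\big]
```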
Occupancy Networks: Learning 3D Reconstruction in Function Space
TLDR
This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction methods that encodes a description of the 3D output at infinite resolution without excessive memory footprint, and validates that the representation can efficiently encode 3D structure and can be inferred from various kinds of input.
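A minimal sketch of the representation, assuming PyTorch; layer sizes and names are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class OccupancyNetwork(nn.Module):
    """Implicit shape representation: maps a 3D query point plus a
    conditioning code (from the input observation) to an occupancy
    probability, so resolution is limited only by where you query."""

    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, code):
        # points: (B, N, 3) query locations; code: (B, code_dim)
        code = code.unsqueeze(1).expand(-1, points.shape[1], -1)
        logits = self.mlp(torch.cat([points, code], dim=-1))
        return torch.sigmoid(logits.squeeze(-1))  # (B, N) occupancies

# A mesh can then be extracted at any resolution, e.g. by running
# marching cubes on the field thresholded at 0.5.
```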
On feature combination for multiclass object classification
  • P. Gehler, S. Nowozin
  • Computer Science
  • IEEE 12th International Conference on Computer…
  • 1 September 2009
TLDR
Several models that aim at learning the correct weighting of different features from training data are studied, including multiple kernel learning as well as simple baseline methods, and ensemble methods inspired by Boosting are derived.
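A minimal sketch of the simplest combination baseline studied, assuming scikit-learn: one base kernel per feature channel, combined with fixed convex weights (MKL would learn the weights instead):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def combined_kernel(feats_a, feats_b, betas):
    """Convex combination K = sum_m beta_m * K_m, one RBF kernel
    per feature channel, with betas on the simplex."""
    return sum(b * rbf_kernel(Xa, Xb)
               for b, Xa, Xb in zip(betas, feats_a, feats_b))

# Two hypothetical feature channels for the same 100 images.
X_color, X_shape = np.random.rand(100, 32), np.random.rand(100, 64)
y = np.random.randint(0, 5, size=100)

betas = [0.5, 0.5]  # uniform averaging baseline
K = combined_kernel([X_color, X_shape], [X_color, X_shape], betas)
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.predict(K[:5]))
```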
Which Training Methods for GANs do actually Converge?
TLDR
This paper describes a simple yet prototypical counterexample showing that, in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. It extends convergence results to more general GANs and proves local convergence for simplified gradient penalties even if the generator and data distributions lie on lower-dimensional manifolds.
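One of the simplified gradient penalties analyzed there is the R1 regularizer, which penalizes discriminator gradients on real data only; a minimal sketch, assuming PyTorch:

```python
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    """R1 regularizer: (gamma / 2) * E[ ||grad_x D(x)||^2 ] over real
    data, added to the discriminator loss to stabilize training."""
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images).sum()
    (grad,) = torch.autograd.grad(scores, real_images, create_graph=True)
    return 0.5 * gamma * grad.pow(2).flatten(1).sum(1).mean()
```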
Instructing people for training gestural interactive systems
TLDR
The results of the qualitative and quantitative analysis indicate that the choice of modality has a significant impact on the performance of the learnt gesture recognition system, particularly in terms of correctness and coverage.
Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift
TLDR
A large-scale benchmark of existing state-of-the-art methods on classification problems is presented, evaluating the effect of dataset shift on accuracy and calibration and finding that traditional post-hoc calibration does indeed fall short, as do several other previous methods.
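Calibration in such benchmarks is commonly summarized by the expected calibration error; a minimal numpy sketch of the standard binned estimator (the bin count is an illustrative choice):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted average of |accuracy - confidence| gaps,
    where weights are the fraction of samples falling in each bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# confidences: max softmax probability per example,
# correct: boolean array (prediction == label).
```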
DeepCoder: Learning to Write Programs
TLDR
The approach is to train a neural network to predict, from input-output examples, properties of the program that generated them, and to use these predictions to augment search techniques from the programming-languages community, including enumerative search and an SMT-based solver.
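A toy sketch of the guided-search idea, with a hypothetical four-primitive DSL; in the paper the primitive probabilities come from a network reading the input-output examples:

```python
from itertools import product

DSL = {"sort": sorted, "reverse": lambda xs: xs[::-1],
       "head": lambda xs: xs[0], "last": lambda xs: xs[-1]}

def run(program, x):
    for prim in program:
        x = DSL[prim](x)
    return x

def guided_search(examples, primitive_probs, max_depth=2):
    """Enumerate compositions of primitives, trying combinations of
    high-probability primitives first."""
    ranked = sorted(DSL, key=lambda p: -primitive_probs.get(p, 0.0))
    for depth in range(1, max_depth + 1):
        for program in product(ranked, repeat=depth):
            try:
                if all(run(program, x) == y for x, y in examples):
                    return program
            except (IndexError, TypeError):
                continue  # program ill-typed on these inputs
    return None

examples = [([3, 1, 2], [3, 2, 1]), ([5, 4], [5, 4])]
print(guided_search(examples, {"sort": 0.9, "reverse": 0.8}))
# -> ('sort', 'reverse')
```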
DSAC — Differentiable RANSAC for Camera Localization
TLDR
DSAC is applied to the problem of camera localization, where deep learning has so far failed to improve on traditional approaches, and it is demonstrated that directly minimizing the expected loss of the output camera poses, robustly estimated by RANSAC, yields an increase in accuracy.
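The key move, sketched minimally in PyTorch: replace RANSAC's hard argmax over hypotheses with probabilistic selection, so the expected task loss is differentiable (hypothesis scores and per-hypothesis losses are taken as given here):

```python
import torch

def dsac_expected_loss(scores, losses):
    """Soft hypothesis selection: instead of keeping only the single
    best-scoring hypothesis (argmax, non-differentiable), select
    according to a softmax distribution over scores and minimize the
    expected loss under that distribution."""
    probs = torch.softmax(scores, dim=0)  # selection probabilities
    return (probs * losses).sum()         # expected task loss

# scores: learned quality of each sampled pose hypothesis,
# losses: pose error of each hypothesis w.r.t. ground truth.
scores = torch.randn(64, requires_grad=True)
losses = torch.rand(64)
dsac_expected_loss(scores, losses).backward()
```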
Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
TLDR
Adversarial Variational Bayes (AVB) is presented, a technique for training Variational Autoencoders with arbitrarily expressive inference models; an auxiliary discriminative network allows the maximum-likelihood problem to be rephrased as a two-player game, establishing a principled connection between VAEs and Generative Adversarial Networks (GANs).
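The game, sketched from the paper's setup: the auxiliary discriminator is trained to tell pairs (x, z) from the inference model apart from pairs with z drawn from the prior, and at optimality it recovers the intractable density-ratio term of the ELBO:

```latex
% Optimal discriminator recovers the log density ratio:
T^{*}(x, z) \;=\; \log q_\phi(z \mid x) \;-\; \log p(z)
% Encoder and decoder then maximize the ELBO with T^* plugged in
% for the otherwise intractable KL term:
\max_{\theta, \phi}\;
  \mathbb{E}_{p_{\mathcal{D}}(x)}\,\mathbb{E}_{q_\phi(z \mid x)}
  \big[\, \log p_\theta(x \mid z) \;-\; T^{*}(x, z) \,\big]
```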
The Numerics of GANs
TLDR
This paper analyzes the numerics of common algorithms for training Generative Adversarial Networks (GANs) and designs a new algorithm that overcomes the identified limitations and has better convergence properties.
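The proposed algorithm is consensus optimization: both players additionally descend the squared norm of the joint gradient field, damping the rotational dynamics that make simultaneous gradient steps cycle. A minimal sketch on the bilinear toy game min_x max_y x*y:

```python
def consensus_step(x, y, lr=0.1, gamma=0.5):
    """One consensus-optimization step for the game min_x max_y x*y.
    Joint gradient field: v = (dL/dx, -dL/dy) = (y, -x); regularizer
    0.5 * ||v||^2 = 0.5 * (x**2 + y**2) has gradient (x, y)."""
    vx, vy = y, -x
    rx, ry = x, y
    return x - lr * (vx + gamma * rx), y - lr * (vy + gamma * ry)

x, y = 1.0, 1.0
for _ in range(200):
    x, y = consensus_step(x, y)
print(x, y)  # spirals in to the equilibrium (0, 0); with gamma = 0
             # the same iteration fails to converge
```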