Publications
ERFNet: Efficient Residual Factorized ConvNet for Real-Time Semantic Segmentation
TLDR
A deep architecture that runs in real time while providing accurate semantic segmentation is proposed, along with a novel layer that uses residual connections and factorized convolutions to remain efficient while retaining remarkable accuracy.
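As a rough illustration of the layer the ERFNet TLDR describes, the sketch below combines a residual connection with factorized (1D) convolutions in PyTorch; the class name, channel count, and exact layer ordering are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class FactorizedResidualBlock(nn.Module):
    """Residual block using factorized 3x1 + 1x3 convolutions (illustrative)."""
    def __init__(self, channels: int):
        super().__init__()
        # A 3x3 convolution is factorized into a 3x1 followed by a 1x3
        # convolution, keeping the receptive field while reducing cost.
        self.conv3x1 = nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0))
        self.conv1x3 = nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1))
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.conv3x1(x))
        out = self.bn(self.conv1x3(out))
        # Residual connection: add the input back before the final activation.
        return self.relu(out + x)

# Example: a 128-channel feature map passes through the block with its shape unchanged.
x = torch.randn(1, 128, 64, 128)
print(FactorizedResidualBlock(128)(x).shape)  # torch.Size([1, 128, 64, 128])
```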
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
TLDR
SegFormer is presented: a simple, efficient, yet powerful semantic segmentation framework that unifies Transformers with lightweight multilayer perceptron (MLP) decoders and shows excellent zero-shot robustness on Cityscapes-C.
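A minimal sketch of a lightweight all-MLP decoder in the spirit of the SegFormer TLDR is given below: per-stage linear projections, upsampling to a common resolution, and a linear fusion layer. Channel sizes, class count, and names are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPDecoder(nn.Module):
    """Fuses multi-scale encoder features with per-pixel linear layers (illustrative)."""
    def __init__(self, in_channels=(32, 64, 160, 256), embed_dim=256, num_classes=19):
        super().__init__()
        # 1x1 convolutions act as per-pixel linear (MLP) projections.
        self.proj = nn.ModuleList([nn.Conv2d(c, embed_dim, 1) for c in in_channels])
        self.fuse = nn.Conv2d(embed_dim * len(in_channels), embed_dim, 1)
        self.classify = nn.Conv2d(embed_dim, num_classes, 1)

    def forward(self, features):
        # Upsample every projected stage to the resolution of the first stage.
        target = features[0].shape[-2:]
        ups = [F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
               for p, f in zip(self.proj, features)]
        return self.classify(self.fuse(torch.cat(ups, dim=1)))

# Example with dummy multi-scale encoder outputs at 1/4, 1/8, 1/16, and 1/32 scale.
feats = [torch.randn(1, c, 128 // s, 128 // s)
         for c, s in zip((32, 64, 160, 256), (4, 8, 16, 32))]
print(MLPDecoder()(feats).shape)  # torch.Size([1, 19, 32, 32])
```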
Invertible Conditional GANs for image editing
TLDR
This work evaluates encoders that invert the mapping of a cGAN, i.e., map a real image into a latent space and a conditional representation, which makes it possible to reconstruct and modify real images of faces by conditioning on arbitrary attributes.
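The sketch below illustrates the encoder-based inversion the TLDR describes: one encoder maps a real image to a latent code z, another to an attribute (conditional) vector y, and feeding an edited y back through a pretrained conditional generator modifies the image. All module definitions, sizes, and the attribute index are illustrative stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Small convolutional encoder mapping an image to a vector (illustrative)."""
    def __init__(self, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 16 * 16, out_dim))

    def forward(self, x):
        return self.net(x)

# A pretrained conditional generator G(z, y) is assumed; here a stand-in MLP.
generator = nn.Sequential(nn.Linear(100 + 10, 3 * 64 * 64), nn.Tanh())

encoder_z = Encoder(100)   # image -> latent code
encoder_y = Encoder(10)    # image -> conditional attribute representation

image = torch.randn(1, 3, 64, 64)           # placeholder for a real face image
z, y = encoder_z(image), encoder_y(image)
y_edited = y.clone()
y_edited[0, 3] = 1.0                         # flip one attribute, e.g. "smiling"
edited = generator(torch.cat([z, y_edited], dim=1)).view(1, 3, 64, 64)
```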
Dreaming to Distill: Data-Free Knowledge Transfer via DeepInversion
TLDR
DeepInversion is introduced, a new method for synthesizing images from the image distribution used to train a deep neural network, which optimizes the input while regularizing the distribution of intermediate feature maps using information stored in the batch normalization layers of the teacher.
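A minimal sketch of the batch-normalization regularizer the DeepInversion TLDR describes follows: synthesized inputs are optimized so that the mean and variance of each BN layer's input match the running statistics stored in the teacher, alongside a classification loss for the desired labels. The tiny teacher network, loss weight, and step count are illustrative assumptions.

```python
import torch
import torch.nn as nn

# A tiny frozen "teacher" with BN layers (stand-in for a pretrained network).
teacher = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10)).eval()

bn_losses = []

def bn_stats_hook(module, inputs, output):
    x = inputs[0]
    mean = x.mean(dim=(0, 2, 3))
    var = x.var(dim=(0, 2, 3), unbiased=False)
    # Penalize deviation of the synthesized batch's feature statistics from the
    # running mean/variance stored in this BN layer of the teacher.
    bn_losses.append(((mean - module.running_mean) ** 2).sum()
                     + ((var - module.running_var) ** 2).sum())

for m in teacher.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.register_forward_hook(bn_stats_hook)

inputs = torch.randn(8, 3, 32, 32, requires_grad=True)  # images being synthesized
targets = torch.randint(0, 10, (8,))                     # desired class labels
optimizer = torch.optim.Adam([inputs], lr=0.05)

for _ in range(10):  # a few optimization steps for illustration
    bn_losses.clear()
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(teacher(inputs), targets) + 0.01 * sum(bn_losses)
    loss.backward()
    optimizer.step()
```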
Road Detection Based on Illuminant Invariance
By using an onboard camera, it is possible to detect the free road surface ahead of the ego-vehicle. Road detection is of high relevance for autonomous driving, road departure warning, and supporting driver-assistance systems.
Learning the Number of Neurons in Deep Networks
TLDR
This paper proposes to use a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron, and shows that this approach can reduce the number of parameters by up to 80% while retaining or even improving the network accuracy.
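The sketch below shows a group-lasso style penalty in the spirit of this TLDR: each group collects all incoming weights of one neuron (one output channel or row), so whole neurons can be driven toward zero during training. The function name, model, and weighting factor are illustrative assumptions.

```python
import torch
import torch.nn as nn

def group_sparsity_penalty(model: nn.Module) -> torch.Tensor:
    """Sum of per-neuron L2 norms, i.e. a group-lasso penalty (illustrative)."""
    penalty = torch.zeros(())
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            # One group per output neuron: the L2 norm of its incoming weights.
            w = module.weight.flatten(start_dim=1)
            penalty = penalty + w.norm(dim=1).sum()
    return penalty

# Example: add the penalty to the task loss during training.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y) + 1e-3 * group_sparsity_penalty(model)
loss.backward()
```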
Learning Image Matching by Simply Watching Video
TLDR
An unsupervised learning-based approach to the ubiquitous computer vision problem of image matching that achieves performance surprisingly comparable to traditional, empirically designed methods.
See through Gradients: Image Batch Recovery via GradInversion
TLDR
It is shown that gradients encode a surprisingly large amount of information, such that all the individual images can be recovered with high fidelity via GradInversion, even for complex datasets, deep networks, and large batch sizes.
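A minimal sketch of gradient-matching recovery in the spirit of the GradInversion TLDR is shown below: dummy images are optimized so that the gradients they induce on a model match the gradients observed from a real batch. The model, sizes, step count, and the use of known labels are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
params = list(model.parameters())

# Gradients "observed" from a real (here simulated) training batch.
real_x, real_y = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
real_grads = torch.autograd.grad(
    nn.functional.cross_entropy(model(real_x), real_y), params)

# Optimize dummy images so that their gradients match the observed ones.
dummy_x = torch.randn(4, 3, 32, 32, requires_grad=True)
optimizer = torch.optim.Adam([dummy_x], lr=0.1)
for _ in range(20):  # a few steps for illustration
    optimizer.zero_grad()
    dummy_grads = torch.autograd.grad(
        nn.functional.cross_entropy(model(dummy_x), real_y),
        params, create_graph=True)
    # Squared distance between the dummy gradients and the observed gradients.
    loss = sum(((dg - rg) ** 2).sum() for dg, rg in zip(dummy_grads, real_grads))
    loss.backward()
    optimizer.step()
```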
Compression-aware Training of Deep Networks
TLDR
It is shown that accounting for compression during training allows us to learn models that are much more compact than, yet at least as effective as, those obtained with state-of-the-art compression techniques.
Less Is More: Towards Compact CNNs
TLDR
This work shows that, by incorporating sparse constraints into the objective function, it is possible to decimate the number of neurons during the training stage, thereby reducing the number of parameters and the memory footprint of the network, which is desirable at test time.
...
...