Publications
Wide Residual Networks
TLDR: This paper conducts a detailed experimental study of the architecture of ResNet blocks and proposes a novel architecture in which the depth of residual networks is decreased and their width is increased; the resulting networks, called wide residual networks (WRNs), are far superior to their commonly used thin and very deep counterparts.
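A minimal sketch of the basic wide residual block, assuming PyTorch and the pre-activation BN-ReLU-conv ordering used in the paper; the widening factor `k` multiplies the channel count of each group, and the layer sizes here are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WideBasicBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, dropout=0.0):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, padding=1, bias=False)
        self.dropout = nn.Dropout(dropout)
        # 1x1 projection when the identity path changes shape
        self.shortcut = (nn.Conv2d(in_ch, out_ch, 1, stride, bias=False)
                         if stride != 1 or in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))
        out = self.conv2(self.dropout(F.relu(self.bn2(out))))
        return out + self.shortcut(x)

# A WRN-n-k stacks such blocks with widths (16k, 32k, 64k).
block = WideBasicBlock(16, 16 * 8)  # widening factor k = 8
y = block(torch.randn(1, 16, 32, 32))
```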
End-to-End Object Detection with Transformers
TLDR: This work presents a new method that views object detection as a direct set prediction problem, and demonstrates accuracy and run-time performance on par with the well-established and highly optimized Faster R-CNN baseline on the challenging COCO object detection dataset.
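The set-prediction view hinges on a bipartite matching between predicted queries and ground-truth objects. A minimal sketch of that matching step, assuming a class-probability cost only (the full DETR cost also adds box L1 and generalized-IoU terms):

```python
import torch
from scipy.optimize import linear_sum_assignment

def match(pred_logits, tgt_labels):
    """pred_logits: (num_queries, num_classes); tgt_labels: (num_targets,)."""
    prob = pred_logits.softmax(-1)       # (Q, C)
    cost = -prob[:, tgt_labels]          # (Q, T): more confident = cheaper
    q_idx, t_idx = linear_sum_assignment(cost.detach().numpy())
    return q_idx, t_idx                  # one prediction per ground-truth object

logits = torch.randn(100, 92)            # 100 queries, 91 classes + "no object"
rows, cols = match(logits, torch.tensor([3, 17, 17]))
```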
Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
TLDR: This work shows that, by properly defining attention for convolutional neural networks, this type of information can be used to significantly improve the performance of a student CNN by forcing it to mimic the attention maps of a powerful teacher network.
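A minimal sketch of activation-based attention transfer, assuming the common variant where the attention map is the channel-wise sum of squared activations, L2-normalized before comparison; the AT loss is added to the usual task loss with a weight:

```python
import torch
import torch.nn.functional as F

def attention_map(feat):                 # feat: (B, C, H, W)
    amap = feat.pow(2).mean(dim=1)       # collapse channels -> (B, H, W)
    return F.normalize(amap.flatten(1))  # L2-normalize per sample

def at_loss(student_feat, teacher_feat):
    diff = attention_map(student_feat) - attention_map(teacher_feat)
    return diff.pow(2).sum(dim=1).mean()

s = torch.randn(4, 64, 32, 32)   # student activations
t = torch.randn(4, 256, 32, 32)  # teacher activations (channel counts may differ)
loss = at_loss(s, t)
```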
Learning to compare image patches via convolutional neural networks
TLDR: This paper shows how to learn a general similarity function for comparing image patches directly from image data, a task of fundamental importance for many computer vision problems, and opts for a CNN-based model trained to account for a wide variety of changes in image appearance.
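A minimal sketch of the "2-channel" variant, where the two patches are stacked as input channels of a single CNN; the exact layer configuration here is an assumption, and the paper also studies siamese and pseudo-siamese variants:

```python
import torch
import torch.nn as nn

class TwoChannelNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 96, 7, stride=3), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(96, 192, 5), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(192, 256, 3), nn.ReLU(),
        )
        self.decision = nn.Linear(256, 1)  # scalar similarity score

    def forward(self, patch_a, patch_b):
        x = torch.cat([patch_a, patch_b], dim=1)  # stack as channels
        return self.decision(self.features(x).flatten(1))

net = TwoChannelNet()
score = net(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
```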
A MultiPath Network for Object Detection
TLDR: Three modifications to the standard Fast R-CNN object detector are tested: skip connections that give the detector access to features at multiple network layers, a foveal structure that exploits object context at multiple resolutions, and an integral loss function with a corresponding network adjustment that improves localization.
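A minimal sketch of the integral-loss idea, under the assumption that a proposal's class target flips to background when its IoU with the matched ground truth falls below the threshold, with losses averaged over a range of thresholds:

```python
import torch
import torch.nn.functional as F

def integral_loss(logits, labels, ious,
                  thresholds=(0.50, 0.55, 0.60, 0.65, 0.70, 0.75)):
    """logits: (N, C); labels: (N,) matched classes; ious: (N,) overlap with match."""
    total = 0.0
    for u in thresholds:
        # proposals below the threshold count as background (class 0 here)
        tgt = torch.where(ious >= u, labels, torch.zeros_like(labels))
        total = total + F.cross_entropy(logits, tgt)
    return total / len(thresholds)

loss = integral_loss(torch.randn(8, 21), torch.randint(1, 21, (8,)), torch.rand(8))
```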
Scaling the Scattering Transform: Deep Hybrid Networks
We use the scattering network as a generic and fixed initialization of the first layers of a supervised hybrid deep network. We show that early layers do not necessarily need to be learned, providing...
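A minimal sketch of such a hybrid network, assuming the kymatio library: the scattering front-end is fixed (no learned parameters) and only the head on top of it is trained. With J=2 and 32x32 inputs, Scattering2D yields 81 coefficients per input channel at 8x8 resolution:

```python
import torch
import torch.nn as nn
from kymatio.torch import Scattering2D

scattering = Scattering2D(J=2, shape=(32, 32))  # fixed, no learned parameters
head = nn.Sequential(                           # only these layers are trained
    nn.Conv2d(3 * 81, 256, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(256, 10),
)

x = torch.randn(2, 3, 32, 32)
s = scattering(x)               # (2, 3, 81, 8, 8)
logits = head(s.flatten(1, 2))  # merge channel and scattering-path axes
```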
DiracNets: Training Very Deep Neural Networks Without Skip-Connections
TLDR: A simple Dirac weight parameterization is proposed, which allows very deep plain networks to be trained without explicit skip-connections while achieving nearly the same performance as networks with skip-connections.
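A minimal sketch of the Dirac parameterization, assuming the simplified form W_hat = a*I + b*W, where I is the identity (Dirac delta) convolution kernel; the paper additionally normalizes W, which is omitted here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiracConv2d(nn.Module):
    def __init__(self, channels, ksize=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, ksize, ksize) * 0.01)
        self.a = nn.Parameter(torch.ones(1))  # scale of the identity path
        self.b = nn.Parameter(torch.ones(1))  # scale of the learned path
        delta = torch.zeros_like(self.weight)
        nn.init.dirac_(delta)                 # identity convolution kernel
        self.register_buffer("delta", delta)

    def forward(self, x):
        w_hat = self.a * self.delta + self.b * self.weight
        return F.conv2d(x, w_hat, padding=self.weight.shape[-1] // 2)

layer = DiracConv2d(16)
y = layer(torch.relu(torch.randn(1, 16, 32, 32)))  # plain stack, no skip-connection
```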
Scattering Networks for Hybrid Representation Learning
TLDR: It is demonstrated that the early layers of CNNs do not necessarily need to be learned and can instead be replaced with a scattering network; using hybrid architectures, this fact is exploited to train hybrid GANs that generate images.
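One plausible reading of the hybrid-GAN setup, sketched loosely and not to be taken as the paper's exact configuration: a standard deconvolutional generator paired with a discriminator whose first layers are a fixed scattering transform.

```python
import torch
import torch.nn as nn
from kymatio.torch import Scattering2D

generator = nn.Sequential(
    nn.ConvTranspose2d(100, 128, 4), nn.ReLU(),                       # 1x1 -> 4x4
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 8x8
    nn.ConvTranspose2d(64, 1, 4, stride=4), nn.Tanh(),                # -> 32x32
)

scattering = Scattering2D(J=2, shape=(32, 32))            # fixed front-end
disc_head = nn.Sequential(nn.Flatten(), nn.Linear(81 * 8 * 8, 1))

z = torch.randn(2, 100, 1, 1)
fake = generator(z)                 # (2, 1, 32, 32)
score = disc_head(scattering(fake)) # only the generator and head are trained
```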
A MRF shape prior for facade parsing with occlusions
TLDR: A new shape prior formalism for the segmentation of rectified facade images is proposed; it combines the simplicity of split grammars with unprecedented expressive power and demonstrates state-of-the-art results on a number of facade segmentation datasets.
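A minimal sketch of the kind of grid MRF energy such priors build on, assuming per-pixel unary label costs plus a Potts pairwise smoothness term; the paper's actual prior additionally encodes split-grammar structure, which is not captured here:

```python
import numpy as np

def mrf_energy(labels, unary, pairwise_weight=1.0):
    """labels: (H, W) integer label map; unary: (H, W, L) per-pixel label costs."""
    h, w = labels.shape
    e = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # Potts term: penalize label changes between 4-connected neighbours
    e += pairwise_weight * (labels[:, 1:] != labels[:, :-1]).sum()
    e += pairwise_weight * (labels[1:, :] != labels[:-1, :]).sum()
    return e

unary = np.random.rand(32, 32, 5)    # 5 facade classes (wall, window, door, ...)
labels = unary.argmin(axis=2)        # greedy unary-only labelling
print(mrf_energy(labels, unary))
```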
Benchmarking Deep Learning Frameworks for the Classification of Very High Resolution Satellite Multispectral Data
TLDR: The experimental results demonstrate the great potential of advanced deep-learning frameworks for the supervised classification of high resolution multispectral remote sensing data.